content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
SVG (Scalable Vector Graphics) specifies vector-based graphics in XML format. In order to work with it, you need to have some basic understanding of HTML and basic XML.
SVG is used to specify vector-based graphics for the Web. Each element and attribute in SVG files can be animated. SVG is a W3C recommendation. It incorporates with other W3C standards, like the DOM and XSL.
Advantages of using SVG¶
Using SVG over other image formats, such as JPEG and GIF, has many advantages. Particularly:
- SVG images can be generated and modified with any text editor.
- SVG images can be scripted, indexed, searched, and compressed.
- You can print SVG images with high quality at any resolution.
- SVG images can be scaled and zoomed.
- SVG graphics don’t lose any quality in the case where they are zoomed or resized.
- SVG is an open standard.
Creating SVG Images¶
You can create SVG images with any text editor. But creating them with a drawing program, like Inkscape, is more suitable. | https://www.w3docs.com/learn-html/svg-intro.html | 2020-10-20T06:27:12 | CC-MAIN-2020-45 | 1603107869933.16 | [] | www.w3docs.com |
Articles
- How to clear smarty cache while making your own templates
- Why you have 4 different plugins for WHMCS WordPress integration
- WHmpress asking to verify again and again.
- Why my security questions are not coming to cart page?
- How to enable country currency feature in WHMpress?
- FAQs | http://docs.whmpress.com/docs/whmpress/common-questions-faq/ | 2020-10-20T05:45:12 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.whmpress.com |
Controlling Costs and Monitoring Queries with CloudWatch Metrics and Events
Workgroups allow you to set data usage control limits per query or per workgroup, set up alarms when those limits are exceeded, and publish query metrics to CloudWatch.
In each workgroup, you can:
Configure Data usage controls per query and per workgroup, and establish actions that will be taken if queries breach the thresholds.
View and analyze query metrics, and publish them to CloudWatch. If you create a workgroup in the console, the setting for publishing the metrics to CloudWatch is selected for you. If you use the API operations, you must enable publishing the metrics. When metrics are published, they are displayed under the Metrics tab in the Workgroups panel. Metrics are disabled by default for the primary workgroup.
Topics | https://docs.aws.amazon.com/athena/latest/ug/control-limits.html | 2020-10-20T06:38:54 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.aws.amazon.com |
Proxy server overview
You use a proxy server to control the connections that are allowed from your VPC or VNet and prevent unattended connections initiated from your environment.
Proxy servers can be used for:
- FreeIPA backups: Backups created on an hourly basis are uploaded to cloud storage S3/ADLS Gen2.
- Parcel downloads: Although CDP currently only supports pre-warmed images, it is a requirement to download parcels from archive.cloudera.com when an upgrade is performed.
- Cluster Connectivity Manager (CCM): The autossh process that maintains the reverse SSH tunnel.
For our purposes, because we are addressing an environment with no internet setup, we will only use a proxy server when CCM is being used. | https://docs.cloudera.com/management-console/cloud/proxy/topics/mc-proxy-server-overview.html | 2020-10-20T06:53:04 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.cloudera.com |
Authentication API
Couchbase Server supports authentication via external domains.
Authenticating Externally
Enterprises frequently centralize directory services, allowing all user-authentication to be handled by a single server or server-group. LDAP is frequently used in support of such centralization. The authentication handled in this way is therefore external to Couchbase Server.
Couchbase Server supports external authentication. Users are registered as external, for authentication purposes. When such users pass their credentials to Couchbase Server, Couchbase Server recognizes the user as external, and duly passes the credentials to the external authentication facility: if the authentication succeeds there, Couchbase Server is informed, and the user is given appropriate access, based on the roles and privileges on Couchbase Server that they have been assigned.
LDAP Groups
LDAP supports groups, of which multiple users can be members. Couchbase Server supports the association of LDAP groups with Couchbase-Server groups: a user successfully authenticated on an LDAP server may have their LDAP group information duly returned to Couchbase Server. If Couchbase Server has configured an association between one or more of the user’s LDAP groups and corresponding groups defined on Couchbase Server, the user is assigned the roles and privileges for the corresponding Couchbase-Server groups.
Configuration Options
Couchbase provides a recommended REST method for simple and expedited configuration of LDAP-based authentication. This is described in Configure LDAP.
Alternatively, a legacy REST API for establishing SASL administrator credentials can be used. Note that this requires prior, manual set-up of saslauthd for the cluster: see Configure saslauthd. | https://docs.couchbase.com/server/current/rest-api/rest-authentication.html | 2020-10-20T06:50:58 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.couchbase.com |
The Multi-Tenant w/ Subdomains mode works similarly to the Multi-Tenant strategy, but each workspace is accessed via its subdomain.
The term workspace is just a label, you can rename it to anything you find appropriate to your business model. Examples are Organizations, Companies, Teams, etc.
Each workspace has its own Users, Audit Logs, Settings, and Entities.
After a new user signs up via the root domain, they are asked to create a new workspace and assign a subdomain to it.
The user will be set as admin and redirected to the workspace's subdomain.
Users that sign-up via the subdomain are accessing a workspace that already exists and already has an admin.
In this case, new users will need the admins' approval to join.
You can override this behavior on the code and assign default permissions for users.
The file that contains this logic is
backend/src/services/auth/authService.ts.
Read the Architecture > Security section for more details.
If users came to the subdomain by invitation, they will have the roles the admin previously assigned to them. In that case, they will go straight to the application.
Users can switch, create, edit or delete workspaces on the Workspaces page that can be accessed via the User's menu. Permission to edit and delete a specific workspace depends on the role the user has on that workspace. | https://docs.scaffoldhub.io/features/tenant/multi-tenant-subdomains | 2020-10-20T05:14:00 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.scaffoldhub.io |
Basic information about a packet being received or already completed and awaiting processing, including memory pointers to its data in the circular receive FIFO buffer.
#include <
rail_types.h>
Basic information about a packet being received or already completed and awaiting processing, including memory pointers to its data in the circular receive FIFO buffer.
This packet information refers to remaining packet data that has not already been consumed by RAIL_ReadRxFifo().
- Note
- Because the receive FIFO buffer is circular, a packet might start near the end of the buffer and wrap around to the beginning of the buffer to finish, hence the distinction between the first and last portions. Packets that fit without wrapping only have a first portion (firstPortionBytes == packetBytes and lastPortionData will be NULL).
Definition at line
2594 of file
rail_types.h.
Field Documentation
◆ firstPortionBytes
The number of bytes in the first portion.
Definition at line
2598 of file
rail_types.h.
◆ firstPortionData
The pointer to the first portion of packet data containing firstPortionBytes number of bytes.
Definition at line
2599 of file
rail_types.h.
◆ lastPortionData
The pointer to the last portion of a packet, if any; NULL otherwise.
The number of bytes in this portion is packetBytes - firstPortionBytes.
Definition at line
2602 of file
rail_types.h.
◆ packetBytes
The number of packet data bytes available to read in this packet.
Definition at line
2596 of file
rail_types.h.
◆ packetStatus
The packet status of this packet.
Definition at line
2595 of file
rail_types.h.
The documentation for this struct was generated from the following file:
- common/
rail_types.h | https://docs.silabs.com/rail/2.8/struct-r-a-i-l-rx-packet-info-t | 2020-10-20T05:17:13 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.silabs.com |
Connections¶
- class
urllib3.connection.
HTTPConnection(*args, **kw)¶
Bases:
http.client.HTTPConnection,
object
Based on
http.client.HTTPConnectionbut provides an extra constructor backwards-compatibility layer between older and newer Pythons.
Additional keyword parameters are used to configure attributes of the connection. Accepted parameters include:
strict: See the documentation on
urllib3.connectionpool.HTTPConnectionPool
source_address: Set the source address for the current connection.
socket_options: Set specific options on the underlying socket. If not specified, then defaults are loaded from
HTTPConnection.default_socket_optionswhich includes disabling Nagle’s algorithm (sets TCP_NODELAY to 1) unless the connection is behind a proxy.
For example, if you wish to enable TCP Keep Alive in addition to the defaults, you might pass:
HTTPConnection.default_socket_options + [ (socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1), ]
Or you may want to disable the defaults by passing an empty list (e.g.,
[]).
default_socket_options= [(6, 1, 1)]¶
Disable Nagle’s algorithm by default.
[(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)]
- property
host¶
Getter method to remove any trailing dots that indicate the hostname is an FQDN.
In general, SSL certificates don’t include the trailing dot indicating a fully-qualified domain name, and thus, they don’t validate properly when checked against a domain name that includes the dot. In addition, some servers may not expect to receive the trailing dot when provided.
However, the hostname with trailing dot is critical to DNS resolution; doing a lookup with the trailing dot will properly only resolve the appropriate FQDN, whereas a lookup without a trailing dot will search the system’s search domain list. Thus, it’s important to keep the original host around for use only in those cases where it’s appropriate (i.e., when doing DNS lookup to establish the actual TCP connection across which we’re going to send HTTP requests).
request_chunked(method, url, body=None, headers=None)¶
Alternative to the common request method, which sends the body with chunked encoding and not as one block
- class
urllib3.connection.
HTTPSConnection(host, port=None, key_file=None, cert_file=None, key_password=None, strict=None, timeout=<object object>, ssl_context=None, server_hostname=None, **kw)¶
Bases:
urllib3.connection.HTTPConnection
Many of the parameters to this constructor are passed to the underlying SSL socket by means of
urllib3.util.ssl_wrap_socket(). | https://urllib3.readthedocs.io/en/latest/reference/urllib3.connection.html | 2020-10-20T05:36:29 | CC-MAIN-2020-45 | 1603107869933.16 | [] | urllib3.readthedocs.io |
Upgrade processed. The duration of the downtime will depend on the data set and frequency of replications with mobile clients. To avoid this downtime, it is possible to pre-build the view index before directing traffic to the upgraded node (see the view indexing section).:
View Indexing
Sync Gateway uses Couchbase Server views to index and query documents. When Sync Gateway starts, it will publish a Design Document which contains the View definitions (map/reduce functions). For example, the Design Document for Sync Gateway is the following:
{ "views":{ "access":{ "map":"function (doc, meta) { ... }" }, "channels":{ "map":"function (doc, meta) { ... }" }, ... }, "index_xattr_on_deleted_docs":true }
Following the Design Document creation, it must run against all the documents in the Couchbase Server bucket to build the index which may result in downtime. During a Sync Gateway upgrade, the index may also have to be re-built if the Design Document definition has changed. To avoid this downtime, you can publish the Design Document and build the index before starting Sync Gateway by using the Couchbase Server REST API. The following curl commands refer to a Sync Gateway 1.3 → Sync Gateway 1.4 upgrade but they apply to any upgrade of Sync Gateway or Accelerator.
Start Sync Gateway 1.4 with Couchbase Server instance that isn’t your production environment. Then, copy the Design Document to a file with the following.
$ curl localhost:8092/<BUCKET_NAME>/_design/sync_gateway/ > ddoc.json
Create a Development Design Document on the cluster where Sync Gateway is going to be upgraded from 1.3:
$ curl -X PUT<BUCKET_NAME>/_design/dev_sync_gateway/ -d @ddoc.json -H "Content-Type: application/json"
This should return:
{"ok":true,"id":"_design/dev_sync_gateway"}
Run a View Query against the Development Design Document. By default, a Development Design Document will index one vBucket per node, however we can force it to index the whole bucket using the
full_setparameter:
$ curl ""
This may take some time to return, and you can track the index’s progress in the Couchbase Server UI. Note that this will consume disk space to build an almost duplicate index until the switch is made.
Upgrade Sync Gateway. When Sync Gateway 1.4 starts, it will publish the new Design Document to Couchbase Server. This will match the Development Design Document we just indexed, so will be available immediately. | https://docs.couchbase.com/sync-gateway/2.0/upgrade.html | 2020-10-20T06:25:36 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.couchbase.com |
Theme Documentation
HostCluster is a Hosting WordPress Theme and is one of our most valuable Server related business themes. hostcluster -> HostCluster Theme Panel -> Demo Importer
Demo-sliders are located into the downloaded package from themeforest
(*) - The sliders are automatically imported when the Demo Importer is fired. After completing the import the site will contain all demo data + revolution sliders
If you want to change the general Options of the Theme, go to your WordPress Admin Dashboard Area to HostCluster Theme Panel. Here you have a tabbed Navigation where you can change a lot of Options of your new Theme::
HostCluster Theme comes with a lot of custom widgets which can be found in your WordPress Admin Area under
Appearance > Widgets.
HostCluster comes with
WBBakery Page Builder Plugin included, so any customer of the theme can use this amazing drag and drop page builder.
Besides default WBBakery Page Builder shortcodes, HostCluster comes with
+30 custom shortcodes.
HostCluster comes a
huge list of shortcodes integrated directly into
WBB. | https://docs.modeltheme.com/hostcluster/ | 2020-10-20T07:22:35 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.modeltheme.com |
Date: Tue, 20 Oct 2020 06:48:54 +0000 (GMT) Message-ID: <668984529.14860.1603176534055@df68ed866f50> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_14859_254988816.1603176534054" ------=_Part_14859_254988816.1603176534054 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Contents:=20
The easiest way to reuse a recipe is to change its inputs in Flow View.<= br>
You can also create copies of recipes within the same flow.
Steps:
You can reuse a recipe by creating a copy of the flow that contains it a= nd then replacing the recipes inputs with different sources.
Steps:
You can download a recipe in text form in the following ways:
NOTE: A downloaded recipe is in a text form of Wrangle (a domain-specific language for data transformation)= . In this form, it cannot be used in the application. Downloaded rec= ipes are for archival purposes only. | https://docs.trifacta.com/exportword?pageId=135013710 | 2020-10-20T06:48:54 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.trifacta.com |
Grid Displays
Cloudera Machine Learning supports native grid displays of DataFrames across several languages.
Python
Using DataFrames with the pandas package requires per-session activation:
import pandas as pd pd.DataFrame(data=[range(1,100)])
For PySpark DataFrames, use pandas and run
df.toPandas() on a PySpark DataFrame. This will bring
the DataFrame into local memory as a pandas DataFrame.
R
In R, DataFrames will display as grids by default. For example, to view the Iris data set, you would just use:
iris
Similar to PySpark, bringing Sparklyr data into local memory with
as.data.framewill output a grid display.
sparkly_df %>% as.data.frame
Scala
Calling the
display() function on an existing
dataframe will trigger a collect, much like
df.show().
val df = sc.parallelize(1 to 100).toDF() display(df) | https://docs.cloudera.com/machine-learning/1.0/visualizations/topics/ml-grid-displays.html | 2020-10-20T05:14:23 | CC-MAIN-2020-45 | 1603107869933.16 | [] | docs.cloudera.com |
AWS Mobile SDK for Android Developer Guide
Welcome to the AWS Mobile SDK for Android Developer Guide. This guide will help you start developing Android applications using Amazon Web Services.
If you're new to the AWS Mobile SDK, you'll probably want to look first at What is the AWS Mobile SDK for Android? and Getting Started with the AWS Mobile SDK for Android. These topics explain what the AWS Mobile SDK includes, how to set up the SDK, and how to get started using AWS services from an Android application.
- What is the AWS Mobile SDK for Android?
- Set Up the AWS Mobile SDK for Android
- Getting Started with the AWS Mobile SDK for Android
- Authenticate Users with Amazon Cognito Identity
- Sync User Data with Amazon Cognito Sync
- Track App Usage Data with Amazon Mobile Analytics
- Store and Retrieve Files with Amazon S3
- Store and Retrieve App Data in Amazon DynamoDB
- Process Streaming Data with Amazon Kinesis and Firehose
- Execute Code On Demand with Amazon Lambda
- Understand Natural Language and Trigger Business Workflows with Amazon Lex
- Add Text to Speech Capability to your Android App with Amazon Polly
- Create Push Notification Campaigns with Amazon Pinpoint
- The AWS Mobile SDK for Android can be installed here.
- For more information about the AWS SDK for Android, including a complete list of supported AWS products, see the AWS Mobile SDK product page.
- The SDK reference documentation includes the ability to browse and search code included with the SDK. It provides thorough documentation and usage examples. You can find it at AWS SDK for Android API Reference.
- Post questions and feedback at the Mobile Developer Forum.
- Source code and sample applications are available at AWS Mobile SDK for Android sample repository. | http://docs.aws.amazon.com/mobile/sdkforandroid/developerguide/?r=7002 | 2016-12-02T22:19:11 | CC-MAIN-2016-50 | 1480698540698.78 | [] | docs.aws.amazon.com |
Show in Contents
Add to Favorites
Home: Revit Architecture 2010 User's Guide
Adding Values to a Color Scheme Definition
Color Schemes
Using a Color Scheme in a Section View
Rooms and Areas
>
Color Schemes
>
Applying a Color Scheme
In the Project Browser, right-click the floor plan view or section view to apply a color scheme to, and select Properties.
In the Instance Properties dialog, click in the Color Scheme cell.
In the Edit Color Scheme dialog, under Schemes, select a category and color scheme.
For information on creating a new color scheme, see
Creating a Color Scheme
.
Click OK.
For Color Scheme Location, select one of the following values:
Background
: Applies the color scheme to the background of the view only. For example, in a floor plan view, it applies the color scheme to the floor only. In a section view, it applies the color scheme to the background walls or surfaces only. The color scheme is not applied to foreground elements in the view.
Foreground
: Applies the color scheme to all model elements in the view.
Click OK.
Related topic
Color Schemes
Adding a Color Scheme Legend
Creating a Color Scheme
Adding Values to a Color Scheme Definition
Please send us your comment about this page | http://docs.autodesk.com/REVIT/2010/ENU/Revit%20Architecture%202010%20Users%20Guide/RAC/files/WS73099cc142f48755c31bc10e05da804b-7fbc.htm | 2014-03-07T10:26:35 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.autodesk.com |
Message-ID: <1074528023.21313.1394188021884.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_21312_113947556.1394188021883" ------=_Part_21312_113947556.1394188021883 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
During each analysis, the notif= ications are computed for each user who has subscribed to notifications= . Then, in an asynchronous way, these notifications are sent to users = by emails.
To set the delay between processing of notification queue, set the
=
sonar.notifications.delay property (in seconds) in SO=
NAR_HOME/conf/sonar.properties. Note that the server has to be restart=
ed for the new value to be taken into account.
To configure the email server, go to Settings > General Setti= ngs > General > Email.
Check also the Server base URL property at Sett= ings > General Settings > General > General to make sure = that links in those notification emails will redirect to the right SonarQub= e server URL. | http://docs.codehaus.org/exportword?pageId=231736645 | 2014-03-07T10:27:01 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.codehaus.org |
Help Center
Local Navigation
SIP User Password configuration setting
Description
This setting specifies the SIP user password that a BlackBerry® device uses to authenticate to your organization's SIP proxy server.
Usage
Configure this setting if you want to create a default value for all users.
If the user types a password. | http://docs.blackberry.com/en/admin/deliverables/10872/SIP_User_Passwordc_604747_11.jsp | 2014-03-07T10:34:15 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.blackberry.com |
Update Guide
Local Navigation
Backing up data and freeing storage space
To update software, you might need to free application storage.
- Back up smartphone data to your media card
- Restore smartphone data from your media card
- View the amount of available storage space on your smartphone
- Delete a language
- Delete an application
- Delete an appointment, meeting, or alarm
- Delete browsing information
Next topic: Back up smartphone data to your media card
Previous topic: About downloading apps with the BlackBerry App World
Was this information helpful? Send us your comments. | http://docs.blackberry.com/fr-fr/smartphone_users/deliverables/31989/Backing_up_data_free_storage_space_61_1736047_11.jsp | 2014-03-07T10:33:27 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.blackberry.com |
Scan a QR code or an NFC tag
The NFC feature might not be available, depending on your wireless service provider, your administrator's settings, and your BlackBerry device model.
When you scan a QR code by using the Smart Tags app, your device saves the information as a smart tag.
When you scan an NFC tag, your device presents options for viewing or saving the information. You can view or save the information in the Smart Tags app. You might also be able to view the info in another application such as the BlackBerry Browser.
Do any of the following:
- To scan a QR code, open the Smart Tags app. Tap
. Hold your device so that all four corners of the QR code appear on your screen.
- To scan an NFC tag, tap the back of your device against the NFC tag.
Delete a smart tag from your device
- In the Smart Tags app, highlight one or more tags that you want to delete.
- Tap
.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/50757/mes1335537272226.jsp | 2014-03-07T10:31:45 | CC-MAIN-2014-10 | 1393999642134 | [array(['mpr1354374387726_lowres_en-us.png',
'BlackBerry 10 device showing the back of the device directly above an NFC tag. Blue circles indicate a data exchange. BlackBerry 10 device showing the back of the device directly above an NFC tag. Blue circles indicate a data exchange.'],
dtype=object) ] | docs.blackberry.com |
requirementsto check if the required conditions are met.
Example:
Each On each panel the with a DynamicInstallerRequirementValidator is assigned to will be evaluated according to the requirements given as nested elements in
<dynamicinstallerrequirements>.
installerrequirement attributes:
... | http://docs.codehaus.org/pages/diffpages.action?originalId=230396971&pageId=230397358 | 2014-03-07T10:30:26 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.codehaus.org |
public class ScheduledExecutorFactoryBean extends ExecutorConfigurationSupport implements FactoryBean<ScheduledExecutorService>
FactoryBeanthat sets up a
ScheduledExecutorService(by default: a
ScheduledThreadPoolExecutor) and exposes it for bean references.
Allows for registration of
ScheduledExecutorTasks,
automatically starting the
ScheduledExecutorService on initialization and
cancelling it on destruction of the context. In scenarios that only require static
registration of tasks at startup, there is no need to access the
ScheduledExecutorService instance itself in application code at all;
ScheduledExecutorFactoryBean is then just being used for lifecycle integration.
Note that
ScheduledExecutorService
uses a
Runnable instance that is shared between repeated executions,
in contrast to Quartz which instantiates a new Job for each execution.
WARNING:
Runnables submitted via a native
ScheduledExecutorService are removed from
the execution schedule once they throw an exception. If you would prefer
to continue execution after such an exception, switch this FactoryBean's
"continueScheduledExecutionAfterException"
property to "true".
setPoolSize(int),
ExecutorConfigurationSupport.setThreadFactory(java.util.concurrent.ThreadFactory),
ScheduledExecutorTask,
ScheduledExecutorService,
ScheduledThreadPoolExecutor, Serialized Form
logger
afterPropertiesSet,
public ScheduledExecutorFactoryBean()
public void setPoolSize(int poolSize)
public void setScheduledExecutorTasks(ScheduledExecutorTask... scheduledExecutorTasks)
ScheduledExecutorService.schedule(java.lang.Runnable, long, java.util.concurrent.TimeUnit),
ScheduledExecutorService.scheduleWithFixedDelay(java.lang.Runnable, long, long, java.util.concurrent.TimeUnit),
ScheduledExecutorService.scheduleAtFixedRate(java.lang.Runnable, long, long, java.util.concurrent.TimeUnit)
public void setContinueScheduledExecutionAfterException(boolean continueScheduledExecutionAfterException)
Default is "false", matching the native behavior of a
ScheduledExecutorService.
Switch this flag to "true" for exception-proof execution of each task,
continuing scheduled execution as in the case of successful execution.
ScheduledExecutorService.scheduleAtFixedRate(java.lang.Runnable, long, long, java.util.concurrent.TimeUnit)
public void setExposeUnconfigurableExecutor(boolean exposeUnconfigurableExecutor)
Default is "false", exposing the raw executor as bean reference. Switch this flag to "true" to strictly prevent clients from modifying the executor's configuration.
Executors.unconfigurableScheduledExecutorService(java.util.concurrent.ScheduledExecut ScheduledExecutorService createExecutor(int poolSize, ThreadFactory threadFactory, RejectedExecutionHandler rejectedExecutionHandler)
ScheduledExecutorServiceinstance.
The default implementation creates a
ScheduledThreadPoolExecutor.
Can be overridden in subclasses to provide custom
ScheduledExecutorService instances.
poolSize- the specified pool size
threadFactory- the ThreadFactory to use
rejectedExecutionHandler- the RejectedExecutionHandler to use
ExecutorConfigurationSupport.afterPropertiesSet(),
ScheduledThreadPoolExecutor
protected void registerTasks(ScheduledExecutorTask[] tasks, ScheduledExecutorService executor)
ScheduledExecutorTaskson the given
ScheduledExecutorService.
tasks- the specified ScheduledExecutorTasks (never empty)
executor- the ScheduledExecutorService to register the tasks on.
protected Runnable getRunnableToSchedule(ScheduledExecutorTask task)
Wraps the task's Runnable in a
DelegatingErrorHandlingRunnable
that will catch and log the Exception. If necessary, it will suppress the
Exception according to the
"continueScheduledExecutionAfterException"
flag.
task- the ScheduledExecutorTask to schedule
public ScheduledExecut<ScheduledExecutorService>
null)
FactoryBeanNotInitializedException
public Class<? extends ScheduledExecut<ScheduledExecut<ScheduledExecutorService>
FactoryBean.getObject(),
SmartFactoryBean.isPrototype() | http://docs.spring.io/spring-framework/docs/current/javadoc-api/org/springframework/scheduling/concurrent/ScheduledExecutorFactoryBean.html | 2014-03-07T10:28:01 | CC-MAIN-2014-10 | 1393999642134 | [] | docs.spring.io |
Disk Provisioning on Google Cloud Platform (GCP)
The steps below will help you enable dynamic provisioning of Portworx volumes in your GCP cluster.
Prerequisites
Key-value store
Portworx uses a key-value store for its clustering metadata. Please have a clustered key-value database (etcd or consul) installed and ready. For etcd installation instructions refer this doc.
Firewall
Ensure ports 9001-9022 are open between the nodes that will run Portworx. Your nodes should also be able to reach the port KVDB is running on (for example etcd usually runs on port 2379).
Create a GCP cluster
To manage and auto provision GCP disks, Portworx needs access to the GCP Compute Engine API. There are two ways to do this.
Using instance privileges
Using an account file
This json file needs to be made available on any GCP instance that will run Portworx. Place this file under a
/etc/pwx/ directory on each GCP instance. For example,
/etc/pwx/gcp.json.
Install
If you used an account file above, you will have to configure the Portworx installation arguments to access this file by way of it’s environmnet variables. In the installation arguments for Portworx, pass in the location of this file via the environment variable
GOOGLE_APPLICATION_CREDENTIALS. For example, use
-e GOOGLE_APPLICATION_CREDENTIALS=/etc/pwx/gcp.json.
If you installing on Kuberenetes, you can use a secret to mount
/etc/pwx/gcp.json into the Portworx Daemonset and then expose
GOOGLE_APPLICATION_CREDENTIALS as an env in the Daemonset.
Follow these instructions to install Portworx based on your container orchestration environment.
Disk template
Portworx takes in a disk spec which gets used to provision GCP persistent disks dynamically.
A GCP disk template defines the Google persistent disk properties that Portworx will use as a reference. There are 2 ways you can provide this template to Portworx.
1. Using a template specification
The spec follows the following format:
"type=<GCP disk type>,size=<size of disk>"
- type: Following two types are supported
- pd-standard
- pd-ssd
- size: This is the size of the disk in GB
See GCP disk for more details on above parameters.
Examples:
"type=pd-ssd,size=200"
"type=pd-standard,size=200", "type=pd-ssd,size=100"
2. Using existing GCP disks as templates
You can also reference an existing GCP disk as a template. On every node where Portworx is brought up as a storage node, a new GCP disk(s) identical to the template will be created.
For example, if you created a template GCP disk called px-disk-template-1, you can pass this in to Portworx as a parameter as a storage device.
Ensure that these disks are created in the same zone as the GCP node group.=pd-ssd=pd-standard,size=200", "-s", "type=pd-ssd,size=100", ". | https://2.3.docs.portworx.com/cloud-references/auto-disk-provisioning/gcp/ | 2020-05-25T08:24:06 | CC-MAIN-2020-24 | 1590347388012.14 | [] | 2.3.docs.portworx.com |
"How To" Guides
This section contains "how to" guides for some specific questions frequently asked. While some of them are common development tasks and not directly related to the ABP Commercial, we think it is useful to have some concrete examples those directly work with your ABP Commercial based applications. | https://docs.abp.io/en/commercial/latest/how-to/index | 2020-05-25T09:04:40 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.abp.io |
Artifacts and guidance for Azure DevTest Labs
We are pleased to announce that the team, spearheaded by Derek Keeler and Josh Garverick, has achieved their goal. They set out to investigate how to extend the lab and complement the gallery by adding a few missing artifacts to engage with the vibrant artifact community.
Artefacts on Azure/azure-devtestlab
You can access, use, and enhance these artefacts today:
- Install Google Chrome
- JDKs for Linux
- Set up Web Deploy server
- Install Sublime Text
- Install Slack
- NPM Package
- Selenium Grid
- Apt-Get
- Yum Package
- gVim (Cream) for Windows
The guidance will be included in a series of post, starting with Getting started with DevTest Labs Custom Artifact Development and an exciting article that the team is working on.
Here’s a rough snippet to wet your appetite:
Team who made it all happen
Darren Rich, Derek Keeler, Esteban Garcia, Igor Shcheglovitov, Josh Garverick, Martin Kulov, Oscar Garcia Colon, Rui Melo, Tommy Sundling, and Xiaoying Guo.
What our product owner had to say
Thank you, Derek, Josh, Oscar, Darren, Tommy and everyone, for your wonderful contribution to this project! The artifacts you've built have made DevTest Labs a very competitive service in Azure, and I kept hearing from customers how they liked those artifacts out-of-the-box whenever I talked with them! As Derek mentioned, you are super welcome to add more artifacts into the DevTest Labs GitHub repo whenever you have time and interests in doing that. It's a completely open-source and community-contributed GitHub Repo. Hopefully I'll see your pull requests in the future, and hopefully we will get chance to work together again. Best wished to all of you! - Xiaoying Guo
We need your feedback
Here are some ways to connect with the team:
- Add a comment below
- Send us a tweet @almrangers | https://docs.microsoft.com/en-us/archive/blogs/visualstudioalmrangers/artifacts-and-guidance-for-azure-devtest-labs | 2020-05-25T09:17:52 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
..
When this parameter is specified any warnings that might normally appear during the uninstallation and removal of the domain controller will be suppressed to allow the cmdlet to complete its operation. This parameter can be useful to include when scripting uninstallation.
Forces the removal of a domain controller. Use this parameter to force the uninstall of AD DS if you need to remove the domain controller and do not have connectivity to other domain controllers within the domain topology... whether to whether to remove application partitions during the removal of AD DS from a domain controller... | https://docs.microsoft.com/en-us/powershell/module/addsdeployment/uninstall-addsdomaincontroller?view=winserver2012r2-ps&redirectedfrom=MSDN | 2020-05-25T09:27:45 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.microsoft.com |
Construct a new OnmsHTTPOptions object.
HTTP data to be passed when POSTing
HTTP headers to be passed to the request.
HTTP parameters to be passed on the URL.
The server to use if no server is set on the HTTP implementation.
How long to wait for ReST calls to time out.
How long to wait for ReST calls to time out.
Add a URL parameter. Returns the OnmsHTTPOptions object so it can be chained.
the parameter's key
the parameter's value
Options to be used when making HTTP ReST calls. | https://docs.opennms.org/opennms-js/branches/mbrooks.drift2/opennms-js/classes/onmshttpoptions.html | 2020-05-25T09:14:43 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.opennms.org |
xlwings with other Office Apps¶
xlwings can also be used to call Python functions from VBA within Office apps other than Excel (like Outlook, Access etc.).
Note
New in v0.12.0 and still in a somewhat early stage that involves a bit of manual work.
Currently, this functionality is only available on Windows for UDFs. The
RunPython functionality
is currently not supported.
How To¶
As usual, write your Python function and import it into Excel (see VBA: User Defined Functions (UDFs)).
Alt-F11to get into the VBA editor, then right-click on the
xlwings_udfsVBA module and select
Export File.... Save the
xlwings_udfs.basfile somewhere.
Switch into the other Office app, e.g. Microsoft Access and click again
Alt-F11to get into the VBA editor. Right-click on the VBA Project and
Import File..., then select the file that you exported in the previous step. Once imported, replace the app name in the first line to the one that you are using, i.e.
Microsoft Accessor
Microsoft Outlooketc. so that the first line then reads:
#Const App = "Microsoft Access"
Now import the standalone xlwings VBA module (
xlwings.bas). You can find it in your xlwings installation folder. To know where that is, do:
>>> import xlwings as xw >>> xlwings.__path__
And finally do the same as in the previous step and replace the App name in the first line with the name of the corresponding app that you are using. You are now able to call the Python function from VBA.
Config¶
The other Office apps will use the same global config file as you are editing via the Excel ribbon add-in. When it makes sense,
you’ll be able to use the directory config file (e.g. you can put it next to your Access or Word file) or you can hardcode
the path to the config file in the VBA standalone module, e.g. in the function
GetDirectoryConfigFilePath
(e.g. suggested when using Outlook that doesn’t really have the same concept of files like the other Office apps).
NOTE: For Office apps without file concept, you need to make sure that the
PYTHONPATH points to the directory with the
Python source file.
For details on the different config options, see Config. | https://docs.xlwings.org/en/0.18.0/other_office_apps.html | 2020-05-25T07:54:11 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.xlwings.org |
cd <rhq-working-copy-root> find . -name target | xargs rm -rf
Performing Full Builds and Module Specific Builds
Purging and Updating the Database Schema
Updating the Storage Node DB (4.8+)
Building an upgrade database
Building Oracle without Running Tests or Validating Schemas
The dev Storage Node (4.8+)
Deploying Multiple Dev Storage Nodes (4.8+)
GWT Compilation For Different Browsers
GWT Compilation Memory Requirements
GWT Compilation for Different Locales.
You can customize how Maven performs its builds by creating for yourself a settings.xml file and placing that file in the $HOME/.m2 directory. There is an example settings.xml checked into the Git repository in the /etc/m2/ directory.
/etc/m2/settings.xml provides the necessary configuration data. For the configuration present in this file to take effect, you must put it in a location where Maven will be looking for it. By default, that is $HOME/.m2/settings.xml..
When you do a full build, you must do it from the root directory (i.e. <rhq-working-copy-root>/). Do not run "mvn install" from <rhq-working-copy-root>/modules because that will not perform a complete full build - specifically, if a 3rd party dependency has been updated since your last build, you will not pick up the changes to the dependencies. Building out of root is required to rebuild the project metadata that contains the dependency version information.
If you are switching between branches that change the version you are going to build (such as going from master branch to an older release branch, e.g. going from 4.1.0 to 4.0.0), you should manually delete all target directories using your standard operating system commands. This is to avoid having to work around dependency issues when maven attempts to clean. On UNIX, you can do this manual removal of all target directories by doing this:
cd <rhq-working-copy-root> find . -name target | xargs rm -rf:
cd <rhq-working-copy-root>/modules/core/dbutils mvn -Ddbsetup install
:
<!-- defaults for datasource used by integration tests - these may be overridden in ~/.m2/settings.xml --> <rhq.test.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhq</rhq.test.ds.connection-url> <rhq.test.ds.driver-class>org.postgresql.Driver</rhq.test.ds.driver-class> <rhq.test.ds.xa-datasource-class>org.postgresql.xa.PGXADataSource</rhq.test.ds.xa-datasource-class> <rhq.test.ds.user-name>rhqadmin</rhq.test.ds.user-name> <rhq.test.ds.password>rhqadmin</rhq.test.ds.password> <rhq.test.ds.type-mapping>PostgreSQL</rhq.test.ds.type-mapping> <rhq.test.ds.server-name>127.0.0.1</rhq.test.ds.server-name> <rhq.test.ds.port>5432</rhq.test.ds.port> <rhq.test.ds.db-name>rhq</rhq.test.ds.db-name> <rhq.test.ds.hibernate-dialect>org.hibernate.dialect.PostgreSQLDialect</rhq.test.ds.hibernate-dialect> <rhq.test.quartz.driverDelegateClass>org.quartz.impl.jdbcjobstore.PostgreSQLDelegate</rhq.test.quartz.driverDelegateClass> <rhq.test.quartz.selectWithLockSQL>SELECT * FROM {0}LOCKS ROWLOCK WHERE LOCK_NAME = ? FOR UPDATE</rhq.test.quartz.selectWithLockSQL> <rhq.test.quartz.lockHandlerClass>org.quartz.impl.jdbcjobstore.StdRowLockSemaphore</rhq.test.quartz.lockHandlerClass> <!-- defaults for datasource used by the dev container build (see dev docs on the 'dev' profile) - these may be overridden in \~/.m2/settings.xml --> <rhq.dev.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhqdev</rhq.dev.ds.connection-url> .user-name>rhqadmin</rhq.dev.ds.user-name> <rhq.dev.ds.password>rhqadmin</rhq.dev.ds.password> <rhq.dev.ds.password.encrypted>1eeb2f255e832171df8592078de921bc</rhq.dev.ds.password.encrypted> <rhq.dev.ds.type-mapping>PostgreSQL</rhq.dev.ds.type-mapping> >.:
mvn -Ddb=dev -Ddbreset
would drop and create the dev DB schema and then run dbsetup to populate it. And:
mvn -Ddb=test -Ddbreset
would drop and create the test DB schema and then run dbsetup to populate it.:
mvn -Pdev -Ddbreset -Dstorage-schema
There are times, particularly for testing, when you will want to build a database that is upgraded from some past release. In the dbutils module we can generate a JON 2.3.1 database and then upgrade it to whatever is in HEAD.
mvn -Ddbreset -Djon.release=2.3.1 -Ddb=test
Run the following command to build oracle without running tests or validating schemas:
mvn --settings settings.xml --activate-profiles enterprise,dist,ojdbc-driver --errors --debug -Ddbsetup-do-not-check-schema=true \ -DskipTests -Drhq.test.db.type=oracle -Dmaven.repo.local=${WORKSPACE}/.m2/repository clean install.
In order to understand things like profiles and modules, you should be familiar with Maven. Read the Maven documentation for more information..
Your $HOME/.m2/settings.xml Maven configuration file can be used to tune how certain things are built. In order to use the dev profile, you should set the rhq.rootDir property to the full path to the directory where RHQ <rhq-working-copy-root> is checked out (e.g. C:/Projects/rhq-src). The dev profile will then use "<rhq.rootDir>/dev-container" as the external container location. Alternatively, if you want your external container to live somewhere other than under the RHQ <rhq-working-copy-root> directory, you can set the rhq.containerDir property to the full path of the directory where you want your external container live..
If the RHQ Server was already running, you do not have to shut it down and restart it when changing plugin code; just rebuild it using -Pdev and the plugin jar will be copied in the appropriate location in the RHQ Server. The RHQ Server will pick up the change, deploy the plugin properly and your agents will then be free to update their plugins to pick up the new one (see the agent's "plugins update" prompt command for one way to do this)..:
<profile> <id>dev</id> <properties> <!-- Set the below prop to the absolute path of your RHQ source dir (e.g. /home/bob/projects/rhq). (${rhq.rootDir}/dev-container will be used as the dev container dir) --> <rhq.rootDir>/home/ips/Projects/rhq</rhq.rootDir> <!-- Alternatively, if you don't want to use the default location of {rhq.rootDir}/dev-container/ for your dev container, then set the below prop to the desired location. --> <!--<rhq.containerDir>C:/home/bob/rhq-dev-container</rhq.containerDir>--> <rhq.dev.ds.connection-url>jdbc:postgresql://127.0.0.1:5432/rhqdev</rhq.dev.ds.connection-url> <rhq.dev.ds.user-name>rhqadmin</rhq.dev.ds.user-name> <rhq.dev.ds.password>rhqadmin</rhq.dev.ds.password> <rhq.dev.ds.type-mapping>PostgreSQL</rhq.dev.ds.type-mapping> > <!-- quartz properties --> > </properties> </profile>:
cd modules/core/dbutils mvn -Pdev -Ddbreset -Dmaven.test.skip=true
The -Pdev activates the dev profile, which tells the dbutils module to use the dev DB, rather than the test DB by default. To use the test DB instead, you can either deactivate the dev profile using -P'!dev' or explicitly tell the dbutils module to use the test DB via -Ddb=test.
And if you ever want to wipe all data from your dev DB and/or upgrade it to the latest schema, run the following commands:
cd modules/core/dbutils mvn -Pdev -Ddbsetup -Dmaven.test.skip=true
The rhq.dev.ds.* should not be confused with the rhq.test.ds.* properties, which define the DB that is used by the domain and server-jar unit tests.
Running multiple nodes relies in part on using localhost aliases. If you are on a Linux platform, you should not have to create the aliases. On other platforms like Mac OS X, you will have to create the aliases which can be done as follows,
$ sudo ifconfig lo0 alias 127.0.0.2 up $ sudo ifconfig lo0 alias 127.0.0.3 up
On RHEL you can explicitly set those up via
$ sudo ifconfig lo add 127.0.0.2 $ sudo ifconfig lo add 127.0.0.3.
$ mvn clean package -Pdev -Drhq.storage.num-nodes=3,
$ mvn -o groovy:execute -Pdev -Dsource=src/main/script/storage_setup.groovy -Drhq.storage.num-nodes=2
This will generate dev-container/rhq-server-2. If we later decide that we want two more nodes, run,
$ mvn -o groovy:execute -Pdev -Dsource=src/main/script/storage_setup.groovy -Drhq.storage.num-nodes=4:
mvn -P'!linux-plugins' -P'!misc-plugins' -P'!validate-plugins':
<properties> <!-- This property is substituted, by the resource plugin during the resources phase, as the value of the user.agent property in RHQDomain.gwt.xml and CoreGUI.gwt.xml. The default value results in these GWT modules being compiled into JavaScript for all supported browsers. To limit compilation to your preferred browser(s) to speed up compile time, specify the gwt.userAgent property on the mvn command line (e.g. -Dgwt.userAgent=gecko1_8) or in your ~/.m2/settings.xml As of GWT 2.5.0, the recognized agents (defined in gwt-user.jar:com/google/gwt/user/UserAgent.gwt.xml) are as follows: ie8: IE8 ie9: IE9 (and IE 10+ but without all advanced features) gecko: FF2 gecko1_8: FF3 safari: Safari/Chrome opera: Opera Multiple agents can be specified as a comma-delimited list, as demonstrated by the default value below. NOTE: we don't compile for Opera normally. --> <gwt.userAgent>ie8,ie9, gecko,gecko1_8,safari,opera</gwt.userAgent> <!-- Override this via mvn command line or your ~/.m2/settings.xml to speed up compilation. --> <gwt.draftCompile>false</gwt.draftCompile> </properties>
Here is what a typical developer's ~/.m2/settings.xml could look like:
<profile> <id>dev</id> <properties> ... <!-- Only gwt-compile JavaScript for Firefox 3.x. --> <gwt.userAgent>gecko1_8</gwt.userAgent> <!-- Enable faster, but less-optimized, gwt compilations. --> <gwt.draftCompile>true</gwt.draftCompile> </properties> </profile>-plugin.extraJvmArgs>-Xms512M -Xmx768M -XX:PermSize=128M -XX:MaxPermSize=256M</gwt-plugin.extraJvmArgs> <gwt-plugin.localWorkers>2</gwt-plugin.localWorkers>
You can limit the locales that the GWT build compiles. This also helps to further reduce the memory requirements of the build. You can put the the following settings in your settings.xml file:
<gwt.locale>en,de</gwt.locale>
This will only compile RHQ with English and German locales. The value is a comma-separated list of locale names.
To skip running tests specify
-DskipTests
on the maven command line. To skip building and running tests specify
-Dmaven.test.skip
on the maven command line.
Please note that you must not use the -Dmaven.test.skip property when using -Dtest, otherwise the unit test will not be executed.
mvn -Pdev -Dintegration.tests test
mvn -Pdev,integration-tests test.
The integration tests will fire up 2 Cassandra nodes, that listen on 127.0.0.1 and 127.0.0.2, You may need to configure 127.0.0.2 manually.
This is described above.
If you get OutOfMemoryErrors due to those Cassandra nodes failing to start up during itests-2 runs, then follow the instructions here:
That page describes some things to set to get it to work - you need to increase things like number of processes a user is allowed to run.
On Windows you must use -Ditest.use-external-storage-node. This is because the nature of Windows prevents happy interaction between the spawned Arquillian, EAP and Cassandra processes.
If performing server (or domain) integration tests against Oracle you must specify the following additional profile:
-Pitest.oracle
Otherwise tests default to Postgres.
In order to attach a debugger to itest code you must specify -Ditest.debug and then attach to port 8798.
In order to run REST API tests your dev container or local RHQ Server must be running. Then run REST testsuite using:
cd modules/integration-tests/rest-api && mvn install
Read the README file in that "rest-api" directory - there are some things you have to setup (like making sure a platform with a specific ID is in inventory, users are created, etc).). | https://docs.jboss.org/author/display/RHQ/Advanced%20Build%20Notes.html | 2020-05-25T08:17:53 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.jboss.org |
Runtime Manager Agent 1.12.2 Release Notes
May 3, 2019
This document describes new features and enhancements, known limitations, issues, and fixes in Anypoint Runtime Manager agent, version 1.12.2.
Fixed Issues
This release contains the following fixed issues:
HTTP 400 response when uploading metrics using Mule agent for flow names with non-ASCII characters.
Upgraded the versions of several dependency libraries. | https://docs.mulesoft.com/release-notes/runtime-manager-agent/runtime-manager-agent-1.12.2-release-notes | 2020-05-25T08:48:49 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.mulesoft.com |
Microsoft Dynamics CRM supports claims based authentication using the WS-Federation (Passive) protocol. Typically, claims are configured with ADFS as the Service Provider to handle authentication requests with the claims provider. Optionally, CRM can use a custom Security Token Service (STS) in order to enable federated authentication. The WSO2 Identity Server provides a secure token service by default. In order to support using the Identity Server with CRM, a custom metadata file needs to be generated and it should be accessible to the CRM claims configuration wizard, which will give CRM the STS passive endpoint and private key for signing of claims. Microsoft Dynamics CRM can be setup with internal claims based authentication, or further secured for external claims based authentication as an Internet Facing Deployment (IFD).
Internet Facing Deployment (IFD) means that the functionality of the application is externally exposed and is outside of your local network. This is used by enterprises to set up their deployment to allow their employees to access the application away from work. Using an Internet Facing Deployment changes the URL structure CRM uses to load organizations, and thus has an effect on the settings required in the Identity Server.
The following must be configured to log in to Microsoft Dynamics CRM using the WSO2 Identity Server.
Configuring user stores
Users need to be configured within the Identity Server in order to perform authentication. This can be done by manually adding users to the Identity Server or connecting directly to an LDAP server. The only requirements are that the user records represented in the Identity Server have a username field in the format of [email protected] or DOMAIN\username in order to correctly log in to CRM, and that username field matches a username field within CRM.
Configuring the service provider
Within WSO2, a service provider needs to be created to represent the Microsoft Dynamics CRM server that requests for tokens. The only two items that must be setup within the service provider configuration are the inbound authentication WS-Federation (Passive) configuration, and the claims configurations. If CRM is also configured for IFD, a service provider needs to be created to represent each organization that requests for tokens due to how CRM handles the organization's URLs.
Within the service provider, in the inbound authentication section, a Passive STS realm must be defined under the WS-Federation (Passive) Configuration area. This value should match the CRM server URL. Typically, it will be in the format (for non-IFD) or (for IFD). For IFD servers, one server provider must be created for each organization, with each one having the specific organization's URL set as the Passive STS Realm in the Inbound WS-Federation authentication settings. Ensure that the trailing "/" is included, as CRM appends this by default to all its endpoints and the values must match exactly.
- Sign in. Enter your username and password to log on to the Management Console.
- Navigate to the Main menu to access the Identity menu. Click Add under Service Providers.
- Fill in the Service Provider Name and provide a brief Description of the service provider.
- Expand the Inbound Authentication Configuration section followed by the WS-Federation (Passive) Configuration section.
- Enter an appropriate value for the Passive STS Realm as explained above.
- Expand the Claim Configuration section. Claims must be configured in order to log the requester into CRM as the correct user. Microsoft Dynamics CRM expects two specific claims returned from the STS. They are as follows.
In order to retrieve these values from WSO2, map the local claim value to the CRM value. In the Subject Claim URI, select the. This example assumes that the the username field and the a
DOMAIN\usernameor
[email protected] field that matches up to a username that exists in the CRM organization that is being accessed.
- Click Update.
Configure Microsoft Dynamics CRM
In order to authenticate with a security token service, CRM expects federation metadata that contains specific details about the service. It requires the certificate that the STS uses to sign the responses as well as the passive STS endpoint for the WSO2 server, in addition to the claims expected. A sample file can be found inside
<IS_HOME>/repository/deployment/server/webapps/mex directory. This file needs to be hosted somewhere accessible to the CRM server. For the purposes of testing this scenario, you can add it to the wwwroot folder for easy access.
Once the metadata XML is in place, and assuming all the certificates have been placed correctly on the servers if they differ between the Identity Server and CRM, claims based authentication can be enabled from the CRM deployment wizard. The federation metadata URL should point to the file that was created above.
On the next screen, select the certificate that is used to encrypt the data sent between the STS and CRM.
Continue through the wizard and apply the final settings. In this example, an IFD CRM environment is used, so IFD needs to be re-enabled at this point from within the CRM deployment manager. Then, perform an IIS reset on the CRM server. Claims based authentication and IFD should now be enabled, and if configured correctly, redirect the user to the WSO2 logon screen when the user navigates to https://<orgname>.crmdomain.com.
- To test out WSO2 Identity Server's passive security token service using a sample, see Testing Identity Server's Passive STS. | https://docs.wso2.com/display/IS570/Logging+in+to+Microsoft+Dynamics+CRM+with+WS-Federation | 2020-05-25T09:17:36 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.wso2.com |
24.07. Transaction categories
Categorising transactions allows you to group transactions together when reviewing or reporting them. Each type of transaction can have its own list of categories. For example customer invoices might have a category “normal” and “urgent”. Inventory adjustements might have categories “expired”, “damaged”, “stocktake”, “annual stockatake” or “monthly stocktake” etc.
From the Special menu, choose Transaction categories… to see this window:
First of all, select the transaction type the categories you create will belong to in the Transaction type drop down list. Customer (which refers to customer invoices) is selected by default so the table will contain all the previously created customer invoice categories.
Click on “New” button to create a new category and this window appears:
Here you enter the details of the category:
- Master category: select the master category that this one will belong to. Master categories are a way of grouping other transaction categories together, even categories of different types, to make reporting easier (see section 11.03. Transaction reports for details of how master transaction categories can be used in reporting). Master transaction categories are fixed in the system and cannot be edited. Those available are:
- Damaged
- Expired
- Found
- Lost
- Stocktake
- Stolen
- Other
- Category code: the code of the transaction category. Not displayed in mSupply but you can filter categories by this so it provides another way to select relevant categories when reporting.
- Category Description: the name of the category, visible throughout mSupply. Also filterable when reporting.
Once you have created transaction categories they will be selectable in a drop down list when you create a new invoice or transaction.
For systems using Remote Synchronisation, Transaction Categories are 'System' data and can only be edited on the Primary Server. Any edits then synchronise to all relevant satellites.
There is a 'server' preference to Require category entry on customer invoices. This would apply to all customer invoices issued for all stores Active on the server.
See the 26.08. How to report by invoice category section for details on how to use the transaction categories in reporting. | http://docs.msupply.org.nz/other_stuff:transaction_categories | 2020-05-25T07:46:23 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.msupply.org.nz |
Released on:
Thursday, January 17, 2019 - 11:00
Notes
A new version of the agent has been released. Follow standard procedures to update your Infrastructure agent.
Features
- Added disable_all_plugins config option that disables all the inventory plugins which don't have their own frequency option specified. Check out the documentation.
- Added cpu_profile config option for creating pprof cpu profiles.
Improvements
- Reduced CPU consumption by 80% on average.
Changes
- Decreased sysctl sampling frequency.
Bug fixes
- Fixed an issue that avoids the agent being installed in old Ubuntu versions. | https://docs.newrelic.com/docs/release-notes/infrastructure-release-notes/infrastructure-agent-release-notes/new-relic-infrastructure-agent-121 | 2020-05-25T09:30:20 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.newrelic.com |
Lightning previewer
The Lightning Components and Applications Previewer along with the Lightning Editor in TWS is a convenient way to check all your Lightning changes directly in the IDE. You don't need anymore to switch between different windows and waste your time.
Once you open any file from a Lightning bundle, you'll see the Lightning Components and Application Previewer. All the files, included in the bundle, are available via tabs below the previewer window.
The previewer is located in the top part of the combined editor, and all the bundle's members are present lower part in the tabs. This means that you can switch between the tabs, and the Lightning previewer will be available for you always. When you make changes in any file in the bundle and build them to your Salesforce Organization, all the updates will be automatically displayed in the previewer in seconds, just after a change is saved on your Org.
In the case, when you are working with a Component bundle, the IDE provides you with a possibility to specify which Lightning Application should be used for previewing the component. You can select the needed Application itself, and specify the appropriate URL parameters for it in the toolbar of the previewer. Also, you have a possibility to type / paste any URL for previewing your Lightning Component, for example, to preview it using a detail view of any record in the Org.
When you are working with an Application bundle, of course, you don't need to select an application for previewing it. So for this case, The Welkin Suite provides you with the ability to just specify additional URL parameters. This setting is also present in the toolbar of the previewer.
The Welkin Suite uses the built-in browser based on the Chromium engine for previewing. At the same time, you can refresh it manually using the Reload button in the top left corner of the previewer. In addition, you can open a preview of your Lightning Application in your default browser directly from the IDE — use the appropriate button in the top right corner of the previewer.
This feature is available for you on the projects which are associated with the Org that has a custom domain. You should enable a custom domain for your organization on Salesforce and then use it during the creation of a project.
In case, if you want to enable a custom domain for your existing project, guide the following steps:
- open the Properties of a project,
- enter the URL of your custom domain into the
URLfield,
- verify the credentials, including the security token,
- click Apply and then OK to save changes,
- save your solution and restart.
The Lighting Previewer will be automatically opened when you open any application file in the IDE.
In addition, you can easily collapse Lightning Previewer, if you do not need it now. To do so, click on the double arrow between the Previewer and the code editor. To expand the Previewer back, click on this button one more time.
You're also able to collapse Lightning previewer by default. To do so, navigate to the Main Menu:
Tools ⇒ Options ⇒ Projects ⇒ Presentation and check the box next to the Collapse Lightning Previewer option.
| https://docs.welkinsuite.com/?id=windows:how_does_it_work:how_to_work_with_built-in_editor:lightning_editor:lightning_previewer&amp;s%5B%5D=lightning&amp;s%5B%5D=previewer&amp;s%5B%5D=applications | 2020-05-25T08:51:16 | CC-MAIN-2020-24 | 1590347388012.14 | [array(['/lib/exe/fetch.php?media=windows:how_does_it_work:how_to_work_with_built-in_editor:lightning_editor:lightning-previewer.png',
'Lightning application previewer Lightning application previewer'],
dtype=object)
array(['/lib/exe/fetch.php?media=windows:how_does_it_work:how_to_work_with_built-in_editor:lightning_editor:previewer.png',
'Collapse the Previewer Collapse the Previewer'], dtype=object)
array(['/lib/exe/fetch.php?media=windows:how_does_it_work:how_to_work_with_built-in_editor:lightning_editor:collapse-visualforce-lightning-previewers.png',
'Collapse Lightning Previewer Collapse Lightning Previewer'],
dtype=object) ] | docs.welkinsuite.com |
Upgrade Portworx on Kubernetes
This guide describes the procedure to upgrade Portworx running as OCI container using talisman.
Upgrade Portworx
To upgrade to the 2.3 release, run the following command:
PXVER='2.3' curl -fsL{PXVER}/upgrade |
2.0.3.7, Portworx, Inc. recommends upgrading directly to
2.1.2or later as this version fixes several issues in the previous build. Please see the release notes page for more details.
Upgrade Stork
On a machine that has kubectl access to your cluster, enter the following commands to download the latest Stork specs:
KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}') PXVER='2.3' curl -fsL -o stork-spec.yaml "{PXVER}?kbver=${KBVER}&comp=stork"
If you are using your own private or custom registry for your container images, add
®=<your-registry-url>to the URL. Example:
KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}') PXVER='2.3' curl -fsL -o stork-spec.yaml "{PXVER}?kbver=${KBVER}&comp=stork®=artifactory.company.org:6555"
Next, apply the spec with:
kubectl apply -f stork-spec.yaml
Upgrade Lighthouse
On a machine that has kubectl access to your cluster, enter the following commands to download the latest Lighthouse specs:
KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}') PXVER='2.3' curl -fsL -o lighthouse-spec.yaml "{PXVER}?kbver=${KBVER}&comp=lighthouse"
If you are using your own private or custom registry for your container images, add
®=<your-registry-url>to the URL. Example:
KBVER=$(kubectl version --short | awk -Fv '/Server Version: /{print $3}') PXVER='2.3' curl -fsL -o lighthouse-spec.yaml "{PXVER}?kbver=${KBVER}&comp=lighthouse®=artifactory.company.org:6555"
Apply the spec by running:
kubectl apply -f lighthouse-spec.0.3.4 image.
PXVER='2.3' curl -fsL{PXVER}/upgrade | bash -s -- -t 2.0.3.4
Airgapped clusters
When upgrading Portworx in Kubernetes using the curl command in examples above, a number of docker images are fetched from container registries on the Internet (e.g. docker.io, gcr.io). If your nodes don’t have access to these registries, you need to first pull the required images in your cluster and then provide the precise image names to the upgrade process.
The below sections outline the exact steps for this.
Step 1: Pull the required images
If you want to upgrade to the latest 2.3 stable release, set the
PX_VERenvironment variable to the following value:
export PX_VER=$(curl -fs | awk -F'=' '/^OCI_MON_TAG=/{print $2}')
NOTE: To upgrade to a specific release, you can manually set the
PX_VERenvironment variable to the desired value. Example:
export PX_VER=2.3.6
Pull the Portworx images:
export PX_IMGS="portworx/oci-monitor:$PX_VER portworx/px-enterprise:$PX_VER portworx/talisman:latest" echo $PX_IMGS | xargs -n1 docker pull
Step 2: Loading Portworx images on your nodes
If you have nodes which have access to a private registry, follow Step 2a: Push to local registry server, accessible by air-gapped nodes.
Otherwise, follow Step 2b: Push directly to nodes using tarball.
Step 2a: Push to local registry server, accessible
Now that you have the images in your registry, continue with Step 3: Start the upgrade.
Step 2b: Push directly to nodes using
Step 3: Start the upgrade
Run the below script to start the upgrade on your airgapped cluster.
# Default image names TALISMAN_IMAGE=portworx/talisman OCIMON_IMAGE=portworx/oci-monitor # Do we have container registry override? if [ "x$REGISTRY" != x ]; then echo $REGISTRY | grep -q / if [ $? -eq 0 ]; then # REGISTRY defines both registry and repository TALISMAN_IMAGE=$REGISTRY/talisman OCIMON_IMAGE=$REGISTRY/oci-monitor else # $REGISTRY contains only registry, we'll assume default repositories TALISMAN_IMAGE=$REGISTRY/portworx/talisman OCIMON_IMAGE=$REGISTRY/portworx/oci-monitor fi fi [[ -z "$PX_VER" ]] || ARG_PX_VER="-t $PX_VER" curl -fsL | bash -s -- -I $TALISMAN_IMAGE -i $OCIMON_IMAGE $ARG_PX_VER
Troubleshooting -it ). | https://2.3.docs.portworx.com/portworx-install-with-kubernetes/operate-and-maintain-on-kubernetes/upgrade/ | 2020-05-25T07:45:00 | CC-MAIN-2020-24 | 1590347388012.14 | [] | 2.3.docs.portworx.com |
ABP Documentation
ABP is an open source application framework focused on ASP.NET Core based web application development, but also supports developing other types of applications.
Explore the left navigation menu to deep dive in the documentation.. | https://docs.abp.io/en/abp/Getting-Started?UI=MVC&DB=EF&Tiered=No | 2020-05-25T07:42:41 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.abp.io |
Step 2: Create. Create a role that AWS Elemental MediaPackage assumes when ingesting source content from Amazon S3.
When you create the role, you choose EC2 as the trusted entity that can assume the role because AWS Elemental MediaPackage isn't available for selection. In Step 3: Modify the Trust Relationship, you change the trusted entity to MediaPackage.
To create the service role for an EC2 trusted entity (IAM console)
Sign in to the AWS Management Console and open the IAM console at
.
In the navigation pane of the IAM console, choose Roles, and then choose Create role.
Choose the AWS service role type, and then choose EC2 trusted entity.
Choose the EC2 use case. Then choose Next: Permissions.
On the Attach permissions policies page, search for and choose the policy that you created in Step 1: Create a Policy. Then choose Next: Tags and Next: Review.
or choose Create policy to open a new browser tab and create a new policy from scratch. For more information, see step 4 in the procedure Creating IAM policies in the IAM User Guide. After you create the policy, close that tab and return to your original tab to select the policy to use for the permissions boundary.
Choose Next: Tags.
(Optional) Add metadata to the user by attaching tags as key-value pairs. For more information about using tags in IAM, see Tagging IAM Entities in the IAM User Guide.
Choose Next: Review.
If possible, enter a role name or role name suffix to help, enter a description for the new role.
Review the role and then choose Create role. | https://docs.aws.amazon.com/mediapackage/latest/ug/setting-up-create-trust-rel-role.html | 2020-05-25T09:24:00 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.aws.amazon.com |
Message-ID: <1017262318.172467.1590396929610.JavaMail.j2ee-conf@bmc1-rhel-confprod1> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_172466_73613527.1590396929609" ------=_Part_172466_73613527.1590396929609 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
This page shows all the devices supported via SNMP Discovery, or= ganized in several ways: by Model, Kind, etc... The full list of devices is= also available in xlsx format and can be downloaded from here.=20
Requesting support for a new SNMP device
If you need BMC Discovery to support a new SNMP device, use the Device Capture capability= to download a zipped MIB that you can forward to BMC Customer Support as p= art of a new support issue. BMC Atrium Discovery engineering will aim to in= corporate support for the new device as a new feature in a coming monthly <= a href=3D"/docs/display/Configipedia/Schedule+and+Roadmap">TKU releases= . You are recommended to raise the case reporting the unsupported SNMP devi= ce, including the zipped MIB as early as possible to maximize the chances o= f its inclusion in the earliest release possible.=20
What is NetworkDevice? SNMPManagedDevice?
NetworkDevice is a device that take a part of traffic flow (Routing/Swit= ching/etc...). SNMPManagedDevice is an endpoint device.=20
In the tables below, some devices have a non-unique and generic model na= me whereas they own a unique sysObjectID. That is expected since BMC Atrium= Discovery queries specific SNMP oids to obtain the correct model name valu= e and if not present, will default to the table listed value.=20
XML API vs SNMP
The XML API discovery is only available for Cisco CIMC and HP iLO manage= ment controllers.=20 | https://docs.bmc.com/docs/exportword?pageId=811783793 | 2020-05-25T08:55:29 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.bmc.com |
Setting up the installation environment
This topic describes how to set up the installation environment.
Before you begin
- Ensure that the system meets the hardware and software requirements listed in System requirements.
- If you do not plan to install PostgreSQL database during the installation of BMC Release Process Management, create a PostgreSQL database, an Oracle database, or a Microsoft SQL database before performing the installation.
- If you plan to install BMC Release Lifecycle Management on a remote computer on the Windows platform, you might want to set the required Terminal Server configuration options.
To set the Terminal Server configuration options (optional)
On a Windows computer, to enable running the installation wizard through a Terminal Services connection or a remote desktop session, you must turn off certain Terminal Server configuration options that pertain to temporary folders.
Access the Terminal Services Configuration console by using one of the following methods:
Select Start > Administrative Tools > Terminal Services (Remote Desktop Services) > Terminal Services Configuration (Remote Desktop Session Host Configuration).
- Select Start > Run. Then type tsconfig.msc and press Enter.
Select Server Settings.
- Disable the following options by setting each to No:
- Delete temporary folders on exit
- Use temporary folders per session
- Restart the computer.
To set up the environment for the BMC Release Process Management installation
Note
The additional instructions on BMC Release Process Management 5.0 troubleshooting are provided in the BMC Release Process Management space.
- If you plan to install BMC Release Process Management silently, notice that settings specified in the code block are case-sensitive. Specify the settings in the correct case to avoid installation failure.
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/ReleasePackageDeploy/50/installing/preparing-for-installation/setting-up-the-installation-environment | 2020-05-25T07:32:42 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.bmc.com |
ARCreateSchema
Note
You can continue to use C APIs to customize your application, but C APIs are not enhanced to support new capabilities provided by Java APIs and REST APIs.
Description
Creates a new form with the indicated name on the specified server. The nine required core fields are automatically associated with the new form.
Privileges
BMC Remedy AR System administrator.
Synopsis
#include "ar.h" #include "arerrno.h" #include "arextern.h" #include "arstruct.h" int ARCreateSchema( ARControlStruct *control, ARNameType name, ARCompoundSchema *schema, ARSchem, form to create. The names of all forms on a given server must be unique.
schema
The type of form to create. The information contained in this definition depends on the form type that you specify..
getListFields
A list of zero or more fields that identifies the default query list data for retrieving form entries. The list can include any data fields except diary fields and long character fields. The combined length of all specified fields, including separator characters, can be as many as 128 bytes (limited by
AR_MAX_SDESC_SIZE). The query list displays the Short-Description core field if you specify
NULL for this parameter (or zero fields). Specifying a
getListFields argument when calling the
ARGetListEntry function overrides the default query list data.
sortList
A list of zero or more fields that identifies the default sort order for retrieving form entries. Specifying a
sortList argument when calling the
ARGetListEntry function overrides the default sort order.
indexList
The set of zero or more indexes to create for the form. You can specify from 1 to 16 fields for each index (limited by
AR_MAX_INDEX_FIELDS). Diary fields and character fields larger than 255 bytes cannot be indexed. for the default view.
helpText
The help text associated with the form. This text can be of any length. Specify
NULL for this parameter if you do not want to associate help text with this object.
owner
The owner for the form. The owner defaults to the user performing the operation if you specify
NULL for this parameter.
changeDiary
The initial change diary associated with the form. form, and a list of zero properties is returned when an
ARGetSchemaField, ARDeleteSchema, ARGetSchema, ARGetListEntry, ARGetListAlertUser, ARSetField, ARSetSchema. See FreeAR for:
FreeARCompoundSchema,
FreeAREntryListFieldList,
FreeARIndexList,
FreeARInternalIdList,
FreeARPermissionList,
FreeARPropList,
FreeARSortList,
FreeARStatusList. | https://docs.bmc.com/docs/ars91/en/arcreateschema-609070940.html | 2020-05-25T09:02:00 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.bmc.com |
. event search properties endpoint.
Data access for OnmsEvent objects. | https://docs.opennms.org/opennms-js/branches/jira.HELM.133/opennms-js/classes/eventdao.html | 2020-05-25T09:19:08 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.opennms.org |
The Maps Importer is an extension for the DataGrab module. With this Extension you can insert entries from you CSV into the Maps Fieldtype and Maps Locator module.
Make sure your system meets the minimum requirements:
When you move all the files to the correct location, Datagrab will handle it from there. After you created the task, you will see next to your field (on the field assignment page) the field settings for the Maps Fieldtype or Maps Locator field. | https://docs.reinos.nl/maps-importer/ | 2020-05-25T07:40:24 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.reinos.nl |
Ordering Questions
Ordering Services
If you already have an active Account at Valice, you can order additional services from within your my.valice.com Account. New customers will create an account as the final step in the order: Navigate to “Order New Services“Select a service to order (only one service per domain may be ordered in a single transaction)If your order […]
Order Renewals
Order […] | https://docs.valice.com/category/ordering-questions/ | 2020-05-25T07:02:04 | CC-MAIN-2020-24 | 1590347388012.14 | [] | docs.valice.com |
1. enPortal Solution Architecture
2. enPortal System Components
The primary functions of enPortal are contained within five system components:
- Request Engine
- Business Logic Engine
- Integration Engine
- Web Resource Proxy and Content Filtering
- Object Database
2.1. Request Engine
The Request Engine serves all requests coming from a user via a web browser. In fact, all external communications with an enPortal system are requested through the Request Engine. The Request Engine’s primary responsibilities are to translate HTTP(S) requests into object requests and to dynamically translate the application-specific results into HTML for transmission to the client web browser. The Request Engine executes within a Servlet/JSP engine; Java Servlets and JSPs are the primary components of the Request Engine. The Request Engine also provides an extra level of access security by verifying that the user is logged in to the system before accepting and servicing the request.
2.2. Business Logic Engine
The Business Logic Engine is responsible for the overall business logic of the system, enPortal’s security, and the storage of system objects. These responsibilities pertain to users, roles, domains, virtual directory access, and content management. Business Logic manages and stores system objects to a chosen object repository/database. The Business Logic Engine runs on the same process (Tomcat as the JSP/Servlet Engine) as the Request Engine.
2.3. Integration Engine
The Integration Engine allows new content Channels to be created and integrated into an enPortal system at runtime. The Integration Engine consists of a Channel classification model and a set of Request Handlers that are implemented as Java Servlets or JSPs. Request Handlers are the public web interfaces into enPortal Channels that service the Channel requests being made from web browser clients. The Integration Engine provides an external interface through the Portal Request Engine that allows HTTP(S) requests to be sent to any plugged-in visual Channel. Upon receipt of a request to render a content Channel, the Integration Engine retrieves the specified Channel (if security allows it) from the enPortal Server and calls the specified Request Handler to render the Channel content.
2.4. Web Application Proxy and Content Filtering
The Web Application Proxy and Content Filtering function facilitates the delivery of and interaction with existing HTTP(S)-based content. It is responsible for applying Single Sign-On (SSO) rules to the retrieval of external HTTP(S) requests, and for manipulating the resulting data streams being returned from an integrated application for control and data customization. The HTTP(S) stream manipulation support within enPortal is both extensive and configurable and is available as a Proxy Channel. A potential example of the use of this function is the removal of an image from an HTML stream as enPortal delivers the HTTP(S) stream to the browser client.
2.5. Object Database
The enPortal Database is a JDBC-compliant RDBMS, and enPortal supports numerous databases, including, for example, Microsoft SQL Server, Oracle, MySQL and DB2. (For a complete list of supported databases, see the System Requirements page.) enPortal ships with an embedded H2 database. The enPortal Database handles mapping between the object-based data model used within enPortal and the relational database model that stores the actual content. | http://docs.edge-technologies.com/docs/enportal/5.6/overview/software_components | 2018-12-10T02:02:49 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.edge-technologies.com |
This example shows how to use our build system to create a simple C++
application which depends on kodo-rlnc. The example consists of 3 files:
main.cpp,
wscript and
waf.
main.cpp contains a very limited amount of code:
It’s basically a main function which prints
Hello Kodo! and exits. In this
example, we include a particular RLNC codec defined in the following header
file:
The include is not used however. Its only purpose is to detect whether or not the include paths for the kodo-rlnc library are configured correctly.
The remaining two files are needed to build the executable.
The
waf file is a complete standalone build system,
whereas the
wscript is the recipe used by
waf to build our example.
The
wscript contains information regarding dependencies and build targets.
The simplest way to get started is to copy the
hello_kodo files to a folder
where you want to develop your application, and then run the standard waf
commands in that folder (the
cp command is Unix-only):
cd kodo-rlnc cp -R examples/hello_kodo/ ~/my_app cd my_app python waf configure python waf build
The build system will download all dependencies, compile some static libraries and finally the example. You can find the compiled executable in the waf build folder, which depends on your operating system:
- Linux:
./build/linux
- Mac OSX:
./build/darwin
- Windows:
./build/win32
You can directly run the executable by executing the appropriate command:
build/linux/hello_kodo build/darwin/hello_kodo build\win32\hello_kodo.exe
You can use this as a starting point for the coming examples, or even your own application.
Note that the currently used version of kodo-rlnc is set in the
resolve
function of the
wscript file like this:
ctx.add_dependency( name='kodo-rlnc', resolver='git', method='semver', major=13, sources=['github.com/steinwurf/kodo-rlnc.git'])
When a new major version is released and you want to update, you can just
modify this version number and run
python waf configure again to get
the chosen version. | http://docs.steinwurf.com/kodo-rlnc/master/tutorial/hello_kodo.html | 2018-12-10T02:50:56 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.steinwurf.com |
Consuming Events
You can consume events from channels or from log files. To consume events, you can consume all events or you can specify an XPath expression that identifies the events that you want to consume. To determine the elements and attributes of an event that you can use in your XPath expression, see Event Schema.
Windows Event Log supports a subset of XPath 1.0. For details on the limitations, see XPath 1.0 limitations.
The following examples show simple XPath expressions.
// The following query selects all events from the channel or log file XPath Query: * // The following query selects all the LowOnMemory events from the channel or log file XPath Query: *[UserData/LowOnMemory] // The following query selects all events with a severity level of 1 (Critical) from the channel or log file XPath Query: *[System/Level=1] // The following query shows a compound expression that selects all events from the channel or log file // where the printer's name is MyPrinter and severity level is 1. XPath Query: *[UserData/*/PrinterName="MyPrinter" and System/Level=1] // The following query selects all events from the channel or log file where the severity level is // less than or equal to 3 and the event occurred in the last 24 hour period. XPath Query: *[System[(Level <= 3) and TimeCreated[timediff(@SystemTime) <= 86400000]]]
You can use the XPath expressions directly when calling the EvtQuery or EvtSubscribe functions or you can use a structured XML query that contains the XPath expression. For simple queries that query events from a single source, using an XPath expression is fine. If the XPath expression is a compound expression that contains more than 20 expressions or you are querying for events from multiple sources, then you must use a structured XML query. For details on the elements of a structured XML query, see Query Schema.
A structured query identifies the source of the events and one or more selectors or suppressors. A selector contains an XPath expressions that selects events from the source and a suppressor contains an XPath expression that prevents events from being selected. You can select events from more than one source. If a selector and suppressor identify the same event, the event is not included in the result.
The following shows a structured XML query that specifies a set of selectors and suppressors.
<QueryList> <Query Id="0"> >
The result set from the query does not contain a snapshot of the events at the time of the query. Instead, the result set includes the events at the time of the query and will also contain all new events that are raised that match the query criteria while you are enumerating the results.
Note
The order of the events is preserved for events that are written by the same thread. However, it is possible for events written by separate threads on different processors of a multiple processor computer to appear out of order.
For details on consuming events, see the following topics:
- Querying for Events
- Subscribing to Events
- Rendering Events
- Formatting Event Messages
- Bookmarking Events
The standard end user tools for consuming event are:
- Event Viewer
- The Windows PowerShell Get-WinEvent cmdlet
- WevtUtil
XPath 1.0 limitations
Windows Event Log supports a subset of XPath 1.0. The primary restriction is that only XML elements that represent events can be selected by an event selector. An XPath query that does not select an event is not valid.). Windows Event Log places the following restrictions on the expression:
-, is supported (on leaf nodes only).
- The Band function is supported. The function performs a bitwise AND for two integer number arguments. If the result of the bitwise AND is nonzero, the function evaluates to true; otherwise, the function evaluates to false.
- The timediff function is supported. The function computes the difference between the second argument and the first argument. One of the arguments must be a literal number. The arguments must use FILETIME representation. The result is the number of milliseconds between the two times. The result is positive if the second argument represents a later time; otherwise, it is negative. When the second argument is not provided, the current system time is used. | https://docs.microsoft.com/en-us/windows/desktop/WES/consuming-events | 2018-12-10T02:10:16 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.microsoft.com |
The Density widget displays the density breakdown in charts for the past seven days for a specific resource.
The Density widget produces a graph depicting the concentration of objects in a particular state as a percentage. It compares the ideal consolidation ratio to the actual consolidation ratio. States that are displayed are Unknown state, Critical state, Immediate state, Warning state, and Normal state.
Where You Find the Density. | https://docs.vmware.com/en/vRealize-Operations-Manager/6.3/com.vmware.vcom.core.doc/GUID-A56B5CE3-8120-4D02-B4A1-5FDE42AA3ED3.html | 2018-12-10T02:42:32 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.vmware.com |
Bicrystal GUI¶
This GUI allows to analyze quantitatively slip transmission across grain boundaries for a single bicrystal.
The Matlab function used to run the Bicrystal GUI is : A_gui_plotGB_Bicrystal.m
This includes:
- It is possible to load bicrystal properties (material, phase, Euler angles of both grains, trace angle…) :
- from the EBSD map GUI (by giving GB number and pressing the button ‘PLOT BICRYSTAL’) ;
- from a YAML config. bicrystal (from the menu, by clicking on ‘Bicrystal, and ‘Load Bicrystal config. file’).
Distribution of all slip transmission parameters¶
It is possible to generate a new window, in which all values of the selected slip transmission parameter are plotted in function of selected slip families. | https://stabix.readthedocs.io/en/latest/bicrystal_gui.html | 2018-12-10T02:29:48 | CC-MAIN-2018-51 | 1544376823236.2 | [] | stabix.readthedocs.io |
Custom signals used by django-registration-redux¶
Much of django-registration-redux’s customizability comes through the ability to write and use registration backends implementing different workflows for user registration. However, there are many cases where only a small bit of additional logic needs to be injected into the registration process, and writing a custom backend to support this represents an unnecessary amount of work. A more lightweight customization option is provided through two custom signals which backends are required backends). Provides the following arguments:
sender
- The backend class used to activate the user.
user
- An instance of
django.contrib.auth.models.Userrepresenting the activated account.
request
- The
HttpRequestin which the account was activated.
registration.signals.
user_registered¶
Sent when a new user account is registered. Provides the following arguments:
sender
- The backend class used to register the account.
user
- An instance of
django.contrib.auth.models.Userrepresenting the new account.
request
- The
HttpRequestin which the new account was registered. | https://django-registration-redux.readthedocs.io/en/latest/signals.html | 2018-12-10T02:28:33 | CC-MAIN-2018-51 | 1544376823236.2 | [] | django-registration-redux.readthedocs.io |
schema-name is specified, then the view is created in the specified database. It is an error to specify both a schema-name and the TEMP keyword on a VIEW, unless the schema-name is "temp". If no schema.
If a column-name list follows the view-name, then that list determines the names of the columns for the view..
SQLite is in the Public Domain. | http://docs.w3cub.com/sqlite/lang_createview/ | 2018-12-10T02:00:57 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.w3cub.com |
FIT Player Eligibility Policy
Introduction
- Player eligibility for international sporting competition can be a complex and often emotive issue. The defining elements are at the very heart of global competition and key issues involved in ascertaining eligibility include aspects of nationality, passport, residency, place of birth, playing history, the respective country’s selection process and even a player’s gender, birth date and age.
- A player’s eligibility to represent a country requires some form of direct link between the individual and the represented country. The International Court of Arbitration for Sport (CAS) has ruled that Legal Nationality and Sporting Nationality may be “different”, one defined in Public Law and the other in Private Law.
- The Federation of International Touch (the Federation) therefore has the authority to decide eligibility criteria for participants in the sport in the international arena as governed by the Federation and Member Country National Touch Associations (NTA). Specific eligibility criteria may differ from Legal Nationality and should be subject to any internationally endorsed anti discrimination or equality policies. The focus of this Eligibility Policy is that the criteria are both fair and equitable to most.
- The Federation also agrees that the basis of International Sport should be just that, about individuals competing for their country against individuals from another country and any competing national team should be a Member Country NTA representative national team.
Application
- This policy applies to all Federation Events classified as Tier 1 to Tier 3 as defined in the Classification of Events policy.
- This policy applies to all participants of Member countries competing at any Federation Event. Member countries are responsible to ensure all individuals representing that Member country meet the eligibility criteria prior to participation at a Federation Event.
- Member countries are encouraged to apply this policy to all events that are not afforded Federation Event status.
- This eligibility policy applies to representative players registered in national teams. The policy does not apply to officials, coaches, referees, support staff or to representative players under eighteen (18) years of age.
Definitions
- Birth Certificate: A formal, statutory document recording the place, date and time of birth of an individual. For the purposes of this policy a certified copy of a Birth Certificate or “Birth Extract” is deemed to be a Birth Certificate.
- Citizenship: A process whereby an immigrant individual achieves the requirements to satisfy a country’s criteria for being a legitimate national.
- Country: A geographic region as defined by the 2008 FIT Constitution.
- Driving Licence: A current, photographic, identification card issued by a government department entitling an individual to drive in a specific country or region.
- Federation Event: A tournament at which international competition occurs in accordance with event classification contained in the Federation Event Classification Policy.
- International competition: Competition between two or more Member National Touch Associations. The competing teams are national representative teams.
- International season: The period from April 1 to July 31 each calendar year during which period Tier 1-3 Federation Events may be scheduled.
- Legal National: An individual player who has been formally recognised by the respective government as a person of origin of a country, on the basis of birth, parentage, residency or other legal criteria pertaining to that country.
- Member: National Touch Associations affiliated with the Federation and they may be Full Members, Associate Members or Federation Members as defined from time to time.
- Open (division): A playing division in which there are only gender qualifications.
- Parent: Either a biological (blood) parent or a legal guardian (adopted) of an individual. For the purpose of this policy a step parent or foster parent is not considered a parent unless they are the legal guardian of the player in question.
- Passport: An official document issued by a government department certifying nationality of a country and used by an individual for international travel.
- Place of Birth: The location of birth as recorded on an individual’s Birth Certificate.
- Represented: An individual listed on the player registration sheet for a Member country participating in a Tier 1, Tier 2, or Tier 3 Federation Event is deemed to have represented that country. “Representation” shall be interpreted accordingly.
- Residency: Continuous domicile in a country except for short periods of absence away from the normal place of residence for holidays and the like. The normal, common use definition in the order of “a few weeks but no longer than a few months” is to apply.
- Senior (divisions): Playing divisions that have minimum age eligibility criteria.
Eligibility Criteria
- For an individual to be eligible to represent a Member country in international competition, the individual must be able to prove:
- They are a Legal National (including Citizenship) of the Member country; or
- They have parent (mother or father) who was born in the Member country; or
- They have been a resident of the Member country for three (3) years.
- In addition to the above requirement at paragraph 4.1, to be eligible to represent a Member country in international competition an individual must not have Represented another Member country in international competition in the sport during the previous three (3) international seasons (which includes the periods of time between such international seasons).
- For the avoidance of doubt, clause 4.2 shall never prevent an individual Representing a Member country where an individual intends to Represent a Member country in a Tier 1 event and where the individual’s last Representation was the previous running of the same Tier 1 event.
- In addition to the criteria listed in paragraphs 4.1 and 4.2 above, to be eligible to participate in Federation events, an individual must also meet:
- Membership and eligibility requirements of the Member National Touch Association including, but not limited to, matters relating to registration, insurance and financial status; and
- Gender criteria (male / men’s or female / women’s); and
- Age criteria in Senior aged divisions. It is normal for age-related divisions to specify a minimum age by the year of competition or a minimum year of birth. Tournament regulations will specify the criteria;
- Solely in the event that the individual has previously Represented another Member country and relies upon clauses 4.1.2 or 4.1.3 to satisfy the requirements of clause 4.1, the following additional requirements must have been met during the two (2) years immediately preceding the date of the individual’s application for eligibility in respect of the new Member country:
- The individual must have competed in the national Touch championships (or equivalent pinnacle representative event) of the Member country they intend to Represent; and
- The individual must have been a registered and financial member of an affiliated Touch club within the Member country they intend to Represent.
- The requirements of clause 4.3.4 may be waived by FIT, upon application by an individual, in the event that the individual’s failure to satisfy the requirements were outside of the individual’s control including, but not limited to, situations such as injury or cancellation of events.
- An individual participating in any Federation Event must be able to clearly prove their identity by having available to present one of the following as required:
- Current driving licence; or
- Current passport; or
- Other, suitable photographic evidence.
- The responsibility to prove eligibility rests with both the individual and with the Member country. Participants whose eligibility may be perceived as questionable for a particular Federation Event should ensure that adequate and appropriate justification is at hand.
Multiple Representations
- An individual who has represented a Member country in international competition may seek a clearance to represent a different Member country provided that individual meets the eligibility criteria listed in paragraph 4.1 to 4.3 above, for the different Member country.
- An individual is entitled to multiple changes provided that the following provisions are adhered to:
- For an individual to be granted their first and/or second change of Representation, they must satisfy clauses:
- 4.1;
- 4.2;
- 4.3.1;
- 4.3.2;
- 4.3.3; and
- 4.3.4;
- For an individual to be granted their third or subsequent change of Representation, they must:
- Fulfil all the obligations listed in 5.2.1; and
- Satisfy FIT (exercising its sole discretion) that the change in Representation is required due to an extraordinary set of circumstances resulting in the change of Representation being in the best interests of the sport of Touch and which maintains the integrity of fair sporting competition and the reputation of FIT and the sport of Touch.
- Consideration of the circumstances of any request for a change should be applied and may include reasons of immigration, marriage, employment, development of the sport and personal matters.
- The following clearance procedures applies should an individual wish to change intended representation to a new Member country:
- An individual initially must apply for clearance by completing the Clearance Application Form online, together with any supporting documentation not less than four (4) months prior to the respective Federation Event.
- The Federation Secretary General will then seek written comment from the individual’s current Member Country NTA followed by the individual’s intended Member Country NTA.
- Following consideration of the matter the Federation Secretary General will advise the individual and both Member Country NTA of the decision in writing.
- The Secretary General is to maintain a record of all clearance application requests and associated decisions.
- An individual may appeal the decision of the Secretary General to the Federation Board in accordance with the Federation Judiciary Policy. The Board may appoint a Federation Appeals Panel to determine the appeal on its behalf. Any such appeal must be lodged with the Secretary General within 7 days of notification of the decision.
- Solely where a decision is made by FIT in accordance with clause 5.2.2, the details of such decision shall be published by FIT into the public domain to ensure the transparency of decision making is maintained.
Sanctions for Policy Breaches
- Should a Member country be alleged to have allowed an ineligible individual to represent that Member country and participate in any match during any Federation Event and eligibility is not proven by the individual the Federation may take appropriate action. Penalties that may be imposed include, but are not limited to the following:
- Deduction of competition points; and / or
- Monetary fines; and / or
- Banning of individuals from competitions or remaining matches; and / or
- Banning of teams from competitions or remaining matches; and / or
- Combination of any of the above.
Protests and Appeals
- A Member country may protest to the Secretary General or to his or her appointee, or to any Tournament Judiciary to consider such matters, about an individual either intending to represent or who is representing or has represented another Member country at a Federation Event. Any eligibility protest must relate to the eligibility criteria detailed at paragraphs 4.1 to 4.3. Further details will normally be included in Event Rules and Regulation documents.
- Any eligibility protest must be submitted using the Player Eligibility Protest Form distributed in the event information package and must be submitted within thirty (30) minutes of completion of the game in which the alleged transgression occurred. The protest circumstances are to be clearly advised and available evidence supplied at the time of submission.
- The Secretary General or appointee, through the respective Tournament Judiciary will consider the matter and will advise all associated Member countries and the individual concerned of the decision without delay.
- The complainant Member country may appeal the Secretary General decision to the Federation Board to determine an appeal in accordance with the Federation Judiciary Policy.
- Any matter of interpretation, or matter not provided for in this Policy, will be determined by the Federation Board.
Policy History / Approval/Application
- This policy was approved by the Federation Board on 18th April 2018.
- This policy, and any subsequent amendment of this policy, will take effect immediately upon communication of same to Member countries through the FIT Digital Identity mailbox.
- Member countries are responsible for the appropriate application of this policy.
- The policy is due for review in December 2019. | https://docs.internationaltouch.org/policy/player-eligibility/ | 2018-12-10T02:57:02 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.internationaltouch.org |
6.5.
unicodedata — Unicode Database¶
This module provides access to the Unicode Character Database (UCD) which defines character properties for all Unicode characters. The data contained in this database is compiled from the UCD version 6.3.0.
The module uses the same names and symbols as defined by Unicode Standard Annex #44, “Unicode Character Database”. It defines the following functions:
unicodedata.
lookup(name)¶
Look up character by name. If a character with the given name is found, return the corresponding character. If not found,
KeyErroris raised.
unicodedata.
name(chr[, default])¶
Returns the name assigned to the character chr as a string. If no name is defined, default is returned, or, if not given,
ValueErroris raised.
unicodedata.
decimal(chr[, default])¶
Returns the decimal value assigned to the character chr as integer. If no such value is defined, default is returned, or, if not given,
ValueErroris raised.
unicodedata.
digit(chr[, default])¶
Returns the digit value assigned to the character chr as integer. If no such value is defined, default is returned, or, if not given,
ValueErroris raised.
unicodedata.
numeric(chr[, default])¶
Returns the numeric value assigned to the character chr as float. If no such value is defined, default is returned, or, if not given,
ValueErroris raised.
unicodedata.
bidirectional(chr)¶
Returns the bidirectional class assigned to the character chr as string. If no such value is defined, an empty string is returned.
unicodedata.
combining(chr)¶
Returns the canonical combining class assigned to the character chr as integer. Returns
0if no combining class is defined.
unicodedata.
east_asian_width(chr)¶
Returns the east asian width assigned to the character chr as string.
unicodedata.
mirrored(chr)¶
Returns the mirrored property assigned to the character chr as integer. Returns
1if the character has been identified as a “mirrored” character in bidirectional text,
0otherwise.
unicodedata.
decomposition(chr)¶
Returns the character decomposition mapping assigned to the character chr as string. An empty string is returned in case no such mapping is defined.
unicodedata.
normalize(form, unistr)¶:
unicodedata.
ucd_3_2_0¶
This is an object that has the same methods as the entire module, but uses the Unicode database version 3.2 instead, for applications that require this specific version of the Unicode database (such as IDNA).
Examples:
>>> import unicodedata >>> unicodedata.lookup('LEFT CURLY BRACKET') '{' >>>'
Footnotes | https://docs.python.org/3.4/library/unicodedata.html | 2018-12-10T02:11:46 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.python.org |
When you remove a virtual machine from the inventory, you unregister it from the host and vCenter Server, but you do not delete it from the datastore. Virtual machine files remain at the same storage location and you can re-registered the virtual machine by using the datastore browser at a later time. This capability is useful if you need to unregister a virtual machine to edit the virtual machine's configuration file. The ability to remove a virtual machine and maintain its files is useful when you have reached the maximum number of virtual machines that your license or hardware allows.
Prerequisites
Verify that the virtual machine is turned off.
Procedure
- Right-click the virtual machine, and select .
- To confirm that you want to remove the virtual machine from the inventory, click OK.
Results
vCenter Server removes references to the virtual machine and no longer tracks its condition. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.vm_admin.doc/GUID-27E53D26-F13F-4F94-8866-9C6CFA40471C.html | 2018-12-10T01:44:27 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.vmware.com |
Please read through the Quick Start Guide in the README to get started.
In this video, Jeff Geerling walks through setting up a Drupal 8 website on Windows 10 using Drupal VM 3.
There are a few caveats when using Drupal VM on Windows, and this page will try to identify the main gotchas or optimization tips for those wishing to use Drupal VM on a Windows host.
Windows Subsystem for Linux / Ubuntu bash¶
If you are running Windows 10 (Anniversary edition) or later, you can install the Windows Subsytem for Linux, which allows you to install an Ubuntu-based CLI inside of Windows. With this installed, you can then manage and run Drupal VM inside the Linux-like environment. Follow these steps to use Drupal VM in the WSL:
- Install Vagrant and VirtualBox in Windows (links in the Drupal VM Quick Start Guide).
- Install/Enable the Windows Subsystem for Linux.
- Create an admin account for the Ubuntu Bash environment when prompted.
- In a local copy of Drupal VM (downloaded or Git cloned into a path that's in the Windows filesystem, e.g.
/mnt/c/Users/yourusername/Sites/drupal-vm), run
wrun vagrant up.
If you need to run any other
vagrant commands (with the exception of
vagrant ssh—for now, that must be run in a different environment; see Vagrant: Use Linux Subsystem on Windows), you can do so by prefixing them with
wrun.
Note: using
wrun, interactive prompts don't seem to work (e.g. if you run
vagrant destroywithout
-f, you have to Ctrl-C out of it because it just hangs).
Note 2: that the WSL is still in beta, and tools like
cbwinare still undergoing rapid development, so some of these instructions are subject to change!
Command line environment¶
If you're not on Windows 10, or if you don't want to install the WSL, you can use PowerShell, Git Bash, Git Shell, or other PowerShell-based environments with Drupal VM and Vagrant; however you might want to consider using a more POSIX-like environment so you can more easily work with Drupal VM:
- Cmder includes built-in git and SSH support, so you can do most things that you need without any additional plugins.
- Cygwin allows you to install a large variety of linux packages inside its bash environment, though it can be a little more tricky to manage and is less integrated into the Windows environment.
Troubleshooting Vagrant Synced Folders¶
Most issues have to do synced folders. These are the most common ones:
Read the following to improve the performance of synced folders by using NFS, samba or rsync.
Symbolic Links¶
Creating symbolic links in a shared folder will fail with a permission or protocol error.
There are two parts to this:
- VirtualBox does not allow guest VMs to create symlinks in synced folders by default.
- Windows does not allow the creation of symlinks unless your local policy allows it; see TechNet article. Even if local policy allows it, many users experience problems in the creation of symlinks.
Using Ubuntu bash under Windows 10 can make this easier, but there are still issues when creating and managing symlinks between the bash environment and the guest Vagrant operating system.
Git and File permissions¶
If you're using a synced folder for your project, you should choose to either work only inside the VM, or only on the host machine. Don't commit changes both inside the VM and on the host unless you know what you're doing and have Git configured properly for Unix vs. Windows line endings. File permissions and line endings can be changed in ways that can break your project if you're not careful!
You should probably disable Git's
fileMode option inside the VM and on your host machine if you're running Windows and making changes to a Git repository:
git config core.fileMode false
"Authentication failure" on vagrant up¶
Some Windows users have reported running into an issue where an authentication failure is reported once the VM is booted (e.g.
drupalvm: Warning: Authentication failure. Retrying... — see #170). To fix this, do the following:
-
~/.vagrant.d/insecure_private_key
- Run
vagrant ssh-config
- Restart the VM with
vagrant reload
Windows 7 requires PowerShell upgrade¶
If you are running Windows 7 and
vagrant up hangs, you may need to upgrade PowerShell. Windows 7 ships with PowerShell 2.0, but PowerShell 3.0 or higher is required. For Windows 7, you can upgrade to PowerShell 4.0 which is part of the Windows Management Framework.
Hosts file updates¶
If you install either the
vagrant-hostsupdater (installed by default unless removed from
vagrant_plugins in your
config.yml) or
vagrant-hostmanager plugin, you might get a permissions error when Vagrant tries changing the hosts file. On a macOS or Linux workstation, you're prompted for a sudo password so the change can be made, but on Windows, you have to do one of the following to make sure hostsupdater works correctly:
- Run PowerShell or whatever CLI you use with Vagrant as an administrator. Right click on the application and select 'Run as administrator', then proceed with
vagrantcommands as normal.
- Change the permissions on the hosts file so your account has permission to edit the file (this has security implications, so it's best to use option 1 unless you know what you're doing). To do this, open
%SystemRoot%\system32\drivers\etcin Windows Explorer, right-click the
hostsfile, and under Security, add your account and give yourself full access to the file.
Intel VT-x virtualization support¶
On some laptops, Intel VT-x virtualization (which is built into most modern Intel processors) is enabled by default. This allows VirtualBox to run virtual machines efficiently using the CPU itself instead of software CPU emulation. If you get a message like "VT-x is disabled in the bios for both all cpu modes" or something similar, you may need to enter your computer's BIOS settings and enable this virtualization support. | http://docs.drupalvm.com/en/latest/getting-started/installation-windows/ | 2018-12-10T02:49:21 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.drupalvm.com |
Nodetool ring¶
ring
[<keyspace>] - The nodetool ring command displays the token
ring information. The token ring is responsible for managing the
partitioning of data within the Scylla cluster. This command is
critical if a cluster is facing data consistency issues.
For example:
nodetool ring
This will show all the nodes that are involved in the ‘ring’ and the tokens that are assigned to each one of them. It will also show the status of each of the nodes. | https://docs.scylladb.com/operating-scylla/nodetool-commands/ring/ | 2021-09-17T05:00:09 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scylladb.com |
Date: Wed, 3 Apr 2019 23:34:54 -0700 From: Frank Fenderbender <[email protected]> To: [email protected] Subject: Re: FreeBSD desktop "best-fit" Dell platform suggestions? Message-ID: <[email protected]> In-Reply-To: <33fed198-334b-99b4-aecb-4f5546ec3e11>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
DavidC: Thanks for the project connect links.=20" =20 On 03-April-2019, at 00:40 PM, David Christensen wrote: > On 4/3/19 1:04 AM, Frank Fenderbender wrote: >> Thanks much, David C! >> This type of research, unpacked from personal experience for a = general perusal, salvages from the private pains & pangs, as well as the = joyous successes, encountered while fitting releases to systems, and = vice-[re]versa. >> As payback for the efforts made towards pulling me towards the = success side, I will see if i can create a spreadsheet, which I'll put = up on my = <> web = page. Its goal will roughly be to list issues based on release:hardware. = I' also like to work on a page that helps with add-on dependencies so = that collisions can be avoided and troubleshooting made a bit easier. = Any help on either page is greatly appreciated and open to different = approaches entirely. >> For now, I'll plan to post the additional page title(s) and url(s) = soon... and how to send in verifiable additions to elevate its coverage = (perhaps with links back to contributors for spec-test-verification = details). >> I'm open to how best to approach each mini-project.... >=20 > Finding compatible hardware and software remains a never-ending quest, = especially for FOSS. Rather than building a personal solution, I would = suggest contributing to some community solution. STFW I see: >=20 > >=20 > >=20 >=20 > My wish-list solution would be a USB flash drive live FreeBSD = distribution with a console app that runs a bunch of tests and submits = reports to a central server with a searchable WWW app. The client app = would include local storage and sneaker-net capabilities (for testing = computers without Internet connectivity). Bonus points if one client = image supported multiple architectures (I use i386 and amd64). >=20 >=20 > David > _______________________________________________ > [email protected] mailing list > > To unsubscribe, send any mail=20 Frank [email protected]
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=245295+0+archive/2019/freebsd-questions/20190407.freebsd-questions | 2021-09-17T05:15:55 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Date: 17 Sep 2020 14:02:36 -0400 From: "John Levine" <[email protected]> To: [email protected] Cc: [email protected] Subject: Re: fresh server sendmail or postfix / webmail. Message-ID: <[email protected]> In-Reply-To:
In article <MWHPR06MB3247706BA1E3F1C75810566C9A3E0@MWHPR06MB3247.namprd06.prod.outlook.com> you write: >Dear FreeBSDer, > >Good day/evening. > >I have up and running mail server with some web apps, server running since many years back until today (7-Release, 🙂 ) its >time to move ! It depends on what you want to do. If the web apps are just sending out a few status messages, I'd use dma to send the mail to a smarthost. It's part of the base system and very easy to set up. >if yes, is installing postfix from /ports enough? If you want a full MTA, yes. > >2. >any recommendation for a webmail ? (maybe roundcube) ? >(on my old server openwebmail works great! but unfortunately, its outdated) Everyone uses roundcube. It works pretty well. It does need IMAP and submission servers on the same host or elsewhere for the actual mail service. -- Regards, John Levine, [email protected], Primary Perpetrator of "The Internet for Dummies", Please consider the environment before reading this e-mail.
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=315358+0+archive/2020/freebsd-questions/20200920.freebsd-questions | 2021-09-17T05:09:09 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
App Translator
This application manages the translation of DigitalSuite applications. Authorized users can use it to create translations in different languages using logical dictionaries whose keys are generated automatically.
Glossary of common terms
- Environment: Represents a specific application revision. It is composed of three elements: Project, Version and Web Interface or Process. For each environment you will be able to create one or more dictionaries.
- Dictionary: Represents the translation for each element of an application in an environment in a specific language.
- Designer Dictionary: The dictionary created automatically based on a web interface or process design. This is the default translation. This is the reference for creating new dictionaries in different languages.
- Entry: An element to be translated. It is displayed in a table for each existing dictionary when an environment is selected.
- Reference Dictionary: This is a read only column on the entries table and is used as a guide to ease the translation activities.
- Dictionary to Edit: This is an editable column. It allows the user to modify or add a translation for an entry.
- Empty template: This is a dictionary with no translations for the entries. It can be used as a template when creating a new dictionary.
Overview
When developing an application on the DigitalSuite platform, you can ask yourself : How can I enable my application to be used in different regions by different users who may not speak the same language? In other words: How can I internationalize my application?.
The App Translator application allows you to translate your DigitalSuite applications in many different languages, so that end users can see web interfaces in the language of their choice. This application allows you to manage the existing translations for the different environments of your account, by using dictionaries, which can be displayed, created, edited, deleted or imported for any particular application.
In order to internationalize an application, it is very important to include it in a project version; otherwise it won't belong to an environment.In order to internationalize an application, it is very important to include it in a project version; otherwise it won't belong to an environment.
Once you have created an application and included it in a project version, you can start to create dictionaries for it.
How does it work?
As you may know, the DigitalSuite platform allows you to create web interfaces to launch processes or to be used as a manual task. These web interfaces are composed of configurable widgets. Each time you add widgets to a web form, the DigitalSuite WebModeler will ask you to set its configuration such as an ID, a label, validation messages and so on...Depending on the widget type, specific configuration values may be required.
The app translator application allows those values to be translated in different dictionaries, so that the labels, validation messages and such can be displayed to the end users in different languages. We call this action the internationalization of an application.
In order to internationalize an application, the following requirements are necessary:
- Include the application to internationalize into a project's version.
- Provide the ID for each widget of the application to be translated.
- Have an authorized profile to use the app translator application.
You may also need to internationalize the wording that is not contained in a widget configuration such as a label in a dynamic field and more globally any wording contained in a javascript script. To do this, you will have to use the following freemarker method to create a new key in the designer dictionary:
You may also need to use this Freemarker method for translating emails sent by processes.
When developing an application which uses JavaScript to set labels dynamically or any internationalizable text, it is important to consider the possibility that the translated text has quoted words, which will affect the behavior of the translation. To avoid this problem, the function "P_quoted", is provided for any DigitalSuite application, to allows the JavaScript interpreter to distinguish a quote within the string from the quotes that serve as string delimiters. Here is an example on how to use it:
In the previous example, the French translation for this phrase will be "réglages de l'utilisateur", which will produce an error if you do not use the "P_quoted" function, due to the apostrophe in the phrase. So, it is important to use this statement as a good practice each time you need to internationalize dynamic text using JavaScript.
Choosing an environment and displaying a dictionary
When you open the application, the first thing to do is to choose an environment. To do this, three list will be presented: Project, Version and Web Interface or Process. Only the project list will be available at the beginning. The button bar, that is used to manage the dictionaries, will be disabled until a valid environment is selected.
When you select a project, the Version list will be updated with all the existing project's versions. You can then select a version and the list of web interfaces or processes of the version will appear in the last list.
After choosing a valid value in the 3 lists (Project, Version or Web Interface or Process), the application will display a table with the following 3 columns.
- Reference Dictionary: This column is read only and by default it will display the "Designer Dictionary", the dictionary created automatically from an application design. For Web Interfaces, each entry displayed in this table matches either a widget's configuration field or a key created using the app translator freemarker method. When the application has other dictionaries configured, they will appear in the dropdown list of the column header. You can select one of them in order to visualize it as reference.
- Dictionary to Edit: This column is editable and is used to fill in the translation for each element. When the application already has dictionaries configured, they will appear in the dropdown list of the column header. You can select one of them in order to visualize the entries translation and you will be able to edit them. On the other hand, when the application has no dictionary, you will have to create one before being able to use this column.
- Entry Id: The Entry Id column is a read only column and displays the ID of the entry. It can be used as a second element of reference to make the translation process easier.
- Entry Source: The Entry Source is read only and shows the source of the entry (e.g. Widget, Menu, Basket, Custom, Application Information).
Create a Dictionary
To create a new dictionary, click on the "+" button in the button bar. This button is available when an environment is selected and will display a popup window :
In this window, you will have :
- Dictionary language: A dropdown list with all the available languages that can be used to create a dictionary. Choose one from the list to proceed. If one language has been already chosen for another existing dictionary, it won't appear in the list as an option.
- Dictionary name: You can provide custom names for your dictionaries.
- Template: Here two possibilities are provided:
- Empty: The dictionary will be created with no translations, so for each entry, the value will be empty.
- Copy From: The window will expand to display 4 new fields to choose the source environment from which the dictionary should be copied. The new dictionary will be created by matching entries id from the source dictionary. When no translation is available in the source dictionary, the new dictionary will use the the translation from the designer dictionary. Copying a dictionary is very useful when you are dealing with different revisions of an application using different versions.
Once all the fields are filled in, the Create button is activated allowing the creation of the dictionary. You can create the dictionary and start editing it.
Managing the existing dictionaries
Once an application has dictionaries, many possible features are provided to maintain them by editing, deleting and creating new dictionaries.
Editing Dictionaries: To edit a dictionary, just type in an entry field for a new translation. The border of the entry that you modify will be highlighted and the "Save" button of the button bar will be activated. Changes are not be saved automatically : you will have to save them explicitly.
Saving a Dictionary: To save your modifications, just click on the Save button in the button bar. You will be asked to confirm this action because once the dictionary is modified, the changes cannot be undone.
Deleting a Dictionary: To delete a dictionary, you have to select the environement of the dictionary you want to delete and choose the dictionary in the column dictionary to edit. The Delete button will be activated. When you click on it, you will be asked to confirm the deletion. A dictionary deletion cannot be undone
Reload Environment: This button allows you to reload the dictionaries lists for a selected environment. By doing this, all the changes performed over a dictionary will be reset until the last saved point. To use it, click on the reload button provided in the button bar. If any changes were detected over the deployed dictionary, a confirmation window will appear to confirm the action. If the user proceeds, the dictionaries lists will be reloaded and all the unsaved changes will be lost. This action cannot be undone
Edition tools
In order to make the translation process easier, additional tools are provided:
Find and Replace
This tool allows you to search a given value in the Dictionary to Edit column. You can also search and replace the given value with another. To do this, click on
in the button bar. A new floating window will appear:
From the moment that you provide a value into the find field, the buttons will be enabled and you will be able to :
Find and replace: This will let you execute a search in the Dictionary to Edit column until it finds an occurrence that will be highlighted. If you click again on the Find and Replace button, it will perform a replace action, overwriting the current value with the one provided in the replace field. If no value was provided for the replace field, the current value of the entry, will be replaced by an empty value.
Previous and Next: This button allows you to execute a search ascending or descending the column. It works only as a search functionality and does not take the replace field into account. However, if you click on the replace button, the currently selected entry will be replaced with the value provided in the replace field.
Cancel: This button is used to close the dialog window.
The find function looks for each entry containing the provided text, not just entry containing its exact content.
A checkbox field can be used to replace all the occurrences found without confirmation. It will perform the search through the whole column, and replace the occurrences found for each entry of the dictionary.
Batch Translation Tool
This tool is used to perform batch translations of recurrent wording. It is very useful when you create a dictionary from an empty template. It will help you fill in the translations in the edit dictionary. The idea is mainly to find a given text in the reference dictionary and, whenever an occurrence is found, to set the corresponding translation in the dictionary to edit.
To use it, click on
in the button bar. A new floating window will appear:
The first field (From:) should be filled in with the text to be looked for in the Reference Dictionary column. Contrary to the Find and Replace tool, the batch translation tool will look for the exact value provided. The second field (To: ) should be filled in with the value that you wish to set for the corresponding entry in the dictionary to edit column. An empty value is allowed in this field.
Once you have filled in both fields, the button Translate All will get activated. Clicking on it will perform the batch replacement described above.
Additional features
Expanded View
This tool is present in every entry of the Dictionary to Edit and is used to edit long translations. When you hover over a field in the dictionary to edit column, a little parchment icon will appear at the right of the entry field. When clicking on it a new window opens with a large text box where you can easily edit the value for the entry.
When you are done editing the translation, you can click on either the continue icon to set value to the entry or on the cancel icon to dismiss the modifications.
Please give details of the problem | https://docs.runmyprocess.com/Components/Standard_Portal_Applications/App_Translator/ | 2021-09-17T03:03:08 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_initial_scr.png',
'initial_scr'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_no_dicos.png',
'no_dicos'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_new_dico1.png',
'new_dico1'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_new_dico2.png',
'new_dico2'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_find_replace.png',
'find_replace'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_massive_trans_tool.png',
'massive_trans_tool'], dtype=object)
array(['/images/Components/Standard_Portal_Applications/App_Translator/i18n_expanded_value.png',
'expanded_value'], dtype=object) ] | docs.runmyprocess.com |
Amazon SQS
OverviewOverview.
There is no charge for the Amazon SQS metrics reported in CloudWatch. They're provided as part of the Amazon SQS service.
The Amazon SQS Centreon Plugin uses the Amazon Cloudwatch APIs to collect the related metrics and status.
Plugin-Pack assetsPlugin-Pack assets
Monitored objectsMonitored objects
- SQS Message queues (both standard and FiFo queues are supported) SQS resources:
yum install centreon-plugin-Cloud-Aws-Sqs-Api
- On the Centreon Web interface, install the Amazon SQS Centreon Plugin-Pack on the "Configuration > Plugin Packs > Manager" page
- Install the Centreon Plugin package on every Centreon poller expected to monitor Amazon SQS resources:
yum install centreon-plugin-Cloud-Aws-Sqs-Api
- Install the Centreon Plugin-Pack RPM on the Centreon Central server:
yum install centreon-pack-cloud-aws-sqs.noarch
- On the Centreon Web interface, install the Amazon SQS Centreon Plugin-Pack on the "Configuration > Plugin Packs > Manager" page
ConfigurationConfiguration
HostHost
- Log into Centreon and add a new Host through "Configuration > Hosts".
- Select the Cloud-Aws-Sqs-custom template to apply to the Host.
- Once the template applied, some Macros marked as 'Mandatory' hereafter have to be configured:
ServicesServices
Once the Host template applied, a Sqs-Queues service will be created. This service is generic, meaning that it won't work "as is". The QUEUENAME Service Macro is mandatory for the resources to be monitored properly and has to be set. You can then duplicate the service as much as the different queues to be monitored (the Services names can also be adjusted accordingly with the queues names)._sqs_api.pl \ --plugin=cloud::aws::sqs::plugin \ --mode=queues \ --custommode=awscli \ --aws-secret-key='*******************' \ --aws-access-key='**********' \ --region='eu-west-1' \ --proxyurl='' --statistic=average \ --timeframe='600' \ --period='60' \ --queue-name='my_sqs_queue_1' \ --filter-metric='NumberOfMessagesSent|NumberOfMessagesReceived' \ --critical-messages-sent=1: \ --critical-messages-received=1: \ --verbose
Expected command output is shown below:
OK: 'my_sqs_queue_1' Statistic 'Average' number of messages sent: 45, number of messages received: 32 | 'my_sqs_queue_1~average#sqs.queue.messages.sent.count'=45;;1:;; 'my_sqs_queue_1~average#sqs.queue.messages.received.count'=32;;1:;; SQS Queue'my_sqs_queue_1' Statistic 'Average' number of messages sent: 45, number of messages received: 32
The command above monitors the SQS queue named my_sqs_queue_1 (
--mode=queues --queue-name='my_sqs_queue_1') of an AWS account
identified by the usage of API credentials (
--aws-secret-key='****' --aws-access-key='****').
The calculated metrics are an average of values (
--statistic='average') on a 600 secondes / 10 min period (
--timeframe='600') with one sample per 60s / 1 minute (
--period='60').
In the example above, only the sent and received messages statistics will be returned (
--filter-metric='NumberOfMessagesSent|NumberOfMessagesReceived').
This command would trigger a CRITICAL alert if no messages (less than 1) have been sent or received (
--critical-messages-sent=0:)
during the sample period.
All the metrics that can be filtered on as well as all the available thresholds parameters can be displayed by adding the
parameter to the command:
/usr/lib/centreon/plugins/centreon_aws_sqs_api.pl --plugin=cloud::aws::sqs::plugin --mode=queues -=''. | https://docs.centreon.com/20.10/en/integrations/plugin-packs/procedures/cloud-aws-sqs.html | 2021-09-17T03:13:04 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.centreon.com |
delika Power BI Connector
Preview: This feature in preview state and might change or have limited support.
Requirements
Power BI Desktop
Installation
To use a delika Power BI Connector, put the Delika-0.0a1.pqx file in your [Documents]\Power BI Desktop\Custom Connectors folder.
Adjust the data extension security settings as follows
- In Power BI Desktop, select File > Options and settings > Options > Security.
- Select (Not Recommended) Allow any extension to load without validation or warning.
- Select OK, and then restart Power BI Desktop.
See the official documentation for details.
Usage
- Select (Get data).
- Choose the following delika Power BI Connector in Get Data dialog .
- delika data(Beta)
- delika SQL(Beta)
- delika query results(Beta)
- Select (Connect).
delika data
delika data is a custom connector that reads data from data file. Read data by specifying the following items.
Account: Account name of the account that owns the data
Dataset: Dataset name of the dataset to which the data belongs
Data: Data name
delika SQL
delika SQL is a custom connector that uses sql to retrieve data. Read data by specifying the following items.
Sql: SQL query
Using this connector, delika will execute SQL and get the results. If you have already executed the SQL, then use delika query results.
delika query results
delika query results is a custom connector to retrieve the results of an executed sql query. Read data by specifying the following items.
QueryID: query id | https://docs.delika.io/powerbi/ | 2021-09-17T04:57:30 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.delika.io |
ListExists
Indicates whether an element is present in the list and has a value.
Synopsis
Feedback
ListExists(list,position)
Arguments
Description
The ListExists function returns a value of 1 if the element at the indicated position in the list exists and has a data value. Otherwise ListExists returns zero.
Examples
The following example demonstrates the ListExists function. It defines a six-element list, in which the third and fourth elements do not have a defined value:
Erase Y ' Y is now undefined myList = ListBuild("Red","Blue",,Y,"Yellow","") Println ListExists(myList,0) ' 0: positions are numbered from 1 Println ListExists(myList,1) ' 1: "Red" Println ListExists(myList,2) ' 1: "Blue" Println ListExists(myList,3) ' 0: missing element Println ListExists(myList,4) ' 0: undefined element Println ListExists(myList,5) ' 1: "Yellow" Println ListExists(myList,6) ' 1: empty string OK Println ListExists(myList,7) ' 0: beyond end of list Println ListExists(myList,-1) ' 1: last element in list
See Also
ListFromString function
ListLength function
ListToString function | https://docs.intersystems.com/latest/csp/docbook/Doc.View.cls?KEY=RBAS_flistexists | 2021-09-17T04:39:02 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.intersystems.com |
Comprehensive Genome Analysis Service¶
The Comprehensive Genome Analysis Service is a streamlined analysis “meta-service” that accepts raw genome reads and performs a comprehensive analysis including assembly, annotation, identification of nearest neighbors, a basic comparative analysis that includes a subsystem summary, phylogenetic tree, and the features that distinguish the genome from its nearest neighbors.
Keywords: Genome analysis, Genome assembly, Genome annotation, Genome quality, Similar genomes, Comparative genomics, Phylogenetic tree, Genome Report.
Submitting a Comprehensive Genome Analysis job¶
Finding the service.
a. Click on the Services tab at the top of the page, and then click on Comprehensive Genome Analysis (CGA).
b. This will open the landing page for the service.
Input files: Uploading Reads or Contigs. The default setting is to analyze reads. PATRIC accepts reads ending in .fq, .fastq, .fa, .fasta, .fq.gz, .fastq.gz, .fa.gz, .fasta.gz
Input File-Uploading Paired End Reads.
a. To upload Paired End Reads, click on the Folder icon at the end of the text box underneath Paired Read Library. This will open a pop-up window. Click on the Upload icon in the upper right corner.
b. This will open a new pop-up window. Click on the blue bar that says Select File. This will open a pop-up window that gives access to files on your computer. Select the file of interest and click Open.
c. The name of the selected file will appear below the grey bar that says File Selected. Click on the Start upload button at the bottom right of the window. Follow the progress of the upload by examining the Upload monitor at the bottom right of the PATRIC page. Do not submit the job until the upload progress registers 100%.
d. The name of the selected file will appear in the text box.
{width=”2.9444444444444446in” height=”2.031948818897638in”}{width=”2.9444444444444446in” height=”2.031948818897638in”}
e. Repeat the process to upload the other read pair. Make sure that the NAMES OF THE FILES MATCH!
f. Next move the reads into the Selected libraries box. Click on the arrow above the Paired Read library text boxes. Doing this will move the paired end reads to the Selected libraries box.
{width=”6.5in” height=”1.4833333333333334in”}{width=”6.5in” height=”1.4833333333333334in”}
g. Ensure that the reads are correctly paired by clicking on the Information icon (i) in the Selected libraries text box.
{width=”6.5in” height=”1.9104166666666667in”}{width=”6.5in” height=”1.9104166666666667in”}
h. If a mistake has been made, delete the incorrectly matched pairs by clicking on the Delete icon (x).
{width=”3.4in” height=”2.2547364391951006in”}{width=”3.4in” height=”2.2547364391951006in”}
i. Some sequencing labs run several lanes from the same library during the sequencing process. The duplicate reads could be uploaded as well, if they exist, to create a more robust assembly. If available, repeat the process to upload more reads from the same strain.
Input File-Using previously uploaded Paired End Reads. To upload previously loaded Paired End Reads, click on the down arrow at the end of the text box under Paired Read Library. This will open a drop-down box that shows all the files that have been uploaded and tagged as reads. Clicking on one to select it will load it into the text box.
{width=”6.5in” height=”1.3972222222222221in”}
Input File-Single End Reads
c. To upload Single End Reads, follow the instructions listed above for Paired End reads, but upload the reads in the text box underneath Single Read Library.
Input File-SRR numbers. To upload reads directly from the Sequence Read Archive[1], enter the SRR number into the text box underneath SRA Run Accession and follow the instructions listed above for Paired End reads.
`
Input File – Contigs. To upload Contigs, click on Assembled Contigs in the Start With box above Input File.
d. This will change the display, enabling uploading of contigs to the Input File. To upload Contigs, follow the instructions listed above for Paired End reads, but upload in the text box underneath Contigs.
``
Selecting Parameters for Comprehensive Genome Analysis
Assembly Strategy. PATRIC offers a number of assembly strategies that are listed below.
The auto assembly strategy runs BayesHammer[2] on short reads, followed by three assembly strategies that include Velvet[3], IDBA[4] and Spades[4], each of which is given an assembly score by ARAST, an in-house script.
The fast assembly strategy runs MEGAHIT[6].
The smart strategy can be used for long or short reads. The strategy for short reads when using smart involves running BayesHammer on reads, KmerGenie[7] to choose hash-length for Velvet, followed by the same assembly strategy using Velvet, IDBA and Spades. Assemblies are sorted with an ALE score[8] and the two best assemblies are merged using GAM-NGS[9].
PacBio and Nanopore long reads only work with the auto and smart strategies. In either case, they are automatically assembled using Miniasm[10].
To select one of the strategies, click on the down arrow at the end of the text box under Assembly Strategy.
Domain. Select Bacteria or Archaea for the Domain that reflects the sequenced genome.
Taxonomy Name. Begin typing in the lowest ranked taxonomic name known for the sequenced isolate. It is best to be able to get to Genus, if possible. Once typing begins, a drop-down box will automatically appear with the taxonomic names in PATRIC that match the entered text. Click on the most appropriate name. This will fill the text box under Taxonomy Name with the selected name, and also include the Taxonomy ID. If the Taxonomy ID is known, that can be filled in and the ID and matching taxonomy name will be auto filled.
My Label. Give the genome a unique name by entering text in the box underneath My Label. The name that is entered will appear in the Output Name in the lowest text box.
Genetic Code. Select the appropriate genetic code for the isolate.
Output Folder-Previously named. If a folder has been previously created, start typing the name in the text box underneath Output Folder, which will open a drop-down list of all folders in the workspace that match that text. Click on the appropriate folder.
New Output Folder. To create a new folder for the job, click on the folder icon at the end of the text box underneath Output Folder. This will open a pop-up window. Click on the folder icon at the top right.
a. This will open a new pop-up window. Enter the name of the folder in the text box, and then click the Create Folder button.
b. The original pop-up window will appear. Find the name of the new folder, select it, and then click the OK button at the bottom right of the window. This will fill the name of the selected folder into the text box under Output Folder.
Submitting the Comprehensive Genome Analysis job
When all the parameters are entered correctly, the Submit button at the bottom of the page will turn blue. Click on that button, and the will enter the queue. You can monitor the progress on the Jobs page.
Finding the Comprehensive Genome Analysis job
Click on the Jobs monitor at the bottom right of any PATRIC page.
This will open the page where all jobs submitted to PATRIC are listed. Every Comprehensive Genome Analysis (CGA) job also launches an assembly and annotation job, which can be found imimagestely below the row that list the CGA job. To find out more information about the CGA job, click on the job of interest, and then on the view icon in the vertical green bar.
PATRIC now provides a genome announcement style document for any genome annotated using the Comprehensive Genome Analysis service. To see this document, select the row that contains the FullGenomeReport.html and click on the download icon in the vertical green bar.
Select an appropriate location on your computer and save the document, and then open it. You can view the document in any web browser
The full genome report provides a detailed summary of the genome. It begins with a summary of the genome quality, and then provides information for each step of the service, which includes assembly, annotation, and analysis of specialty genes and functional categories, and a phylogenetic tree of the new genome and its closest high-quality relatives.
The summary will indicate is the genome is of good or poor quality.
Scrolling down to Genome Assembly will summarize the method selected for assembly and provide the statistics of interest. These statistics are those commonly provided when a genome is submitted as part of a publication.
The Genome Annotation section describes the taxonomy of the genome, and genes and their functional divisions.
The Genome Annotation section also includes a circular diagram of the genes, their orientation, homology to AMR genes and virulence factors, and GC content and skew. Genes on the forward and reverse strands are colored based on the subsystem[11] that they belong to. A separate, downloadable svg or png of the circular graph image is available in the jobs list.
PATRIC BLASTs all genes in a new genome against specialty gene databases, including genes known to provide antibiotic resistance, virulence factors, and known drug targets. The CGA service shows the hits in the new genome have to those databases in a tabular form.
In addition, PATRIC provides a k-mer based detection method for antimicrobial resistance genes and shows the number of genes that share these k-mers.
PATRIC’s subsystem analysis identifies genes based on specific biological processes that they are hypothesized to be active in. The full genome report includes a pie chart showing the subsystems superclasses[11], and an indication of the number of subsystems within that superclass (first number) and the number of annotated genes that are part of the superclass (second number).
The CGA service identifies the closest relatives to the selected genome. It picks the closest reference and representative genomes using Mash/MinHash[12], and then takes twenty of PATRIC’s global protein families[13] that are shared across all the selected genomes to build a tree based on the amino acid and nucleotide alignments of those proteins, which are aligned using MUSCLE[14], and RaxML[15] is used to build the tree.. The genome submitted to the CGA is in red.
The newick file (.nwk) for the tree is available in the jobs list and can be used to construct the tree in another tree viewing program like FigTree () or the Interactive Tree of Life ().
Viewing the genome in PATRIC
To view the integrated genome in PATRIC, double click on the row that has the flag icon and annotation in the landing page for the CGA job.
This will open a new page. Click on the view icon at the top right of the table.
This will open the genome landing page for the genome that was assembled, annotated and analyzed using the CGA service.
References
Kodama, Y., M. Shumway, and R. Leinonen, The Sequence Read Archive: explosive growth of sequencing data. Nucleic acids research, 2012. 40(D1): p. D54-D56.
Nikolenko, S.I., A.I. Korobeynikov, and M.A. Alekseyev, BayesHammer: Bayesian clustering for error correction in single-cell sequencing. BMC genomics, 2013. 14(1): p. S7. Annual international conference on research in computational molecular biology. 2010. Springer.
Li, D., et al., MEGAHIT: an ultra-fast single-node solution for large and complex metagenomics assembly via succinct de Bruijn graph. Bioinformatics, 2015. 31(10): p. 1674-1676.
Antipov, D., et al., plasmidSPAdes: assembling plasmids from whole genome sequencing data. bioRxiv, 2016: p. 048942.
Namiki, T., et al., MetaVelvet: an extension of Velvet assembler to de novo metagenome assembly from short sequence reads. Nucleic acids research, 2012. 40(20): p. e155-e155.
Clark,. S6.
Li, H., Minimap and miniasm: fast mapping and de novo assembly for noisy long sequences. Bioinformatics, 2016. 32(14): p. 2103-2110.. | https://docs.patricbrc.org/tutorial/comprehensive-genome-analysis/comprehensive-genome-analysis.html | 2021-09-17T04:50:25 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['../../_images/image312.png', 'image'], dtype=object)
array(['../../_images/image83.png', 'image'], dtype=object)
array(['images/image24.png', 'image'], dtype=object)
array(['images/image25.png', 'image'], dtype=object)
array(['../../_images/image313.png', 'image'], dtype=object)
array(['../../_images/image321.png', 'image'], dtype=object)
array(['../../_images/image331.png', '../../_images/image331.png'],
dtype=object)
array(['../../_images/image341.png', '../../_images/image341.png'],
dtype=object)
array(['../../_images/image351.png', '../../_images/image351.png'],
dtype=object)
array(['../../_images/image361.png', '../../_images/image361.png'],
dtype=object)
array(['../../_images/image371.png', '../../_images/image371.png'],
dtype=object)
array(['../../_images/image381.png', '../../_images/image381.png'],
dtype=object)
array(['../../_images/image412.png', '../../_images/image412.png'],
dtype=object)
array(['../../_images/image431.png', '../../_images/image431.png'],
dtype=object) ] | docs.patricbrc.org |
Physical Variables¶
Once the Lane-Emden equation (12) has been solved, the density in each region can be evaluated by
\[\rho_{i} = \rho_{1,0} \, t_{i} \, \theta_{i}^{n_{i}}.\]
The pressure then follows from the equation-of-state (11) as
\[P_{i} = P_{1,0} \, \frac{n_{1}+1}{n_{i}+1} \, \frac{t_{i}^{2}}{B_{i}} \, \theta_{i}^{n_{i}+1}.\]
The interior mass \(m\) is evaluated by introducing the auxiliary quantity \(\mu\), which is defined in the first region by
\[\mu_{1}(z) = - z^{2} \theta'_{1} (z),\]
and in subsequent regions by
\[\mu_{i}(z) = \mu_{i-1}(z_{i-1/2}) - \frac{t_{i}}{B_{i}} \left[ z^{2} \theta'_{i} (z) - z_{i-1/2}^{2} \theta'_{i} (z_{i-1/2}) \right].\]
The interior mass then follows as
\[M_{r} = M \frac{\mu_{i}}{\mu_{\rm s}},\]
where \(\mu_{\rm s} \equiv \mu_{\nreg}(z_{\rm s})\). | https://gyre.readthedocs.io/en/latest/appendices/comp-ptrope/physical-vars.html | 2021-09-17T03:05:46 | CC-MAIN-2021-39 | 1631780054023.35 | [] | gyre.readthedocs.io |
CreateAssociation
A State Manager association defines the state that you want to maintain on your instances. For example, an association can specify that anti-virus software must be installed and running on your instances, or that certain ports must be closed. For static targets, the association specifies a schedule for when the configuration is reapplied. For dynamic targets, such as an Amazon resource group or an Amazon autoscaling group, State Manager, a capability of Amazon Systems Manager applies the configuration when new instances are added to the group. The association also specifies actions to take when applying the configuration. For example, an association for anti-virus software might run once a day. If the software isn't installed, then State Manager installs it. If the software is installed, but the service isn't running, then the association might instruct State Manager to start the service.
Request Syntax
{ "ApplyOnlyAtCronInterval":
boolean, "AssociationName": "
string", "AutomationTargetParameterName": "
string", "CalendarNames": [ "
string" ], "ComplianceSeverity": "
string", "DocumentVersion": "
string", "InstanceId": " create a new association, the system runs it immediately after it is created and then according to the schedule you specified. Specify this option if you don't want an association to run immediately after you create it. This parameter isn't supported for rate expressions.
Type: Boolean
Required: No
- AssociationName
Specify a descriptive name for the association.
Type: String
Pattern:
^[a-zA-Z0-9_\-.]{3,128}$ to associate with the target(s). Can be a specific version or the default version.
Type: String
Pattern:
([$]LATEST|[$]DEFAULT|^[1-9][0-9]*$) documents (SSM documents) that are shared with you from other Amazon Web Services accounts, you must specify the complete SSM document ARN, in the following format:
arn:partition: Yes
- OutputLocation
An Amazon Simple Storage Service (Amazon S3) bucket where you want to store the output details of the request.
Type: InstanceAssociationOutputLocation object
Required: No
- Parameters
The parameters for the runtime configuration of the document.
Type: String to array of strings map
Required: No
- ScheduleExpression
A cron expression when the association will be applied to the target(s). create an association in multiple Regions and multiple accounts.
Type: Array of TargetLocation objects
Array Members: Minimum number of 1 item. Maximum number of 100 items.
Required: No
- Targets
The targets for the association. You can target instances by using tags, Amazon resource groups, all instances in an Amazon Web Services account, or individual instance IDs. For more information about choosing targets for an association, see Using targets and rate controls with State Manager associations in the Amazon Systems Manager User Guide.
Information about the association.
Type: AssociationDescription object
Errors
For information about the errors that are common to all actions, see Common Errors.
- AssociationAlreadyExists
The specified association already exists.
HTTP Status Code: 400
- AssociationLimitExceeded
You can have at most 2,000 active associations.
HTTP Status Code: 400
- InternalServerError
An error occurred on the server side.
HTTP Status Code: 500
- InvalidDocument
The specified SSM document doesn't exist.
HTTP Status Code: 400
- InvalidDocumentVersion
The document version isn't valid or
- UnsupportedPlatformType
The document doesn't support the platform type of the given instance ID(s). For example, you sent an document for a Windows instance to a Linux instance.
HTTP Status Code: 400
Examples
Example
This example illustrates one usage of CreateAssociation.
Sample Request
POST / HTTP/1.1 Host: ssm.us-east-2.amazonaws.com Accept-Encoding: identity X-Amz-Target: AmazonSSM.CreateAssociation Content-Type: application/x-amz-json-1.1 User-Agent: aws-cli/1.17.12 Python/3.6.8 Darwin/18.7.0 botocore/1.14.12 X-Amz-Date: 20200324T140427: 67 { "Name": "AWS-UpdateSSMAgent", "InstanceId": "i-02573cafcfEXAMPLE" }
Sample Response
{ "AssociationDescription": { "ApplyOnlyAtCronInterval": false, "AssociationId": "f7d193fe-7722-4f2b-ac53-d8736EXAMPLE", "AssociationVersion": "1", "Date": 1585058668.255, "DocumentVersion": "$DEFAULT", "InstanceId": "i-02573cafcfEXAMPLE", "LastUpdateAssociationDate": 1585058668.255, "Name": "AWS-UpdateSSMAgent", "Overview": { "DetailedStatus": "Creating", "Status": "Pending" }, "Status": { "Date": 1585058668.255, "Message": "Associated with AWS-UpdateSSMAgent", "Name": "Associated" }, "Targets": [ { "Key": "InstanceIds", "Values": [ "i-02573cafcfEXAMPLE" ] } ] } }
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following: | https://docs.amazonaws.cn/systems-manager/latest/APIReference/API_CreateAssociation.html | 2021-09-17T04:25:00 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.amazonaws.cn |
Date: Sat, 10 Mar 2018 16:36:04 -0800 From: Jim Pazarena <[email protected]> To: FreeBSD Questions <[email protected]> Subject: selecting port versions Message-ID: <[email protected]>
Next in thread | Raw E-Mail | Index | Archive | Help
What is the correct/preferred way to determine which version to use of any given port ? I went to upgrade my MySQL and it doesn't like my perl version I look and there are so many php versions and perl versions, I get lost on selecting the correct or best one. I pick one version of a port, and then another port tells me that I have the wrong version of the previous port. It's a vicious circle of incompatibility. What is the best method to make the correct selection of any given port? Google shows old advice of seemingly out-dated topics. -- Jim Pazarena [email protected]
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=0+0+archive/2018/freebsd-questions/20180318.freebsd-questions | 2021-09-17T05:18:37 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
Interface Utilization reports reveal measurements for network traffic across one or more network interfaces for each device or device group you specify. These reports help you to visualize current and historic flow of network traffic.
If you specify a device group, the report reveals average rate and total volume of traffic for each device within the group. If you specify a switch, this report returns average rate and traffic volume across each LAN segment —a subscript (n) follows the device name to denote the port number.
Important: Capacity utilization figures are typically calculated using an interface speed value returned from the device. For cases where the "nominal" speed (actual operational speed adjusted for other factors such as available bandwidth) for the interface is different than the value reported back from the device, you can edit this value from the Monitors tab in Device Properties (
). For more information, see the Interface Utilization monitor topic.
To display additional measurements, click a column heading (
) for column selection (
). You can also include:
If a single device is selected for display, you can access the Real-Time Performance Monitor by clicking the
icon or the Traffic Analysis Dashboard by clicking the
icon.
Choose Device.
Choose one or more host devices you want Interface Utilization measurements for.
Choose time constraints. (
,
) Choose times for the Interface Utilization. Utilization metrics for devices that already have the appropriate WMI or SNMP credentials. For more information, see Using Credentials.
Most generated Interface Utilization report data can be printed, shared, and exported when selecting Expand (
) from the Dashboard Options (
) menu. After the report has been expanded, select export (
) to access the following options: | https://docs.ipswitch.com/NM/WhatsUpGold2019/03_Help/1033/81658.htm | 2021-09-17T03:13:43 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.ipswitch.com |
What is Azure SQL Managed Instance?
APPLIES TO:
Azure SQL Managed Instance.
If you're new to Azure SQL Managed Instance, check out the Azure SQL Managed Instance video from our in-depth Azure SQL video series:
Important
For a list of regions where SQL Managed Instance is currently available, see Supported regions.
The following diagram outlines key features of SQL Managed Instance:
Azure SQL Managed Instance is designed for customers looking to migrate a large number of apps from an on-premises or IaaS, self-built, or ISV provided environment to a fully managed PaaS cloud environment, with as low a migration effort as possible. Using the fully automated Azure Data Migration Service, customers can lift and shift their existing SQL Server instance to SQL Managed Instance, which offers compatibility with SQL Server and complete isolation of customer instances with native VNet support. For more information on migration options and tools, see Migration overview: SQL Server to Azure SQL Managed Instance. With Software Assurance, you can exchange your existing licenses for discounted rates on SQL Managed Instance using the Azure Hybrid Benefit for SQL Server. SQL Managed Instance is the best migration destination in the cloud for SQL Server instances that require high security and a rich programmability surface.
Tip
How can we make Azure SQL better? Take the survey.
Key features and capabilities
SQL Managed Instance combines the best features that are available both in Azure SQL Database and the SQL Server database engine.
Important
SQL Managed Instance runs with all of the features of the most recent version of SQL Server, including online operations, automatic plan corrections, and other enterprise performance enhancements. A comparison of the features available is explained in Feature comparison: Azure SQL Managed Instance versus SQL Server.
Important
Azure SQL Managed Instance has been certified against a number of compliance standards. For more information, see the Microsoft Azure Compliance Offerings, where you can find the most current list of SQL Managed Instance compliance certifications, listed under SQL Database.
The key features of SQL Managed Instance are shown in the following table:
vCore-based purchasing model
The vCore-based purchasing model for SQL Managed Instance gives you flexibility, control, transparency, and a straightforward way to translate on-premises workload requirements to the cloud. This model allows you to change compute, memory, and storage based upon your workload needs. The vCore model is also eligible for up to 55 percent savings with the Azure Hybrid Benefit for SQL Server.
In the vCore model, you can choose between generations of hardware.
- Gen4 logical CPUs are based on Intel® E5-2673 v3 (Haswell) 2.4 GHz processors, attached SSD, physical cores, 7-GB RAM per core, and compute sizes between 8 and 24 vCores.
- Gen5 logical CPUs are based on Intel® E5-2673 v4 (Broadwell) 2.3 GHz, Intel® SP-8160 (Skylake), and Intel® 8272CL (Cascade Lake) 2.5 GHz processors, fast NVMe SSD, hyper-threaded logical core, and compute sizes between 4 and 80 cores.
Find more information about the difference between hardware generations in SQL Managed Instance resource limits.
Service tiers
SQL Managed Instance is available in two service tiers:
- General purpose: Designed for applications with typical performance and I/O latency requirements.
- Business critical: Designed for applications with low I/O latency requirements and minimal impact of underlying maintenance operations on the workload.
Both service tiers guarantee 99.99% availability and enable you to independently select storage size and compute capacity. For more information on the high availability architecture of Azure SQL Managed Instance, see High availability and Azure SQL Managed Instance.
General Purpose service tier
The following list describes key characteristics of the General Purpose service tier:
- Designed for the majority of business applications with typical performance requirements
- High-performance Azure Blob storage (8 TB)
- Built-in high availability based on reliable Azure Blob storage and Azure Service Fabric
For more information, see Storage layer in the General Purpose tier and Storage performance best practices and considerations for SQL Managed Instance (General Purpose).
Find more information about the difference between service tiers in SQL Managed Instance resource limits.
Business Critical service tier
The Business Critical service tier is built for applications with high I/O requirements. It offers the differences between service tiers in SQL Managed Instance resource limits.
Management operations
Azure SQL Managed Instance provides management operations that you can use to automatically deploy new managed instances, update instance properties, and delete instances when no longer needed. Detailed explanation of management operations can be found on managed instance management operations overview page.
Advanced security and compliance
SQL Managed Instance comes with advanced security features provided by the Azure platform and the SQL Server database engine.
Security isolation
SQL Managed Instance provides additional security isolation from other tenants on the Azure platform. Security isolation includes:
- Native virtual network implementation and connectivity to your on-premises environment using Azure ExpressRoute or VPN Gateway.
- In a default deployment, the instances in the same subnet, wherever that is allowed by your security requirements, as that will bring you additional benefits. Co-locating instances in the same subnet will significantly simplify networking infrastructure maintenance and reduce instance provisioning time, since a long provisioning duration is associated with the cost of deploying the first managed instance in a subnet.
Security features
Azure SQL Managed Instance provides a set of advanced security features that can be used to protect your data.
- SQL Managed Instance auditing tracks database events and writes them to an audit log file placed in your Azure storage account. Auditing can help you maintain regulatory compliance, understand database activity, and gain insight into discrepancies and anomalies that could indicate business concerns or suspected security violations.
- Data encryption in motion - SQL Managed Instance secures your data by providing encryption for data in motion using Transport Layer Security. In addition to Transport Layer Security, SQL Managed Instance offers protection of sensitive data in flight, at rest, and during query processing with Always Encrypted. Always Encrypted offers. (RLS) enables you to control access to rows in a database table based on the characteristics of the user executing a query (such as by group membership or execution context). SQL proven encryption-at-rest technology in SQL Server that is required by many compliance standards to protect against theft of storage media.
Migration of an encrypted database to SQL Managed Instance is supported via Azure Database Migration Service or native restore. If you plan to migrate an encrypted database using native restore, migration of the existing TDE certificate from the SQL Server instance to SQL Managed Instance is a required step. For more information about migration options, see SQL Server to Azure SQL Managed Instance Guide.
Azure Active Directory integration
SQL Managed Instance supports traditional SQL Server database engine logins and logins integrated with Azure AD. Azure AD server principals (logins) (public preview) are an Azure cloud version of on-premises database logins that you are using in your on-premises environment. Azure AD server principals (logins) enable you to specify users and groups from your Azure AD tenant as true instance-scoped principals, capable of performing any instance-level operation, including cross-database queries within the same managed instance.
A new syntax is introduced to create Azure AD server principals (logins), FROM EXTERNAL PROVIDER. For more information on the syntax, see CREATE LOGIN, and review the Provision an Azure Active Directory administrator for SQL Managed Instance article.
Azure Active Directory integration and multi-factor authentication
SQL Managed Instance enables you to centrally manage identities of database users and other Microsoft services with Azure Active Directory integration. This capability simplifies permission management and enhances security. Azure Active Directory supports multi-factor authentication to increase data and application security while supporting a single sign-on process.
Authentication
SQL Managed Instance authentication refers to how users prove their identity when connecting to the database. SQL Managed Instance a database in Azure SQL Managed Instance, and is controlled by your user account's database role memberships and object-level permissions. SQL Managed Instance has the same authorization capabilities as SQL Server 2017.
Database migration
SQL Managed Instance targets user scenarios with mass database migration from on-premises or IaaS database implementations. SQL Managed Instance supports several database migration options that are discussed in the migration guides. See Migration overview: SQL Server to Azure SQL Managed Instance for more information.
Backup and restore
The migration approach leverages SQL backups to Azure Blob storage. Backups stored in an Azure storage blob can be directly restored into a managed instance using the T-SQL RESTORE command.
- For a quickstart showing how to restore the Wide World Importers - Standard database backup file, see Restore a backup file to a managed instance. This quickstart shows that you have to upload a backup file to Azure Blob storage and secure it using a shared access signature (SAS) key.
- For information about restore from URL, see Native RESTORE from URL.
Important
Backups from a managed instance can only be restored to another managed instance. They cannot be restored to a SQL Server instance or to Azure SQL Database.
Database Migration Service
Azure Database Migration Service is a fully managed service designed to enable seamless migrations from multiple database sources to Azure data platforms with minimal downtime. This service streamlines the tasks required to move existing third-party and SQL Server databases to Azure SQL Database, Azure SQL Managed Instance, and SQL Server on Azure VM. See How to migrate your on-premises database to SQL Managed Instance using Database Migration Service.
SQL features supported
SQL Managed Instance aims to deliver close to 100% surface area compatibility with the latest SQL Server version through a staged release plan. For a features and comparison list, see SQL Managed Instance feature comparison, and for a list of T-SQL differences in SQL Managed Instance versus SQL Server, see SQL Managed Instance T-SQL differences from SQL Server.
SQL Managed Instance supports backward compatibility to SQL Server 2008 databases. Direct migration from SQL Server 2005 database servers is supported, and the compatibility level for migrated SQL Server 2005 databases is updated to SQL Server 2008.
The following diagram outlines surface area compatibility in SQL Managed Instance:
Key differences between SQL Server on-premises and SQL Managed Instance
SQL Managed Instance benefits from being always-up-to-date in the cloud, which means that some features in SQL Server may be obsolete, be retired, or have alternatives. There are specific cases when tools need to recognize that a particular feature works in a slightly different way or that the service is running in an environment you do not fully control.
Some key differences:
- High availability is built in and pre-configured using technology similar to Always On availability groups.
- There are only automated backups and point-in-time restore. Customers can initiate
copy-onlybackups that do not interfere with the automatic backup chain.
- Specifying full physical paths is unsupported, so all corresponding scenarios have to be supported differently: RESTORE DB does not support WITH MOVE, CREATE DB doesn't allow physical paths, BULK INSERT works with Azure blobs only, etc.
- SQL Managed Instance supports Azure AD authentication as a cloud alternative to Windows authentication.
- SQL Managed Instance automatically manages XTP filegroups and files for databases containing In-Memory OLTP objects.
- SQL Managed Instance supports SQL Server Integration Services (SSIS) and can host an SSIS catalog (SSISDB) that stores SSIS packages, but they are executed on a managed Azure-SSIS Integration Runtime (IR) in Azure Data Factory. See Create Azure-SSIS IR in Data Factory. To compare the SSIS features, see Compare SQL Database to SQL Managed Instance.
Administration features
SQL Managed Instance enables system administrators to spend less time on administrative tasks because the service either performs them for you or greatly simplifies those tasks. For example, OS/RDBMS installation and patching, dynamic instance resizing and configuration, backups, database replication (including system databases), high availability configuration, and configuration of health and performance monitoring data streams.
For more information, see a list of supported and unsupported SQL Managed Instance features, and T-SQL differences between SQL Managed Instance and SQL Server.
Programmatically identify a managed instance
The following table shows several properties, accessible through Transact-SQL, that you can use to detect that your application is working with SQL Managed Instance and retrieve important properties.
Next steps
- To learn how to create your first managed instance, see Quickstart guide.
- For a features and comparison list, see Database pricing. | https://docs.microsoft.com/en-us/azure/azure-sql/managed-instance/sql-managed-instance-paas-overview?WT.mc_id=AZ-MVP-5003408 | 2021-09-17T05:07:09 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['media/sql-managed-instance-paas-overview/key-features.png',
'Key features'], dtype=object)
array(['media/sql-managed-instance-paas-overview/application-deployment-topologies.png',
'High availability'], dtype=object)
array(['media/sql-managed-instance-paas-overview/migration.png',
'surface area compatibility'], dtype=object) ] | docs.microsoft.com |
Range
Attribute. Parse Limits InInvariant Culture Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
public: property bool ParseLimitsInInvariantCulture { bool get(); void set(bool value); };
public bool ParseLimitsInInvariantCulture { get; set; }
member this.ParseLimitsInInvariantCulture : bool with get, set
Public Property ParseLimitsInInvariantCulture As Boolean | https://docs.microsoft.com/en-us/dotnet/api/system.componentmodel.dataannotations.rangeattribute.parselimitsininvariantculture?view=net-5.0 | 2021-09-17T05:21:46 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.microsoft.com |
Aggregation Pipeline Limits¶
Aggregation operations with the
aggregate command have the
following limitations.
Result Size Restrictions¶
The
aggregate command can either return a cursor or store
the results in a collection. Each document in the result set is subject
to the 16 megabyte BSON Document Size limit. If any single document exceeds the BSON Document Size
limit, the aggregation produces an error. The
limit only applies to the returned documents. During the pipeline
processing, the documents may exceed this size. The
db.collection.aggregate() method returns a cursor by default.
Number of Stages Restrictions¶
Changed in version 5.0: MongoDB 5.0 limits the number of aggregation pipeline stages allowed in a single pipeline to 1000.
Memory Restrictions¶ aggregation stage is not restricted to
100 megabytes of RAM because it runs in a separate process.
Examples of stages that can spill to disk when allowDiskUse is
true are:
pipeline stages exceed
the limit, consider adding a $limit stage.
Starting in MongoDB 4.2, the profiler log messages and diagnostic log
messages includes a
usedDisk
indicator if any aggregation stage wrote data to temporary files due
to memory restrictions. | https://docs.mongodb.com/v5.0/core/aggregation-pipeline-limits/ | 2021-09-17T04:52:24 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.mongodb.com |
Frequently Asked Questions
General
What is Paper?
Paper is a fork of the Spigot server implementation (which is itself a fork of CraftBukkit). Paper strives to bring improved performance, more features, and more APIs for developers to build awesome plugins with.
What do I need to run it?
Paper requires the Java Runtime Environment to run. Specifically, it requires at least Java version 16. Once that is installed you’re all good to go! If you don’t already have a Java 16 Runtime, it’s easy to download and install.
See our docs on starting out: Getting Started
Where do I get it?
Builds of Paper are already available on our site’s download page.
Alternatively, for more automated access, builds are available via a RESTful Downloads API
Server Administrators
What can I expect from switching to Paper?
When migrating your CraftBukkit or Spigot server to Paper, it is not uncommon to see a noticeable performance improvement.
Note
Though you may see an improvement, Paper is not a silver bullet. Ultimately, you are responsible for the performance of your server, good or bad, on any platform. Tailoring your server to best fit your players and gamemodes is ultimately the key to great performance.
Your plugins and worlds will not be changed and both should work just fine after the change.
Will players be able to tell?
That depends. Your players may see a benefit to gameplay because of the performance improvement, assuming you see one. On a properly maintained server, your players may not even be able to tell the difference.
Can I run Bukkit plugins on Paper?
Yep! You absolutely can. Paper takes care to maintain compatibility with Bukkit plugins made by the community.
Can I run Spigot plugins on Paper too?
Yes you can! We don’t like to break things most of the time. Sometimes there are plugin authors who do, but we can usually make things work.
Is there anywhere to get plugins for Paper?
Many plugins that work with, and are made for, Paper are available on the forum’s resource section. Sometimes you’ll also see them elsewhere, you just have to keep your eyes open.
Does Paper support Forge Mods?
No, Paper does not support Forge mods of any kind. While there have been attempts to merge the Forge and Bukkit platforms in the past, it has never been a wonderful experience for developers or administrators.
If this is something you’re after, we’d point you towards the Sponge Project instead.
Developers
What can I do with Paper?
Paper provides additional APIs ontop of Bukkit, exposing new vanilla elements and even some of its own for you to play with.
Does Paper make any breaking changes to the API?
Fortunately, Paper does not make breaking API changes so it can maintain plugin compatibility with upstream Spigot and CraftBukkit. At the same time, this means we are also sometimes limited with what we can do and how we can do it.
It’s a double-edged sword. | https://paper.readthedocs.io/en/latest/about/faq.html | 2021-09-17T03:22:55 | CC-MAIN-2021-39 | 1631780054023.35 | [] | paper.readthedocs.io |
Deploy API to Choreo Connect with Virtual Hosts¶
There are two ways to add an API to Choreo Connect. For more info refer
Info
Before you begin
This guide assumes that you already have a Choreo Connect instance that is up and running. If not, refer to the Quick Start Guide on how to install and run Choreo Connect. To learn more about Choreo Connect, have a look at the Overview of Choreo Connect.
Via API Manager¶
Step 1 - Define Virtual Hosts¶
Let's define virtual hosts (VHosts) in API Manager server instance by editing the
deployment.toml.
Info
Refer Define Custom Hostnames for more information.
- Open
<APIM-HOME>/repository/conf/deployment.tomlfile.
- Add the following config under the Default
[[apim.gateway.environment]]to define the VHost
us.wso2.com.
[[apim.gateway.environment.virtual_host]] ws_endpoint = "ws://us.wso2.com:9099" wss_endpoint = "wss://us.wso2.com:8099" http_endpoint = "" https_endpoint = "" websub_event_receiver_http_endpoint = "" websub_event_receiver_https_endpoint = ""
Step 2 - Configure Choreo Connect with API Manager¶
Refer to documentation on how to configure Choreo Connect with API Manager.
Step 3 - Create an API in API Manager¶
Step 4 - Deploy the API in API Manager¶
The guide here will explain how you can easily deploy the API you just created.
When deploying the API, select the Virtual Host you defined earlier (i.e.
us.wso2.com).
You have successfully deployed the API to Choreo Connect with the VHost
us.wso2.com.
To invoke the API, skip to the steps here.
Via the API Controller (apictl)¶
Follow the all steps except Deploy API in the (Deploy an API via apictl documentation)[].
Before deploying the API project, edit the file
deployment_environments.yaml.
type: deployment_environments version: v4.0.0 data: - displayOnDevportal: true deploymentEnvironment: Default deploymentVhost: us.wso2.com
Info
When configuring multiple Choreo Connect Gateway environments, you have to configure the default VHost of the particular environment.
# default vhosts mapping for standalone mode [[adapter.vhostMapping]] environment = <ENVIRONMENT_NAME> vhost = <DEFAULT_VHOST_OF_ENVIRONMENT>
# default vhosts mapping for standalone mode [[adapter.vhostMapping]] environment = "Default" vhost = "localhost" [[adapter.vhostMapping]] environment = "sg-region" vhost = "sg.wso2.com"
If the VHost is not declared in the API project, the API is deployed with the default VHost of the environment.
For exammple, an API project with following
deployment_environments.yaml will be deployed to the following.
- Environment: Default - Vhost: us.wso2.com
- Environment: sg-region - Vhost: sg.wso2.com
- Environment: uk-region - API is not deployed to this environment becuase it is not configured in
[[adapter.vhostMapping]].
type: deployment_environments version: v4.0.0 data: - displayOnDevportal: true deploymentEnvironment: Default deploymentVhost: us.wso2.com - displayOnDevportal: true deploymentEnvironment: sg-region - displayOnDevportal: true deploymentEnvironment: uk-region
Let's invoke the API.
Invoke the API¶
First we need to add the host entry to
/etc/hosts file in order to access the API Manager publisher and dev portal.
Add the following entry to
/etc/hosts file
127.0.0.1 us.wso2.com
After the APIs are exposed via Choreo Connect, you can invoke an API with a valid token (JWT or opaque access token).
Let's use the following command to generate a JWT to access the API, and set it to the variable
TOKEN.
TOKEN=$(curl -X POST "" -H "Authorization: Basic YWRtaW46YWRtaW4=" -k -v)
Execute the following cURL command to Invoke the API using the JWT.
curl -X GET "" -H "accept: application/xml" -H "Authorization:Bearer $TOKEN" -k
Note
You can also use header to specify the VHost to invoke the API.
curl -X GET "" \ -H "Host: us.wso2.com" -H "accept: application/xml" \ -H "Authorization:Bearer $TOKEN" -k | https://apim.docs.wso2.com/en/latest/deploy-and-publish/deploy-on-gateway/choreo-connect/deploy-api/deploy-api-with-virtual-hosts/ | 2021-09-17T03:36:22 | CC-MAIN-2021-39 | 1631780054023.35 | [] | apim.docs.wso2.com |
Monitor historical trends across a site
The Trends view accesses historical trend information of each site for the following parameters:
- sessions
- connection failures
- machine failures
- logon performance
- load evaluation
- capacity management
- machine usage
- resource utilization
To locate this information, click the Trends menu.
The zoom-in drill down feature lets you navigate through trend charts by zooming in on a time period (clicking.
Note:
-.
- Citrix Virtual Apps and Desktops service supports historical data retention only for 90 days. Hence, one-year trends and reports in Monitor show the last 90 days of data. occurs.
The auto reconnect information helps you view and troubleshoot network connections having interruptions, and to VDAs 1906 or later.
For more information about session reconnections, see Sessions. For more information about policies, see Auto client reconnect policy settings and Session reliability policy settings.
Sometimes, the auto reconnect data might not appear in Monitor for the following reasons:
Workspace app is not sending auto reconnect data to VDA.
VDA is not sending data to monitor service.
Note:
Sometimes, the client IP address might not be obtained correctly if certain Citrix Gateway policies are set.
View trends for connection failures: From the Failures tab, select the connection, machine type, failure type, delivery group, and time period to view a graph containing more detailed information about the user connection failures across your site.
View trends for machine failures: From the Single session OS Machine Failures tab or Multi-session Multi-session OS machines. The filter options for this graph include the delivery group or Multi-session OS machine in a delivery group, Multi-session OS machine (available only if Multi-session OS machine in a delivery group was selected), and range. The Load Evaluator Index is displayed as percentages of Total CPU, Memory, Disk or Sessions and is shown in comparison with the number of connected users in the last interval.
View hosted applications usage:. You can see the predicted peak concurrent application instances values chosen future time period with Application instance prediction. For more information, see the Application instance prediction section.
View single and multi-session OS usage: The Trends view shows the usage of Single session OS by site and by delivery group. When you select site, usage is shown per delivery group. When you select delivery group, usage is shown per User. The Trends view also shows the usage of Multi-session Single session OS Machines or Multi-session OS Machines to obtain a real-time view of your VM usage. The page displays the number of Autoscale enabled Multi-session and Single session OS machines that are powered on for a selected delivery group and time period. Also available is the estimated savings achieved by enabling Autoscale in the selected delivery group, this percentage is calculated using the per machine costs.
The usage trends of Autoscale enabled machines indicate the actual usage of the machines, enabling you to quickly assess your site’s capacity needs.
- Single session OS availability - displays the current state of Single session OS machines (VDIs) by availability for the entire site or a specific delivery group.
- Multi-session OS availability - displays the current state of Multi-session OS machines by availability for the entire site or a specific delivery group.
Note:
The grid below the chart displays the delivery group based machine usage data in real-time. The data includes machine availability of all machines independent of Autoscale enablement. The number of machines displayed in the Available Counter column in the grid includes machines in maintenance mode.
The monitoring data consolidation depends on the time period you select.
- Monitoring data for the one day and one week time periods is consolidated per hour.
- Monitoring data for the one month time period is consolidated per day.
The machine status is read at the time of consolidation and any changes during the period in between is not considered. For the consolidation period, refer to the Monitor API documentation.
For more information on monitoring Autoscale enabled machines see the Autoscale article.
View resource utilization: From the Resource Utilization tab, select Single session OS Machines or Multi-session OS Machines to obtain insight into historical trends data for CPU and memory usage, and IOPS and disk latency for each VDI machine for better capacity planning. This feature requires.
Note:
- application failures: The Application Failures tab displays failures associated with the published applications on the VDAs.
This feature requires VDAs version 7.15 or later. Single session OS VDAs running Windows Vista and later, and Multi-session OS VDAs running Windows Server 2008 and later are supported. For more information, see Historical application failure monitoring.
By default, only application faults from Multi-session OS VDAs are displayed. You can set the monitoring of application failures by using Monitoring policies. For more information, see Monitoring policy settings.
View application probe results: The Application Probe Results tab displays the results of probe for applications that have been configured for probing in the Configuration page. Here, the stage of launch during which the application launch failure occurred is recorded.
This feature requires VDAs version 7.18 or later. For more information see Application probing.
Create customized reports: The Custom Reports tab provides a user interface for generating Custom Reports containing real-time and historical data from the Monitoring database in tabular format.
From the list of previously saved Custom Report queries, you can click Run and download to export the report in CSV format, click Copy OData to copy and share the corresponding OData query, or click Edit to edit the query. You can create a Custom Report query based on machines, connections, sessions, or application instances. Specify filter conditions based on fields such as machine, delivery group, or time period. Specify extra columns required in your Custom Report. Preview displays a sample of the report data. Saving the Custom Report query adds it to the list of saved queries.
You can create a Custom Report query based on a copied OData query. To do this, select the OData Query option and paste the copied OData query. You can save the resultant query for execution later.:
- HDX connection logon data is not collected for VDAs earlier than 7. For earlier VDAs, the chart data is displayed as 0.
- Delivery groups deleted in Citrix Studio are available for selection in the., 1 either the available data does not follow a regular pattern or
- 84 days data for one year’s prediction
Note:
You can export only the historical graph, but not the predicted graph.
| https://docs.citrix.com/en-us/citrix-virtual-apps-desktops-service/monitor/site-analytics/trends.html | 2021-09-17T03:48:44 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['/en-us/citrix-virtual-apps-desktops-service/media/director-app-prediction1.png',
'App prediction image'], dtype=object)
array(['/en-us/citrix-virtual-apps-desktops-service/media/director-app-prediction2.png',
'App prediction image'], dtype=object) ] | docs.citrix.com |
Release Notes - 2.5.0
#Release Date (2020-10-28)
#Breaking Changes 💥
N/A
#New Features 🚀
- Add ability to check whether an email is registered (Falcon BigCommerce API/Falcon Shop Data/Demo v2)
- Add checking of Types (yarn check-types) (Falcon Client)
- Add HTTPS option (localhost) (Falcon Client)
- Add IOC container (Falcon Server)
- Add dynamically generated negative spacings for all positive spacings (Falcon UI)
#Bug Fixes 🐛
- Fix sidebar horizontal scrolling bug (Demo v2)
- Fix copying empty
viewsdirectory (Falcon Client)
- Fix initial language configuration (Falcon Client)
- fixed part of minor types issues (Falcon Client)
#Polish 💅
- Add a fixed add to cart bottom bar in the product overview page (Demo v2)
- Added on mobile the ability for the header to transform to a fixed bottom navigation menu (Demo v2)
- Refactor: Update version of
@types/reactand
react-router-dom(Falcon Client)
- Refactor: Changed config of TypeScript from js to
tsconfig.json(Falcon Dev Tools) | https://docs.deity.io/docs/platform/release/2-5-0/ | 2021-09-17T04:55:27 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.deity.io |
Date: Mon, 16 Nov 2009 06:39:53 +0000 From: Matthew Seaman <[email protected]> To: [email protected] Cc: "Ronald F. Guilmette" <[email protected]> Subject: Re: Bad Blocks... Should I RMA? Message-ID: <[email protected]> In-Reply-To: <[email protected]> References: <[email protected]> <[email protected]>
Next in thread | Previous in thread | Raw E-Mail | Index | Archive | Help
This is an OpenPGP/MIME signed message (RFC 2440 and 3156) --------------enig1C3AA9122AA101440FF12FCC Content-Type: text/plain; charset=ISO-8859-1; format=flowed Content-Transfer-Encoding: quoted-printable Lowell Gilbert wrote: > "Ronald F. Guilmette" <[email protected]> writes: >=20 >> Nov 15 15:24:17 coredump kernel: ad4: FAILURE - READ_DMA status=3D51<R= EADY,DSC,ERROR> error=3D40<UNCORRECTABLE> LBA=3D256230591 >=20 > subsequentl= y perform=20 perfectly well[*]. Beyond running the manufacturers diagnostics, as the OP has said he has = nothing particularly valuable on the drive, it might be worth running a f= ew passes of dban or similar on the disk --- this will overwrite every part of the platter = and should make it abundantly clear if there is a real and persistent problem= =2E If you can't afford to scrub the disk, then just keep it under observation: = if the problems recur within a few weeks then yes, definitely RMA that drive. Cheers, Matthew =20 [*] If the error messages have disappeared since, then this has probably already happened. --=20 Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard Flat 3 PGP: Ramsgate Kent, CT11 9PW --------------enig1C3AA9122AA101440FF12FCC Content-Type: application/pgp-signature; name="signature.asc" Content-Description: OpenPGP digital signature Content-Disposition: attachment; filename="signature.asc" -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.13 (FreeBSD) Comment: Using GnuPG with Mozilla - iEYEAREIAAYFAksA874ACgkQ8Mjk52CukIxukQCaA1i9VJB5FZf3ETbcPUv+V9jo Hg8An25YY0zm+wnuTpt+6bWGjjsrMpC4 =gAus -----END PGP SIGNATURE----- --------------enig1C3AA9122AA101440FF12FCC--
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=190908+0+archive/2009/freebsd-questions/20091122.freebsd-questions | 2021-09-17T03:54:49 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.freebsd.org |
DigitalSuite Studio Monitor Modules
The Monitor category of DigitalSuite Studio modules allows you to monitor DigitalSuite activity.
Monitor your organization's platform Usage using the Usage Report module.
Monitor the processing of Messages for triggering processes and composite APIs.
Monitor the execution of Schedules defined for processes and composite APIs.
In addition to the three modules, you use the Process Console as monitoring tool for process instances.
Please give details of the problem | https://docs.runmyprocess.com/Components/DigitalSuite_Studio/Monitor/ | 2021-09-17T03:03:52 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['/images/DigitalSuite_Studio/Monitor/Monitor_Wheel.png',
'Navigation Wheel'], dtype=object) ] | docs.runmyprocess.com |
Qrack Performance¶
Abstract¶
The Qrack quantum simulator is an open-source C++ high performance, general purpose simulation supporting arbitrary numbers of entangled qubits. While there are a variety of other quantum simulators such as [QSharp], [QHiPSTER], and others listed on [Quantiki], Qrack represents a unique offering suitable for applications across the field.
A selection of performance tests are identified for creating comparisons between various quantum simulators. These metrics are implemented and analyzed for Qrack. These experimentally derived results compare favorably against theoretical boundaries, and out-perform naive implementations for many scenarios.
Introduction¶
There are a growing number of quantum simulators available for research and industry use. Many of them perform quite well for smaller number of qubits, and are suitable for non-rigorous experimental explorations. Fewer projects are suitable as “high performance” candidates in the >32 qubit range. Many rely on the common approach often described as the “Schrödinger method,” doubling RAM usage by a factor of 2 per fully interoperable qubit, or else Feynman path integrals, which can become intractible at arbitrary circuit depth. Attempting to build on the work of IBM’s Breaking the 49-Qubit Barrier in the Simulation of Quantum Circuits [Pednault2017] paper, with more recent attention to potential improvements inspired by Gottesman-Knill stabilizer simulators, Qrack can execute surprisingly general circuits past 32 qubits in width on modest single nodes.
Qrack is an open-source quantum computer simulator option, implemented in C++, supporting integration into other popular compilers and interfaces, suitable for utilization in a wide variety of projects. As such, it is an ideal test-bed for establishing a set of benchmarks useful for comparing performance between various quantum simulators.
Qrack provides a “QEngineCPU” and a “QEngineOCL” that represent non-OpenCL and OpenCL base implementations for Schrödinger method simulation. “QHybrid” switches off between these two types internally for best performance at low qubit widths. “QStabilizerHybrid” switches off internally between Gottesman-Knill “stabilizer” simulation and Schrödinger method. For general use cases, the “QUnit” layer provides explicit Schmidt decomposition on top of another engine type (per [Pednault2017]). “QPager” segments a Schrödinger method simulation into equally sized “pages” that can be run on multiple OpenCL devices or multiple maximum allocation segments of a single device, increasing greatest maximally entangled width. A “QEngine” type is always the base layer, and QUnit, QStabilizerHybrid, and QPager types may be layered over these, and over each other.
This version of the Qrack benchmarks contains comparisons against other publicly available simulators, specifically QCGPU, and Qiskit (each with its default simulator, if multiple were available). Qrack has been incorporated as an optional back end for ProjectQ and plugin for Qiskit, in repositories maintained by the developers of Qrack, and benchmarks for their performance will follow.
Reader Guidance¶
This document is largely targeted towards readers looking for a quantum simulator that desire to establish the expected bounds for various use-cases prior to implementation.
Disclaimers¶
- Your Mileage May Vary - Any performance metrics here are the result of experiments executed with selected compilation and execution parameters on a system with a degree of variability; execute the supplied benchmarks on the desired target system for accurate performance assessments.
- Benchmarking is Hard - While we’ve attempted to perform clean and accurate results, bugs and mistakes do occur. If flaws in process are identified, please let us know!
Method¶
This performance document is meant to be a simple, to-the-point, and preliminary digest of these results. We plan to submit a formal academic report for peer review of these results, in full detail, as soon as we collect sufficient feedback on the preprint. (The originally planned date of submission was in February of 2020, but it seems that COVID-19 has hindered our ability to seek preliminary feedback.) These results were prepared with the generous financial support of the Unitary Fund. However, we offer that our benchmark code is public, largely self-explanatory, and easily reproducible, while we prepare that report. Hence, we release these partial preliminary results now.
100 timed trials of single and parallel gates were run for each qubit count between 4 and 28 qubits. Three tests were performed: the quantum Fourier transform, (“QFT”), random circuits constructed from a universal gate set, and an idealized approximation of Google’s Sycamore chip benchmark, as per [Sycamore]. The benchmarking code is available at. Default build and runtime options were used for all candidates. Notably, this means Qrack ran at single floating point accuracy whereas QCGPU and Qiskit ran at double floating point accuracy.
Among AWS virtual machine instances, we sought to find those systems with the lowest possible cost to run the benchmarks for their respective execution times, at or below for the 28 qubit mark. An AWS g4dn.2xlarge running Ubuntu Server 20.04LTS was selected for GPU benchmarks. Benchmarks were collected from March 4, 2021 through March 7, 2021. These results were combined with single gate, N-width gate benchmarks for Qrack, collected overnight from December 19th, 2018 into the morning of December 20th. (The potential difference since December 2018 in these particular Qrack tests reused from then should be insignificant. We took care to try to report fair tests, within cost limitations, but please let us know if you find anything that appears misrepresentative.)
Comparative benchmarks included QCGPU, the Qiskit-Aer GPU simulator, and Qrack’s default typically optimal “stack” of a “QUnit” layer on top of “QStabilizerHybrid,” on top of “QPager,” on top “QHybrid.” All of these candidates are GPU-based. CPU-based Cirq was considered for presentation here, but a nonexhaustive experiment on AWS CPU instances advertised as low cost-for-performance failed to return Cirq results on rough order of cost-for-performance of any GPU candidate. However, the author does not feel comfortable concluding on this basis that CPU-based simulation cost could not be made competitve with GPU-based simulation, hence we omit Cirq from our graphs to avoid potential misrepresentation.
QFT benchmarks could be implemented in a straightforward manner on all simulators, and were run as such. Qrack appears to be the only candidate considered for which inputs into the QFT can (drastically) affect its execution time, with permutation basis states being run in much shorter time, for example, hence only Qrack required a more general random input, whereas all other simulators were started in the |0> state. For a sufficiently representatively general test, Qrack instead used registers of single separable qubits intialized with uniformly randomly distributed probability between |0> and |1>, and uniformly randomly distributed phase offset between those states.
Random universal circuits carried out layers of single qubit gates on every qubit in the width of the test, followed by layers randomly selected couplings of (2-qubit) CNOT, CZ, and SWAP, or (3-qubit) CCNOT, eliminating each selected bit for the layer. 20 layers of 1-qubit-plus-multi-qubit iterations were carried out, for each qubit width, for the benchmarks presented here.
Sycamore circuits were carried out similarly to random universal circuits and the method of the [Sycamore] paper, interleaving 1-qubit followed by 2-qubit layers, to depth of 20 layers each. Whereas as that original source appears to have randomly fixed its target circuit ahead of any trials, and then carried the same pre-selected circuit out repeatedly for the required number of trials, all benchmarks in the case of this report generated their circuits per-iteration on-the-fly, per the selection criteria as read from the text of [Sycamore]. Qrack easily implemented the original Sycamore circuit exactly. By nature of the Schrödinger method simulation used in each other candidate, atomic “convenience method” 1-qubit and 2-qubit gate definitions could potentially easily be added to other candidates for this test, hence we thought it most representative to make largely performance-irrelevant substitutions of “SWAP” for “iSWAP” for those candidates which did not already define sufficient API convenience methods for “Sycamore” circuits, without nonrepresentatively complicated gate decompositions. We strongly encourage the reader to inspect and independently execute the simple benchmarking code which was already linked in the beginning of this “Method” section, for total specific detail.
Qrack QEngine type heap usage was established as very closely matching theoretical expections, in earlier benchmarks, and this has not fundamentally changed. QUnit type heap usage varies greatly dependent on use case, though not in significant excess of QEngine types. No representative RAM benchmarks have been established for QUnit types, yet. QEngine Heap profiling was carried out with Valgrind Massif. Heap sampling was limited but ultimately sufficient to show statistical confidence.
Results¶
We observed extremely close correspondence with Schrödinger method theoretical complexity and RAM usage considerations for the behavior of QEngine types. QEngineCPU and QEngineOCL require exponential time for a single gate on a coherent unit of N qubits. QUnit types with explicitly separated subsystems as per [Pednault2017] show constant time requirements for the same single gate.
QEngineCPU and QEngineOCL can perform many identical gates in parallel across entangled subsystems for an approximately constant costs, when total qubits in the engine are held fixed as breadth of the parallel gate application is varied. To test this, we can apply parallel gates at once across the full width of a coherent array of qubits. (CNOT is a two bit gate, so \((N-1)/2\) gates are applied to odd numbers of qubits.) Notice in these next graphs how QEngineCPU and QEngineOCL have similar scaling cost as the single gate graphs above, while QUnit types show a linear trend (appearing logarithmic on an exponential axis scale):
Heap sampling supports theoretical expecations to high confidence. Complex numbers are represented as 2 single (32-bit) or 2 double (64-bit) accuracy floating point types, for real and imaginary components. The use of double or single precision is controlled by a compilation flag. There is one complex number per permutation in a separable subsystem of qubits. QUnit explicitly separates subsystems, while QEngine maintains complex amplitudes for all \(2^N\) permutations of \(N\) qubits. QEngines duplicate their state vectors once during many gates, like arithmetic gates, for speed and simplicity where it eases implementation.
QUnit explicitly separates its representation of the quantum state and may operate with much less RAM, but QEngine’s RAM usage represents approximately the worst case for QUnit, of maximal entanglement. OpenCL engine types attempt to use memory on the accelerator device instead of general heap when a QEngineOCL instance can fit a single copy of its state vector in a single allocation on the device. On many modern devices, state vectors up to about 1GB in size can be allocated directly on the accelerator device instead of using general heap. “Paging” with QPager allows multiple such maximum allocation segments to be used for the same single simulation. If the normalization option is on, an auxiliary buffer is allocated for normalization that is half the size of the state vector.
The “quantum” (or “discrete”) Fourier transform (QFT/DFT) is a realistic and important test case for its direct application in day-to-day industrial computing applications, as well as for being a common processing step in many quantum algorithms.
By the 28 qubit level, and at very low qubit widths, Qrack out-performs QCGPU and Qiskit. (Recall that Qrack uses a representatively “hard” initialization on this test, as described above, whereas permutation basis eigenstate inputs, for example, are much more quickly executed.) Qrack is the only candidate tested which exhibits special case performance on the QFT, as for random permutation basis eigenstate initialization, or initialization via permutation basis eigenstates with random “H” gates applied, before QFT.
Similarly, on random universal circuits, defined above and in the benchmark repository, Qrack leads over all other candidates at the high qubit width end.
For “Sycamore” circuits, argued by other authors to establish “quantum supremacy” of native quantum hardware, all simulators tested maintain their general performance trends, as above.
To test new capabilities of the “QPager” layer, a slightly different random universal circuit provided in the Qrack benchmark suite was run on a g4dn.12xlarge with 4 NVIDIA Tesla T4 GPUs, to the maximum qubit width possible, which was 34 qubits. The random gate set selected from is {CCZ, CCNOT, CZ, CNOT} and {H, X, Z} for multi- and single qubit gates.
With the recently improved QPager layer, it is often possible to achieve a 2 qubit greater maximum width on the same GPU hardware as a result of using all 4 maximum allocation segments typical of NVIDIA GPUs. QPager combines “pages” of maximum allocation segment on an OpenCL device, which are typically of a much smaller size than the overall RAM of the GPU. Proceeding to higher factors of 2 times page count, it becomes possible to use general RAM heap without exceeding maximum allocation according to the OpenCL standard, as is demonstrated in the graph above. The threshold to cross from single GPU into multi-GPU is 31 qubits, using 2 GPUs at that level, and the threshold for general heap usage is likely crossed at 33 qubits, using the maximum VRAM of 4 NVIDIA T4 GPUs at 32 qubits.
Discussion¶
Qrack::QUnit succeeds as a novel and fundamentally improved quantum simulation algorithm, over the naive Schrödinger algorithm in special cases. Primarily, QUnit does this by representing its state vector in terms of decomposed subsystems, as well as buffering and commuting Pauli X and Y basis transformations and singly-controlled gates. On user and internal probability checks, QUnit will attempt to separate the representations of independent subsystems by Schmidt decomposition. Further, Qrack will avoid applying phase effects that make no difference to the expectation values of any Hermitian operators, (no difference to “physical observables”). For each bit whose representation is separated this way, we recover a factor of close to or exactly 1/2 the subsystem RAM and gate execution time.
Qrack::QPager, recently, gives several major advantages with or without a Qrack::QUnit layer on top. It usually allows 2 greater maximum qubit width allocation on the same 4-segment GPU RAM store, and it performs surprisingly well for execution speed at high qubit widths. It can also utilize larger system general RAM heap stores than what is available just as GPU RAM.
Qrack has seemingly poor mid-range qubit width performance on the selected g4dn.2xlarge instance, (or, alternatively, good performance at very narrow and very wide ends of the scale, which is not maintained in middle range). As the g4dn.2xlarge only provides 8 “vCPU” units, which is far smaller than a typical PC CPU, mid-range performance might be alleviated somewhat by a more powerful CPU alongside GPU resources. Further, the use of the QPager layer under QUnit might incur a performance penalty at widths too wide for QHybrid optimization with CPU simulation, but too narrow to see returns from the complexity of QPager. While it might be disappointing that the default “layer stack” for Qrack does not perform best across all qubit widths on the selected AWS EC2 instance, good performance at the very wide and very narrow ends of the scale likely still motivates the adoption of Qrack for HPC and PC simulation.
Further Work¶
A formal report of the above and additional benchmark results, in much greater detail and specificity, is planned to be submitted for publication as soon as sufficient preliminary peer opinion can be collected on the preprint, in early to mid 2021, thanks to the generous support of the Unitary Fund.
We will maintain systematic comparisons to published benchmarks of quantum computer simulation standard libraries, as they arise.
Conclusion¶
Per [Pednault2017], and many other attendant and synergistic optimizations engineered specifically in Qrack’s QUnit, explicitly separated subsystems of qubits in QUnit have a significant RAM and speed edge in many cases over the Schrödinger algorithm of most popular quantum computer simulators. With QPager, it is possible to achieve even higher qubit widths and execution speeds. Qrack gives very efficient performance on a single node past 32 qubits, up to the limit of maximal entanglement. | https://vm6502q.readthedocs.io/en/latest/performance.html | 2021-09-17T03:50:21 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['_images/x_single.png', '_images/x_single.png'], dtype=object)
array(['_images/cnot_single.png', '_images/cnot_single.png'], dtype=object)
array(['_images/x_all.png', '_images/x_all.png'], dtype=object)
array(['_images/cnot_all.png', '_images/cnot_all.png'], dtype=object)
array(['_images/qrack_ram.png', '_images/qrack_ram.png'], dtype=object)
array(['_images/qft.png', '_images/qft.png'], dtype=object)
array(['_images/qft_optimization.png', '_images/qft_optimization.png'],
dtype=object)
array(['_images/random_universal.png', '_images/random_universal.png'],
dtype=object)
array(['_images/sycamore.png', '_images/sycamore.png'], dtype=object)
array(['_images/test_ccz_ccx_h_x4.png', '_images/test_ccz_ccx_h_x4.png'],
dtype=object) ] | vm6502q.readthedocs.io |
Theme Sections and description (body), you can insert media (image) also if you want.
The view at homepage:
- Funfact: Just choose the icon, add number and text
3. Skills:
4. Features:
Create Team¶
Continue, you choose Team Member > Add new. Here you can add your team members with their detail information.
- Name, Avatar, Title/Position, Small Introduction
- Social Contact
Final view:
Create Clients¶
From the left-hand side of the Dashboard, select Client > Add New
- Enter client's name (it is not displayed).
- Add client's url
- Upload logo
Create Services¶
You also go to Service in the Dashboard and Add new
Create Pricing Table¶
Add pricing table with details by going to Pricing table > Add new
Interface setting
Display home page
Create Testimonials¶
Just go to Testimonial > Add new and name the Title, upload photo and add quote
Interface setting
Display home page
Create Work / Portfolio¶
To add your portfolio, just go to Portfolio > Add new and set the properties.
- featured image, which will display at home page And this is the final view:
| https://docs.awethemes.com/viska/theme-sections.html | 2021-09-17T03:08:21 | CC-MAIN-2021-39 | 1631780054023.35 | [array(['images/about2.png', None], dtype=object)
array(['images/funfact1.png', None], dtype=object)
array(['images/skill.png', None], dtype=object)
array(['images/feature.png', None], dtype=object)
array(['images/feature1.png', None], dtype=object)
array(['images/settingteamname.png', None], dtype=object)
array(['images/team.png', None], dtype=object)
array(['images/client.png', None], dtype=object)
array(['images/settingservice.png', None], dtype=object)
array(['images/service.png', None], dtype=object)
array(['images/pricing.png', None], dtype=object)
array(['images/testimonial.png', None], dtype=object)
array(['images/work.png', None], dtype=object)
array(['images/work1.png', None], dtype=object)
array(['images/work2.png', None], dtype=object)] | docs.awethemes.com |
Problem:
You started to configure your List Rotator but realized you didn’t have a specific view created for the columns you want to display. You created the new view and then went back to complete the configuration of the web part. Your new view isn’t shown in the Select View drop down selection list. What’s wrong?
Resolution:
Sometimes you need to re-load the list in order for new views to be recognized. To do this, select a different list in the Select List drop down selection list. Once the views load for that list, switch back to your desired list. When the views load for your desired list, you should see all that exist, including the one you just created. | https://docs.bamboosolutions.com/document/my_new_list_view_isnt_shown_in_the_web_part_tool_pane_selection_list/ | 2021-09-17T03:42:44 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.bamboosolutions.com |
Specialty Genes¶
Specialty Genes refers to genes that are of particular interest to infectious disease researchers, such as virulence factors, antibiotic resistance genes, drug targets, and human homologs. For each class, reference genes are collected from reputed external databases or manually curated by the PATRIC team and then mapped to their homologs based on sequence similarity using BLASTP.
We also provide a data summary targeted specifically to Antimicrobial Resistance (AMR).
Antibiotic Resistance¶
Antibiotic Resistance refers to the ability of bacteria to develop resistance to antibiotics through gene mutation or acquisition of antibiotic resistance genes. We have integrated and mapped known antibiotic resistance genes from the following sources:
Drug Targets¶
Drug Targets refer to the proteins being targeted by known/approved/experimental small molecule drugs. We have integrated and mapped such drug targets from the following sources:
Essential Genes¶
Essential genes refer to the genes that are critical for the organism’s survival. We havve conducted a flux-balance analysis for reference and representative genomes in PATRIC and mapped essential genes.
Human Homologs¶
Human Homologs refer to the bacterial proteins that share high sequence similarity with human proteins. We have integrated and mapped proteins from Reference Human Genome at NCBI.
Transporters¶
Transporters refer to proteins that serve the function of moving other materials within an organism. We have integrated and mapped proteins from Transporter Classification Database.
Virulence Factors¶
Virulence factors refer to the gene products that enable bacteria to establish itself on or within a host organism and enhance its potential to cause disease. We have integrated and mapped virulence factor genes from the following sources:
Accessing Specialty Gene Data¶
Specialty genes are accessible from the “Specialty Genes” tab available at the genome and taxon levels on the website. For more information on how to access and use specialty gene data, please visit Specialty Genes Tab User Guide. | https://docs.patricbrc.org/user_guides/data/data_types/specialty_genes.html | 2021-09-17T04:26:12 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.patricbrc.org |
01. To enable boats module login as a admin and navigate to modules option from side manu 02. find the boats module under the extras tab and click on enable option 03. after enabling you will find the boats menu under the side navigation 04. click on boats option and add your boats content 05. also your suppliers can signup to add boats and as a admin you can setup commission for each item 06. the module has no long story to learn its very simple just add your items and setup commission | https://docs.phptravels.com/modules/extras/boats-cruises | 2021-09-17T03:22:11 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.phptravels.com |
Reporting conversions
Setting up automatic conversion reporting helps Rokt Ads clients measure the impact of your campaigns on your business. Rokt offers a variety of ways to integrate your conversion data. For greatest accuracy, we recommend using the Rokt Web SDK or Event API. The instructions for each method are listed below.
Though less accurate, we also support conversion reporting through third-party measurement tools, file transfer, and manual upload.
No-code conversion reporting
#Web SDK
The Rokt Web SDK is a snippet of JavaScript code that lets you automatically report conversions from the frontend of your website.
#Steps
Get your unique snippet from your Rokt account manager, or generate it yourself in One Platform.
To obtain your snippet in One Platform log in and go to Integrations > Set up the snippet.
For Snippet goal, select Record conversions.
Choose a customer identifier so that Rokt can correctly match campaign events to conversions. We recommend using raw or hashed (SHA-256) customer email address as an identifier.
Note: As an alternative to hashed or raw email, you can use the Rokt Tracking ID (
passbackconversiontrackingid) as an identifier. This method takes more work from your development team, but doesn’t require any personally identifiable information.
Add relevant contextual attributes. Contextual data helps Rokt learn more about what campaigns and audiences are most effective for your business. We use these learnings to optimize for acquisition and help your campaigns perform better in the future.
Your snippet will look something like this:
({ //customer identifier - at least one required email: '[email protected]', emailsha256: '', passbackconversiontrackingid:'', //recommended contextual attributes firstname:'', lastname:'', conversiontype: '', amount: '', currency: '', quantity: '', paymenttype: '', margin: '', confirmationref: '' });
caution
If you are copying the above example, ensure
roktAccountid is replaced with your account's unique ID before continuing to the next step. You can get your
roktAccountid from your account manager or from One Platform.
Add the snippet to your confirmation page.
You should place the snippet on any page that immediately follows a conversion event, typically a confirmation or thank you page. Paste the snippet directly into the HTML on the page, between thetags.
Ensure that the snippet records all conversions on your site. Rokt can then handle the attribution process to determine what conversions resulted from a Rokt campaign event.
Tag managers
You can add the Rokt snippet to your site using a tag manager, but it may result in reduced performance due to some conversion events being dropped. For best results, we recommend the direct integration described on this page.
Populate your data attributes. Ensure that at least your customer identifier (email or Rokt Tracking ID) is populated with the correct data. Also configure any contextual attributes.
If you want to use hashed email address as an identifier, you can use Rokt’s pre-built hashing function by adding this line of code to your snippet:
rokt.onLoaded(function (rokt) { const email = "[email protected]"; return rokt.hashString(email).then(function (emailsha256) { rokt.setAttributes({ emailsha256: emailsha256 }); });});
Test that the snippet is engaging and contains the correct data. View testing instructions.
For optimal performance, consider integrating both the Event API and Web SDK. Using both methods creates redundancy and helps identify any anomalies that may occur in the browser or on your server.
#Event API
The Event API offers another options for advertisers looking to integrate conversion data with Rokt. Using the Event API, your backend server can securely connect to Rokt's, transmitting conversion data in real time.
Using the Event API as a standalone integration for conversion data provides multiple benefits:
- Speed. Enables a fully automated, near-real time data exchange, maximizing the potential of Rokt’s automated optimization tools.
- Coverage. Permits integration of events across all channels and devices, resulting in coverage for conversions across web, mobile, and in-store.
- Reliability. As a server-to-server integration, the Event API is not susceptible to any interference by web technologies such as browser or ad blocking. It also supports error handling, ensuring that data is never lost.
#Steps
Make sure you can log in to Rokt's One Platform and obtain your Account ID. If you don't have an account, reach out to your account manager.
Get your App ID and App Secret from One Platform. You need this information in order to authenticate the Event API. You can view steps to retrieve your credentials here.
Assemble your payload for the Event API. You can use the
POST /v1/eventsendpoint to send any type of conversion event to Rokt.
For this use case, always set
eventTypeto
conversionso that Rokt knows to trigger the conversion attribution process.
Include relevant key-value pairs in the
objectData. Providing relevant contextual data helps Rokt better optimize your campaigns in the future.
Rokt requires at least one of email (
passbackconversiontrackingid) so that Rokt can identify the customer. Suggested
objectDatafields are available in the table below.
#Sample
POST /v1/events{ "accountId": "12345", "events": [ { "clientEventId": "ff3bd69c-ca74-4337-af91-4d5d0bd00e38", "eventTime": "2020-05-22T10:21:29.339Z", "eventType": "conversion", "objectData": [ { "name": "email", "value": "[email protected]" }, { "name": "transactionid", "value": "123456789" }, { "name": "amount", "value": "99.80" }, { "name": "currency", "value": "USD" }, { "name": "quantity", "value": "2" }, { "name": "conversiontype", "value": "hotel_booking" }, { "name": "margin", "value": "10" }, ] } ]
Suggested
objectData fields#
The
objectData object is made up of key-value pairs that contain metadata about the event. In order to properly attribute conversions, you must include one of email (
passbackconversiontrackingid) in
objectData so that Rokt can identify the customer.
For the conversion reporting use case, we recommend you include the following attributes.
#Custom attributes
Rokt has a data mapping system that allows us to map provided field names to our internal data fields, however we have some recommended field names and formatting requirements later in this article. If you would like to use alternate field names, let us know, and we will ensure your fields are managed accordingly. Additionally, we can accept any additional fields if you would like to provide them for reporting purposes.
#Premium integration: Combining the Web SDK and Event API
If possible, we recommend setting up conversion reporting through both the Web SDK and Event API. This helps identify any anomalies and provides redundancy if there are any issues on the frontend or backend.
If you choose to set up both integrations, ensure you populate one of
transactionid or
confirmationref in both the Web SDK and Event API integrations. Rokt can then deduplicate incoming your conversion events using these variables, ensuring your reporting is accurate. | https://docs.rokt.com/docs/developers/integration-guides/getting-started/conversion-reporting | 2021-09-17T03:42:58 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.rokt.com |
The multi-tenant mode allows you to create multiple workspaces.
The workspace term is just a label, you can rename it to anything you find appropriate to your business model. Examples are Organizations, Companies, Teams, etc.
Each workspace has its own Users, Audit Logs, Settings, and Entities.
After a new user signs up, they are asked to create a new workspace.
If the user has been invited but didn't come from the invitation email, they will be asked if they want to accept the invitation or create a new workspace.
Users can switch, create, edit or delete workspaces on the Workspaces page that can be accessed via the User's menu. Permission to edit and delete a specific workspace depends on the role the user has on that workspace. | https://docs.scaffoldhub.io/features/tenant/multi-tenant | 2021-09-17T04:42:50 | CC-MAIN-2021-39 | 1631780054023.35 | [] | docs.scaffoldhub.io |
7.8 Depth-Registered Image Data Object
The set of data objects for raster well log depth registration (DepthRegImage) provides a common, industry-standard depth calibration (registration) format that improves on and replaces existing proprietary standards. It allows service companies, data vendors, and customers to more readily associate depth registration information with the correct log and move well logs and registration information between software systems.
The work to design these data objects began with an assessment of current popular, proprietary formats contributed by team members and work done previously by the WITSML SIG. | http://docs.energistics.org/WITSML/WITSML_TOPICS/WITSML-000-092-0-C-sv2000.html | 2019-03-18T18:20:35 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.energistics.org |
Here we will document the sign-in constraints, processes and workflows.
Who Needs to Sign in
What can be done without signing in
What you can do (the scenarios) when you are not signed in
What happens when I sign in
What the redirection proces is when you signin
If you’re account is an ops account when you sign in you are taken to the ops dashboard. If your account has a car listed on it and you are not ops then you are taken to the My Cars page. If you sign in and have a booking in progress then you are taken to page for that booking. Otherwise you are taken search page. The My Cars and My Bookings links in the menu are displayed conditionally. If you have any cars listed then the My Cars link will show. If you have ever made any booking requests on your account then the My Bookings link will be shown.
Roles and Views
Perhaps a section on the commands that are available when youa re signed in as the various roles | http://docs.hourfleet.com/signin.html | 2019-03-18T17:29:27 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.hourfleet.com |
Creating an Application Load Balancer
This section walks you through the process of creating an Application Load Balancer in the AWS Management Console.
Define Your Load Balancer
First, provide some basic configuration information for your load balancer, such as a name, a network, and a listener.
A listener is a process that checks for connection requests. It is configured with a protocol and a port for the frontend (client to load balancer) connections, and protocol and a port for the backend (load balancer to backend instance) connections. In this example, you configure a listener that accepts HTTP requests on port 80 and sends them to the containers in your tasks Application Load Balancer and then choose Continue.
Complete the Configure Load Balancer page as follows:
For Name, type a name for your load balancer.
For Scheme, an internet-facing load balancer routes requests from clients over the internet to targets. An internal load balancer routes requests to targets using private IP addresses.
For IP address type, choose ipv4 to support IPv4 addresses only or dualstack to support both IPv4 and IPv6 addresses.
For Listeners, the default is a listener that accepts HTTP traffic on port 80. You can keep the default listener settings, modify the protocol or port of the listener, or choose Add to add another listener.
Note
If you plan on routing traffic to more than one target group, see ListenerRules for details on how to add host or path-based rules.
For VPC, select the same VPC that you used for the container instances on which you intend to run your service.
For Availability Zones, select the check box for the Availability Zones to enable for your load balancer. If there is one subnet for that Availability Zone, it is selected. If there is more than one subnet for that Availability Zone, select one of the subnets. You can select only one subnet per Availability Zone. Your load balancer subnet configuration must include all Availability Zones that your container instances reside in.
Choose Next: Configure Security Settings.
(Optional) Configure Security Settings
If you created a secure listener in the previous step, complete the Configure Security Settings page as follows; otherwise, choose Next: Configure Security Groups.
To configure security settings
If you have a certificate from AWS Certificate Manager, choose Choose an existing certificate from AWS Certificate Manager (ACM), and then choose the certificate from Certificate name.
If you have already uploaded a certificate using IAM, choose Choose an existing certificate from AWS Identity and Access Management (IAM), and then choose your certificate from Certificate name.
If you have a certificate ready to upload,, choose a predefined security policy. For details on the security policies, see Security Policies.
Choose Next: Configure Security Groups.
Configure Security Groups
You must assign a security group to your load balancer that allows inbound traffic to the ports that you specified for your listeners. Amazon ECS does not automatically update the security groups associated with Elastic Load Balancing load balancers or Amazon ECS container instances. listener to use.
Note
Later in this topic, you create a security group rule for your container instances that allows traffic on all ports coming from the security group created here, so that the Application Load Balancer can route traffic to dynamically assigned host ports on your container instances.
Choose Next: Configure Routing to go to the next page in the wizard.
Configure Routing
In this section, you create a target group for your load balancer and the health check criteria for targets that are registered within that group.
To create a target group and configure health checks
For Target group, keep the default, New target group.
For Name, type a name for the new target group.
Set Protocol and Port as needed.
For Target type, choose whether to register your targets with an instance ID or an IP address.
Important
If your service's task definition uses the
awsvpcnetwork mode (which is required for the Fargate launch type), you must choose
ipas the target type, not
instance. This is because tasks that use the
awsvpcnetwork mode are associated with an elastic network interface, not an Amazon EC2 instance.
For Health checks, keep the default health check settings.
Choose Next: Register Targets.
Register Targets
Your load balancer distributes traffic between the targets that are registered to its target groups. When you associate a target group to an Amazon ECS service, Amazon ECS automatically registers and deregisters containers with your target group. Because Amazon ECS handles target registration, you do not add targets to your target group at this time.
To skip target registration
In the Registered instances section, ensure that no instances are selected for registration.
Choose Next: Review to go to the next page in the wizard.
Review and Create
Review your load balancer and target group configuration and choose Create to create your load balancer.
Create a Security Group Rule for Your Container Instances
After your Application Load Balancer has been created, you must add an inbound rule to your container instance security group that allows traffic from your load balancer to reach the containers.
To allow inbound traffic from your load balancer to your container instances
Open the Amazon EC2 console at.
In the left navigation, choose Security Groups.
Choose the security group that your container instances use. If you created your container instances by using the Amazon ECS first run wizard, this security group may have the description, ECS Allowed Ports.
Choose the Inbound tab, and then choose Edit.
For Type, choose All traffic.
For Source, choose Custom, and then type the name of your Application Load Balancer security group that you created in Configure Security Groups. This rule allows all traffic from your Application Load Balancer to reach the containers in your tasks that are registered with your load balancer.
Choose Save to finish.
Create an Amazon ECS Service. | https://docs.aws.amazon.com/AmazonECS/latest/developerguide/create-application-load-balancer.html | 2019-03-18T18:02:03 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.aws.amazon.com |
Extensions by Extension Key¶
Here you can find documentation of extensions if available and successfully rendered. The url schema is docs.typo3.org/typo3cms/extensions/<EXTKEY>/<VERSION>/ and includes system extensions as well as third party extensions.
System extensions are shipped with the TYPO3 core. See the system extensions section.
Third party extensions are available through the TYPO3 Extension Repository (TER).
Use the following form to search by extension keys.
Type at least three characters in the search field. | https://docs.typo3.org/typo3cms/extensions/ | 2019-03-18T18:55:54 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.typo3.org |
A global window.vingd object is created and initialized upon script (main.min.js) load. To create a “popup opener” function object you should use vingd.popupOpener() function, with arguments describing details of a future popup open process.
For example, you can create orderOpener object once, like this:
var orderOpener = new vingd.popupOpener({ popupURL: "", siteURL: "", lang: "en", onSuccess: function(hwnd, args) { window.location = vingd.buildURL("", {token: args.token}); } });
And use it later in your document, while presenting your user with the “buy” option:
<a href="" onclick="return orderOpener(this);"> buy </a>
When user clicks the link, a popup window will open (redirect user to order link on Vingd frontend), your page will darken, and user shall be presented with an option of buying your order. Upon purchase, popup window will automatically close and your popupParams.onSuccess handler will be called (in the context of your (parent) page - the page that hosted the link, and the vingd object).
The main library method. It constructs and returns a specialized click-on-link handler function that opens popup, interprets the result and calls your event callbacks.
The only required propery of the popupParams object is the popupParams.popupURL. Note however, that your site will seem quite dysfunctional if you also don’t handle at least the onSuccess() event.
The document at popupParams.popupURL location must handle the response from Vingd frontend (response is encoded in GET data). You should use the default handler supplied with the library, popup.html (see Download). The default handler will call your on... callbacks, it will handler error scenarios, zombie popup mode (when user closes the main/site/parent window/tab), and several other dirty details.
Handles popup closing process. Depending on the result of popup operation, user-defined callbacks are executed (within a context of popup opener window).
Return values from Vingd frontend are used to call this popup’s event callback functions: onSuccess(), onCancel(), onError() and onExpand():
- On successful operation (purchase/voucher redeem/login): popupParams.onSuccess function is called. args.token can then be used to verify the purchase in backend via AJAX.
- Upon user clicking “Cancel” button while on Vingd frontend, the popupParams.onCancel function is executed.
- In case of an error, the popupParams.onError handler is called, receiving error description as an argument (args.msg).
- When user requests popup expansion, while on Vingd frontend, the popupParams.onExpand function is called (in a context of parent window) to be able to close the popup window and then open the requested page in parent window. The requested page URL is propagated thru args.expand_url GET parameter.
When appropriate handler is finished executing, this function closes the popup, which in turn triggers calling of popupParams.onClose handler from within vingd.closeListener internal method. (Note: problems have been reported for some Safari versions that do not handle delayed function execution properly.)
A utility method, usually used in onOpen handler, that darkens the screen by overlaying it with a semi-transparent black layer. To use, ensure that no screen element has a z-index at or above 10000. This layer can be suppressed using the vingd.undarkenScreen().
A utility method, usually used in onClose handler, that undarkens the screen (see vingd.darkenScreen()).
A utility function for building a valid URL given base URL and query parameters in a dictionary (JavaScript object).
A utility function for parsing URL GET parameters given a query string (eg. window.location.search).
When a new popup window is opened, its description is stored inside an object pushed into an array. This is done transparently and automatically with the ‘pushWindowData’ function (upon opening the popup).
Popup description includes all event callback function references.
To retrieve a popup data (for example to execute some of the previously defined event handlers), the vingd.getWindowData() function can be used.
Stores event handlers (callbacks) and Vingd frontend parameters for a popup referenced by a window handle hWnd.
Retrieves popup data (including callbacks) for a window referenced by hWnd. To delete popup data, set dequeue to true. | http://docs.vingd.com/libs/popup/0.8/vingd.html | 2019-03-18T18:33:03 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.vingd.com |
Client-to-Server Connection Process
Client-to-Server Connection Process
It is important to understand the sequence of events that occur when the native client connects with a GemFire cache server.
- A native client region is configured in cache.xml or programmatically with a set of server connection endpoints. Server endpoints identify each cache server by specifying the server's name and port number.
Client threads obtain, use, and release a connection to a connection pool that maintains new connections. The number of connections that a client can establish is governed by the pool's min-connections and max-connections settings, either using client XML configuration or programmatically through the CacheFactory::setMinConnections() and CacheFactory::setMaxConnections() APIs. The defaults for min-connections is 1 and max-connections is -1 meaning the connection count can grow to accommodate the number of active threads performing region operations.
This example shows how to use cache.xml to configure a native client region with endpoints set to two cache servers:
<pool name="examplePool" subscription- <server host="java_servername1" port="java_port1" /> <server host="java_servername2" port="java_port2" /> </pool> <region name="NativeClientRegion" refid="CACHING_PROXY"> <region-attributes </region>
TCP connections on the native client are specified at the cache level, or by overriding endpoints for specific regions. The connections are created as the regions are created. In addition, connections can also get created for querying without having any created regions. In this case, when endpoints are defined at the cache level no regions are yet created and a query is fired.
You can configure client-server connections in two ways. Use either the region/cache endpoints or the Pool API. For more information about the pool API, see Using Connection Pools.
- The client announces to the server which entries it wishes to have updated by programmatically registering interest in those entries. See Registering Interest for Entries for more information.
- The client cache.xml file should have the following parameters configured so the client can update the server and the client can receive updates from the server:
- Caching enabled in the client region, by using the CACHING_PROXY RegionShortcut setting in the region attribute refid. A listener could also be defined so event notification occurs. You can use both, but at least one of the two methods must be used by the client to receive event notifications.
- Set subscription-enabled to true so the client receives update notifications from the server for entries to which it has registered interest.
- A native client application calls the C++ or .NET API to connect to a cache server.
- The client and the cache server exchange a handshake over a configured endpoint to create a connection.
- Any create, put, invalidate, and destroy events sent to the server are propagated across the distributed cache so the client can receive the events. | http://gemfire82.docs.pivotal.io/docs-gemfire/gemfire_nativeclient/client-cache/client-to-server-connection.html | 2019-03-18T18:20:20 | CC-MAIN-2019-13 | 1552912201521.60 | [] | gemfire82.docs.pivotal.io |
Creates a customer master key (CMK) in the caller's AWS account.
You can use a CMK to encrypt small amounts of data (4 KiB or less) directly, but CMKs are more commonly used to encrypt data keys, which are used to encrypt raw data. For more information about data keys and the difference between CMKs and data keys, see the following:
If you plan to import key material , use the Origin parameter with a value of EXTERNAL to create a CMK with no key material.
To create a CMK in a custom key store , use, each in a different Availability Zone in the Region.
You cannot use this operation to create a CMK in a different AWS account.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
create-key [--policy <value>] [--description <value>] [--key-usage <value>] [--origin <value>] [--custom-key-store-id <value>] [--bypass-policy-lockout-safety-check | --no-bypass-policy-lockout-safety-check] [--tags <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--policy (string)
The key policy to attach to the CMK.
If you provide a key policy, it must meet the following criteria:
- If you don't set BypassPolicyLockoutSafetyCheck to true, the key policy must allow the principal that is making the CreateKey request a AWS KMS custom key store and creates its key material in the associated AWS CloudHSM cluster. You must also use the CustomKeyStoreId parameter to identify the custom key store.
Possible values:
- AWS_KMS
- EXTERNAL
- AWS_CLOUDHSM
--custom-key-store-id (string).
--bypass-policy-lockout-safety-check | --no-bypass-policy-lockout-safety-check (boolean)
A flag to indicate whether to bypass the key policy lockout safety check.
Warning. is EXTERNAL , otherwise this value is omitted.
KeyManager -> (string)The CMK's manager. CMKs are either customer-managed or AWS-managed. For more information about the difference, see Customer Master Keys in the AWS Key Management Service Developer Guide . | https://docs.aws.amazon.com/cli/latest/reference/kms/create-key.html | 2019-03-18T18:02:49 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.aws.amazon.com |
If.
In order to test the integration between the Bare Metal and Networking services, support has been added to devstack to mimic an external physical switch. Here we include a recommended configuration for devstack to bring up this environment.
Starting with the Pike release, it is also possible to use DevStack for testing booting from Cinder volumes with VMs.
Except where otherwise noted, this document is licensed under Creative Commons Attribution 3.0 License. See all OpenStack Legal Documents. | https://docs.openstack.org/ironic/latest/contributor/index.html | 2019-03-18T18:27:11 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.openstack.org |
SOLUTION Dimension
Each tenant in iWD can have one or more solutions. A solution can be configured in iWD for testing a new iWD configuration, independent of a production solution. Solution information is stored in the SOLUTION dimension which is populated from iWD GAX Plug-in configuration. Many fact tables in the iWD Data Mart include a SOLUTION_KEY column to join to this dimension.
This page was last modified on November 28, 2017, at 03:28.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/IWD/8.5.1/Ref/IWDSOLUTIONDimension | 2019-03-18T18:06:29 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.genesys.com |
Image to be displayed in place of the default loading image
Member of Web Application (PRIM_WEB.Application)
Data Type - PRIM_BMP - Bitmap is an image file in the repository
The LoadingImage property refers to the Loading image or resource defined as part of the Integration features of Web Page By default, this will be a busy spinner
All Component Classes
Technical Reference
Febuary 18 V14SP2 | https://docs.lansa.com/14/en/lansa016/prim_web.application_loadingimage.htm | 2019-03-18T17:33:56 | CC-MAIN-2019-13 | 1552912201521.60 | [] | docs.lansa.com |
Coding Standards¶
PHP File Formatting¶
General¶
For files that contain only PHP code, the closing tag (“
?>”) is
never permitted. It is not required by PHP. Not including it prevents
trailing whitespace from being accidentally injected into the output.
Note
Inclusion of arbitrary binary data as permitted by
__HALT_COMPILER() is prohibited from any Doctrine framework
PHP file or files derived from them. Use of this feature is only
permitted for special installation scripts.
Maximum Line Length¶
The target line length is 80 characters, i.e. developers should aim keep code as close to the 80-column boundary as is practical. However, longer lines are acceptable. The maximum length of any line of PHP code is 120 characters.
Line Termination¶
Line termination is the standard way for Unix text files to represent the end of a line. Lines must end only with a linefeed (LF). Linefeeds are represented as ordinal 10, or hexadecimal 0x0A.
You should not use carriage returns (CR) like Macintosh computers (0x0D) and do not use the carriage return/linefeed combination (CRLF) as Windows computers (0x0D, 0x0A).
Naming Conventions¶
Classes¶
The Doctrine ORM Framework uses the same class naming convention as PEAR and Zend framework, where the names of the classes directly map to the directories in which they are stored. The root level directory of the Doctrine Framework is the “Doctrine/” directory, under which all classes are stored hierarchially.
Class names may only contain alphanumeric characters. Numbers are
permitted in class names but are discouraged. Underscores are only
permitted in place of the path separator, eg. the filename
“Doctrine/Table/Exception.php” must map to the class name
“
Doctrine_Table_Exception”.
If a class name is comprised of more than one word, the first letter of each new word must be capitalized. Successive capitalized letters are not allowed, e.g. a class “XML_Reader” is not allowed while “Xml_Reader” is acceptable.
Interfaces¶
Interface classes must follow the same conventions as other classes (see above).
They must also end with the word “Interface” (unless the interface is
approved not to contain it such as
Doctrine_Overloadable). Some
examples:
Examples
Doctrine_Adapter_Interface
Doctrine_EventListener_Interface
Filenames¶:
Doctrine/Adapter/Interface.php
Doctrine/EventListener/Interface
File names must follow the mapping to class names described above.
Functions and Methods¶
Function names may only contain alphanumeric characters and underscores are not permitted. Numbers are permitted in function names but are highly discouraged. They must always start with a lowercase letter and when a function name consists of more than one word, the first letter of each new word must be capitalized. This is commonly called the “studlyCaps” or “camelCaps” method. Verbosity is encouraged and function names should be as verbose as is practical to enhance the understandability of code.
For object-oriented programming, accessors for objects should always be
prefixed with either “get” or “set”. This applies to all classes except
for
Doctrine_Record which has some accessor methods prefixed with
‘obtain’ and ‘assign’. The reason for this is that since all user
defined ActiveRecords inherit
Doctrine_Record, it should populate
the get / set namespace as little as possible.
Note
Functions in the global scope (“floating functions”) are NOT permmitted. All static functions should be wrapped in a static class.
Variables¶
Variable names may only contain alphanumeric characters. Underscores are not permitted. Numbers are permitted in variable names but are discouraged. They. Within the framework certain generic object variables should always use the following names:
There are cases when more descriptive names are more appropriate (for example when multiple objects of the same class are used in same context), in that case it is allowed to use different names than the ones mentioned.
Constants¶
Constants may contain both alphanumeric characters and the underscore.
They must always have all letters capitalized. For readablity reasons,
words in constant names must be separated by underscore characters. For
example,
ATTR_EXC_LOGGING is permitted but
ATTR_EXCLOGGING is
not.Constants must be defined as class members by using the “const”
construct. Defining constants in the global scope with “define” is NOT
permitted.
class Doctrine_SomeClass { const MY_CONSTANT = 'something'; } echo $Doctrine_SomeClass::MY_CONSTANT;
Record Columns¶
All record columns must be in lowercase and usage of underscores(_) are encouraged for columns that consist of more than one word.
class User { public function setTableDefinition() { $this->hasColumn( 'home_address', 'string' ); } }
Foreign key fields must be in format
[table_name]_[column]. The
next example is a field that is a foreign key that points to
user(id):
class Phonenumber extends Doctrine_Record { public function setTableDefinition() { $this->hasColumn( 'user_id', 'integer' ); } }
Coding Style¶
PHP Code Demarcation¶
PHP code must always be delimited by the full-form, standard PHP tags and short tags are never allowed. For files containing only PHP code, the closing tag must always be omitted
Strings¶
When a string is literal (contains no variable substitutions), the apostrophe or “single quote” must always used to demarcate the string:
Literal String¶
$string = 'something';
When a literal string itself contains apostrophes, it is permitted to demarcate the string with quotation marks or “double quotes”. This is especially encouraged for SQL statements:
String Containing Apostrophes¶
$sql = "SELECT id, name FROM people WHERE name = 'Fred' OR name = 'Susan'";
Variable Substitution¶
Variable substitution is permitted using the following form:
// variable substitution $greeting = "Hello $name, welcome back!";
String Concatenation¶
Strings may be concatenated using the ”.” operator. A space must always be added before and after the ”.” operator to improve readability:
$framework = 'Doctrine' . ' ORM ' . 'Framework';
Concatenation Line Breaking¶
When concatenating strings with the ”.” operator, it is permitted to break the statement into multiple lines to improve readability. In these cases, each successive line should be padded with whitespace such that the ”.”; operator is aligned under the “=” operator:
$sql = "SELECT id, name FROM user " . "WHERE name = ? " . "ORDER BY name ASC";
Arrays¶
Negative numbers are not permitted as indices and. When declaring associative arrays with the array construct, it is encouraged to break the statement into multiple lines. In this case, each successive line must be padded with whitespace such that both the keys and the values are aligned:
$sampleArray = array( 'Doctrine', 'ORM', 1, 2, 3 ); $sampleArray = array( 1, 2, 3, $a, $b, $c, 56.44, $d, 500 ); $sampleArray = array( 'first' => 'firstValue', 'second' => 'secondValue' );
Classes¶
Classes must be named by following the naming conventions. The brace is always written next line after the class name (or interface declaration). Every class must have a documentation block that conforms to the PHPDocumentor standard. Any code within a class must be indented four spaces and only one class is permitted per PHP file. Placing additional code in a class file is NOT permitted.
This is an example of an acceptable class declaration:
/** * Documentation here */ class Doctrine_SampleClass { // entire content of class // must be indented four spaces }
Functions and Methods¶
Methods must be named by following the naming conventions and must always declare their visibility by using one of the private, protected, or public constructs. Like classes, the brace is always written next line after the method } public function bar2() { } }
Note
Functions must be separated by only ONE single new line
like is done above between the
bar() and
bar2() methods.
Passing above, 'Framework', 'Doctrine', 56.44, 500 ), 2, 3 );
Control Statements¶ ( $foo != 2 ) { $foo = 2; }
For if statements that include elseif or else, the formatting must be as in these examples:
if ( $foo != 1 ) { $foo = 1; } else { $foo = 3; } if ( $foo != 2 ) { $foo = 2; } elseif ( $foo == 1 ) { $foo = 3; } else { $foo = 11; }
When ! operand is being used it must use the following formatting:
if ( ! $foo ) { } but the breaks must be at the same indentation level as the case statements.
switch ( $case ) { case 1: case 2: break; case 3: break; default: break; }
The construct default may never be omitted from a switch statement.
Inline Documentation¶
Documentation Format:
All documentation blocks (“docblocks”) must be compatible with the
phpDocumentor format. Describing the phpDocumentor format is beyond the
scope of this document. For more information, visit:
Every method,:
/* * Test function * * @throws Doctrine_Exception */ public function test() { throw new Doctrine_Exception('This function did not work'); }
Conclusion¶
This is the last chapter of Doctrine ORM for PHP - Guide to Doctrine for PHP. I really hope that this book was a useful piece of documentation and that you are now comfortable with using Doctrine and will be able to come back to easily reference things as needed.
As always, follow the Doctrine :)
Thanks, Jon | http://doctrine.readthedocs.io/en/latest/en/manual/coding-standards.html | 2017-02-19T18:40:18 | CC-MAIN-2017-09 | 1487501170249.75 | [] | doctrine.readthedocs.io |
base class — bpy_struct
Sky related settings for a sun lamp
Multiplier to convert blender units to physical distance
Extinction scattering contribution factor
Scatter contribution factor
Sky turbidity
Backscattered light
Horizon brightness
Blend factor with sky
Blend mode for combining sun sky with world sky
Color space to use for internal XYZ->RGB color conversion
Strength of sky shading exponential exposure correction
Horizon Spread
Sun brightness
Sun intensity
Sun size
Apply sun effect on atmosphere
Apply sun effect on sky
Inherited Properties
Inherited Functions
References | https://docs.blender.org/api/blender_python_api_2_63_2/bpy.types.LampSkySettings.html | 2017-02-19T18:46:34 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.blender.org |
Using Git submodules
Clone submodules during deployment
Platform.sh allows you to use submodules in your Git repository. They are usually
listed in a
.gitmodules file at the root of your Git repository. When you push via Git,
Platform.sh will try to clone them automatically.
Here is an example of a
.gitmodules file:
[submodule "app/Oro"] path = src/Oro url = [submodule "src/OroPackages/src/Oro/Bundle/EntitySerializedFieldsBundle"] path = src/OroPackages/src/Oro/Bundle/EntitySerializedFieldsBundle url = https:/github.com/orocrm/OroEntitySerializedFieldsBundle.git [submodule "src/OroB2B"] path = src/OroB2B url =
When you run
git push, you can see the output of the log:
Validating submodules. Updated submodule git://github.com/orocommerce/orocommerce: 4 references updated. Updated submodule git://github.com/orocrm/platform: 229 references updated. Updated submodule git://github.com/orocrm/OroEntitySerializedFieldsBundle: 11 references updated.
Error when validating submodules
If you see the following error:
Validating submodules. Found unresolvable links, updating submodules. E: Error validating submodules in tree: - /src/Oro: Exception: commit 03567c6 not found. This might be due to the following errors fetching submodules: - [email protected]:orocommerce/orocommerce.git: HangupException: The remote server unexpectedly closed the connection.
Since the Platform.sh Git server cannot connect to Github via SSH without being granted an SSH key to do so,
you should not be using an SSH URL:
[email protected]:..., but you should use an HTTPS URL instead:.... | https://docs.platform.sh/development/submodules.html | 2017-02-19T18:44:18 | CC-MAIN-2017-09 | 1487501170249.75 | [] | docs.platform.sh |
onzoom
This page has been flagged with the following issues:
High-level issues:
Overview Table
Notes. | http://docs.webplatform.org/wiki/svg/events/onzoom | 2014-04-16T15:58:49 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.webplatform.org |
Message-ID: <44565820.84418.1397664895835.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_84417_1081602525.1397664895835" ------=_Part_84417_1081602525.1397664895835 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
The sources of the Groovy project are hosted on Github: =
Additionally, the sources are mirrored on Codehaus' own Git infrastructu=
re as well:
cgi?p=3Dgroovy-git.git
You can learn about the repository details = on the Xircles Codehaus admin interface:
If you're interested in contribu= ting, you can send us GitHub pull requests, or submit patches through JIRA.= Please see our contribution page<= /a> for more.
You can also get the sources for each releases in the form of a zip arch= ive. Please head to our download secti= on to download those source packages.
First of all, you'll need to have Git installed on your machine, whether= through the support of your IDE, or as a command-line tool.
If you want to checkout the source code of Groovy, there are three diffe= rent URLs you can use. From the command-line, you can use the command:
// anonymous access git clone git://github.com/groovy/groovy-core.git // read/write access git clone git clone [email protected]:groovy/groovy-core.git=20
You can checkout different branches, in particular:
masteris the latest Groovy branch, for the upcoming major= version
GROOVY_1_8_Xis the branch of the curret Groovy 1.8.x vers= ions (current stable version)
GROOVY_1_7_Xis the branch for the previous official versi= on of Groovy 1.7.x
For fetching a branch the first time, simply use:
git fetch origin=20
To checkout a particular branch:
git checkout master git checkout GROOVY_1_8_X git checkout GROOVY_1_7_X=20
Use the commit command to commit your changes locally:
git commit -m "Your commit message"=20
Say you have committed your changes on
master and want to m=
erge a particular comming on
GROOVY_1_8_X, you can procede as =
follows:
git checkout master git commit -m "Fixed GROOVY-1234" // this would return a12bc3... as commit number git checkout GROOVY_1_8_X git cherry-pick a12bc3=20
To see what's the status of your source tree, you can call:
git status=20
And if you want to see all the latest commits that you have locally, you= can do:
git log=20
To retrieve the changes that have been pushed to the server, you can do:=
git pull=20
Of more explicitely:
git pull origin master=20
The various commits you've made are done locally, now is the time to sha= re them with the world by pushing your changes to your Github clone, or to = a publicly available Git repository:
git push git push origin master=20
If you're a Groovy despot, you can also push your changes to Codehaus fo= r manual synchronization purpose. But for that, first, you'll have to have = configured an additional remote with:
git remote add codehaus ssh://[email protected]/groovy-git.g= it=20
Then you can push the changes back to GitHub as well:
git push codehaus master=20
To push a local branch to the Codehaus Git repository or on the GitHub m= irror, you can do the following:
git push origin myLocalBranch git push codehaus myLocalBranch git push myRemoteLocation myLocalBranch=20
Contributors might bring their contributions in the form of "pull r=
equests" on our GitHub mirror.
Groovy despots can merge the pull= requests on GitHub through the web interface by following this proposed wo= rkflow:
// create a new branch to test the pull request git checkout -b test_foopatch_onto_1_8 // now you are on branch test_foopatch_onto_1_8 // let's add the git branch of the contributor as a remote git remote add someperson git://github.com/groovy/somepersonsrepo.git // fetch the particular commits git fetch someperson the_branch_with_the_changes // merge it in our test branch git merge FETCH_HEAD // test the changes gradle test or gradle test // if all tests pass, then we can safely use the web based merge UI of Gith= ub=20
If you want to learn more about Git, there are many available resources = online, such as: | http://docs.codehaus.org/exportword?pageId=227050718 | 2014-04-16T16:14:55 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.codehaus.org |
Draws a shape which geometry is constructed from two other shapes: a start shape and an end shape. The morph property of a morphing shape defines the amount of transformation applied to the start shape to turn it into the end shape. Both shapes must have the same winding rule.
Requires graphicsbuilder-ext-swingx and swingx in classpath. | http://docs.codehaus.org/pages/viewpage.action?pageId=35422287 | 2014-04-16T16:40:10 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.codehaus.org |
Message-ID: <1283354725.84392.1397664520383.JavaMail.haus-conf@codehaus02.managed.contegix.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_84391_326506255.1397664520383" ------=_Part_84391_326506255.1397664520383 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
"The Failsafe Plugin is designed to run integration tests while= the Surefire Plugins is designed to run unit tests." - ugin/
See below for the Failsafe sect= ion.
However, some people are using Surefire for integration testing - not le= ast because Failsafe only came along some time after Surefire ...
Better Builds With Maven describes a single example module containing on= ly integration tests. In the case of a multi-module "reactor" pro= ject, where each module may need some integration testing, you have some ch= oices available to you.
You can, if you wish, create multiple integration test modules, one for = each source module in your project. For example, if you've got two child pr= ojects "foo" and "bar", you could create a "foo-in= tegration-test" module and a "bar-integration-test" module t= o test "foo" and "bar" respectively.
The advantage of doing it this way is that it's very clear which integra= tion tests belong to which project. It's also easy for project owners to kn= ow which integration tests they're responsible for. The big drawback here i= s that you've just doubled the number of POM files in your reactor, which c= an be difficult to manage. Each project will need to be separately referenc= ed in the root POM.
This is obviously much simpler to wire up in a POM file; simplicity is a= virtue in project management.
The disadvantage of doing it this way is that it tends to separate the i= ntegration tests from the code they're attempting to test. As a result, you= may find that no one "owns" the integration tests; typically you= 'll have some one person whose job it is to analyze the integration tests a= nd find bugs. QA is hard, but it's even harder when it's unclear who "= owns" test failures.
If for some reason you can't put the integration tests in a separate mod= ule, here are some ideas.
If you have only integration tests in the same module a= s your webapp, you can configure Surefire to skip the test phase, then run = in the integration-test phase. See this page.
If you need to run both unit and integration tests in the same module, i= t's possible, just not very pretty.
There is only one testSourceDirectory per module, so all of your test cl= asses must reside in one directory structure, usually src/test/java.
In the 'test' phase, configure Surefire to exclude the tests in (for exa=
mple)
**
tes= ts.
If you use this approach, you keep all your tests for a module in the te= stSourceDirectory, e.g. src/test/java. By default the Failsafe Maven Plugin= looks for integration tests matching the patterns */IT.ja= va, **/IT.java and */*ITCase.java. You will notice t= hat>=20
You will then have the following lifecycle bindings
The advantage to using the Maven Failsafe Plugin is that it will not sto=
p the build during the integration-test phase if there are=
test failures. The recommendation is that you do not directly invoke=
the pre-integration-test, integration-test or post-integration-test phases but that instead you ru=
n integration tests by specifying the verify phase, e.g.=
p>
This allows you to set-up your integration test environment during the <=
strong>pre-integration-test
mvn verify
=20
This allows you to set-up your integration test environment during the <= strong>pre-integration-testphase, run your integration tests duri= ng the integration-test phase, cleanly tear-down = your integration test environment during the post-integration-test<= /strong> phase before finally checking the integration test results and fai= ling>=20
Rumor has it that a future version of Maven will support something like = src/it/java in the integration-test phase, in addition to src/test/java in = the test phase. | http://docs.codehaus.org/exportword?pageId=63286 | 2014-04-16T16:08:40 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.codehaus.org |
The Groovy command line (groovy or groovy.bat) is the easiest way to start using the Groovy Language.
$groovy -help usage: groovy
")'()" | http://docs.codehaus.org/pages/viewpage.action?pageId=8635 | 2014-04-16T16:44:05 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.codehaus.org |
Administration Guide
Local Navigation
Configuring EAP-TTLS authentication
If your organization implements EAP-TTLS authentication, Wi-Fi® enabled BlackBerry® devices must authenticate to an authentication server so that they can connect to the enterprise Wi-Fi network.
EAP-TTLS authentication requires that BlackBerry devices trust the authentication server certificate. To trust the authentication server certificate, BlackBerry devices must trust the certificate authority that issued the certificate. A certificate authority that the BlackBerry devices and the authentication server trust mutually must generate the authentication server certificate.
Each BlackBerry device stores a list of explicitly trusted certificate authority certificates. BlackBerry devices that use EAP-TTLS authentication require the root certificate for the certificate authority that created the authentication server certificate.
To distribute the root certificate to BlackBerry devices, you can use the certificate synchronization tool in BlackBerry® Desktop Manager or you can enroll the certificate over the wireless network.
For more information about how the BlackBerry® Enterprise Solution supports EAP-TTLS authentication, see the BlackBerry Enterprise Server Security Technical Overview.
Configure EAP-TTLS authentication data, perform the following actions:
- If required, configure the following configuration settings:
- Click Save All.
Configure EAP-TTLS configuration settings in the Wi-Fi profile on a BlackBerry device
- On the BlackBerry device, in the device options, click Wi-Fi Connections.
- Click the Wi-Fi profile that you want to change.
- Click Edit.
- In the Security Type list, select EAP-TTLS.
- Type the user name and password for the messaging server.
- In the CA certificate list, click the root certificate for the certificate authority that created the authentication server certificate.
- In the Inner link security type list, select EAP-MS-CHAPv2.
- If necessary, in the Server subject field, type the server name in the server certificate, in URL format (for example, server1.domain.com or server1.domain.net). If you leave the field blank, the BlackBerry device skips over it during server authentication.
- If necessary, in the Server SAN field, type the alternative name for the server, in URL format (for example, server1.domain.com or server1.domain.net). If you leave the field blank, the BlackBerry device skips over it during server authentication.
- If your organization use.
- Verify that the Allow inter-access point handover option is selected.
- If necessary, select the Notify on authentication failure check box.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/admin/deliverables/25767/Configuring_EAPTTLS_authentication_602755_11.jsp | 2014-04-16T17:16:00 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.blackberry.com |
Overview
We.
Assumptions
There is an assumption that there is only one repository for deployment. This is enforced by the POM currently, and a good idea nonetheless. Mirroring should be external.
Remote Repository.
Artifact Resolution.
Local Repository).
Modifying Download Behaviour
Snapshot Policies.
Snapshot Update Argument.
Ignoring Local Snapshots.
Once per Session
The code shall only check remote repositories for a particular SNAPSHOT once per session (as any deployment also installs locally so is in sync) - this will avoid the overhead of always checking, or doing local file calculations multiple times in a reactor build.
Share timestamp.
Universal Source Directory.. | http://docs.codehaus.org/pages/diffpagesbyversion.action?pageId=22585&selectedPageVersions=18&selectedPageVersions=19 | 2014-04-16T16:41:56 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.codehaus.org |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.