Our goal is very simple:
To make developing end-to-end web applications easy and scalable.
We focus on real use cases too, and this is why we have a demo gallery where you can see how to solve some of the most common requirements using ConanJs.
At the core, we focus on:
Conan Data: With Conan Data you can manage your application state by dividing it into capsules of reusable state, which are ultimately plain JS objects (sketched below).
To read more about how we compare to traditional state management in React, have a look at this article.
Conan Runtime: A major part of our runtime is dependency injection. With ConanJs, the same framework lets you not only manage your state but also wire up your dependencies through dependency injection.
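To make the idea of a capsule concrete, here is a minimal conceptual sketch. It is not ConanJs code; the `counterCapsule` shape and its field names are invented for illustration. The only point it makes is that a reusable slice of state can be expressed as a plain JS object plus plain functions.

```ts
// Conceptual illustration only: these names are made up and this is not
// ConanJs's actual API. The point is that a "capsule" of state can be a
// plain JS object, some data plus the updates you allow on it.
interface CounterState {
  count: number;
}

const counterCapsule = {
  // the state itself is a plain object...
  initialState: { count: 0 } as CounterState,
  // ...and the allowed updates are plain functions over that object
  updates: {
    increment: (state: CounterState): CounterState => ({ count: state.count + 1 }),
    reset: (): CounterState => ({ count: 0 }),
  },
};

// Because the capsule is just data and functions, it can be reused in any
// scope (local, per subtree, or global) and tested without any framework:
const next = counterCapsule.updates.increment(counterCapsule.initialState);
console.log(next.count); // 1
```

Because nothing in the capsule depends on a framework, the same object can be handed to whatever scope happens to need it.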
Let's look at these two, Conan Data and Conan Runtime, in more detail.
These are the two pillars on which ConanJs is built.
If you have developed web applications before, you have dealt with state.
We consider state to be all the changing data and metadata that you need to manage to fulfil your business requirements.
We provide you with Conan Data (state and flows), which gives you a unified approach to managing state.
State in ConanJs is built so that you don't have to decide what pattern to use when dealing with state.
Some of the patterns that you might be familiar with, and that ConanJs unifies, are:
Local state via setState / hooks / callbacks
Context state via hooks
Global state via Redux / Redux Toolkit / selectors...
You can see this in our scoping section; the sketch below shows what these patterns look like before ConanJs unifies them.
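To make the comparison concrete, here are the three patterns written with plain React and Redux APIs (useState, createContext/useContext, and useSelector/useDispatch). None of this is ConanJs code; it is the status quo that ConanJs sets out to unify.

```tsx
import React, { createContext, useContext, useState } from "react";
import { useDispatch, useSelector } from "react-redux";

// 1. Local state: owned by a single component through useState.
function LocalCounter() {
  const [count, setCount] = useState(0);
  return <button onClick={() => setCount(count + 1)}>{count}</button>;
}

// 2. Context state: shared down a subtree through a context plus a hook.
const ThemeContext = createContext<"light" | "dark">("light");
function ThemedLabel() {
  const theme = useContext(ThemeContext);
  return <span>Current theme: {theme}</span>;
}

// 3. Global state: held in a Redux store and read through selectors.
function GlobalCounter() {
  const count = useSelector((state: { count: number }) => state.count);
  const dispatch = useDispatch();
  return (
    <button onClick={() => dispatch({ type: "increment" })}>{count}</button>
  );
}
```

Each of the three requires different wiring and a different mental model; the promise of Conan Data is that the same capsule of state can be scoped locally, per subtree, or globally without being rewritten.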
Ultimately we want state to be completely decoupled from the views; our end goal is to enable developers to write their views as if they were stateless.
To help with this, and to help developers decouple their code, we provide dependency injection.
We also provide a logging system that you can reuse in your app and that is used internally by everything else in ConanJs.
Finally, we also provide ASAPs, our own implementation of Promises, which is fully compatible with your existing promises.
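As a rough illustration of what "fully compatible with your existing promises" can mean in practice, the sketch below awaits an ASAP-like value and mixes it with native promises. The `fetchUserAsap` helper is hypothetical and stubbed with a plain promise; the only assumption made is that an ASAP behaves like a thenable.

```ts
// "fetchUserAsap" is a made-up stand-in for whatever produces an ASAP in your
// app; it is not ConanJs's real API. It is stubbed here so the sketch runs.
const fetchUserAsap = (id: string): PromiseLike<{ name: string }> =>
  Promise.resolve({ name: `user-${id}` });

async function greet(id: string): Promise<string> {
  const user = await fetchUserAsap(id);    // awaited like any other promise
  await Promise.all([fetchUserAsap(id)]);  // usable with standard combinators
  return `Hello, ${user.name}!`;
}

greet("42").then(console.log); // logs "Hello, user-42!"
```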
Still confused? We don't blame you!
Depending on your background, it might help to understand why you should use ConanJs.
Have a look at: Why ConanJs...
You might otherwise prefer to start looking at our examples...
... or maybe you prefer reading up, in which case we would recommend starting by reading about Conan Data.
ConanJs, as of v1.0, has a lot to offer, and we think it is ready to be considered an alternative to Redux or vanilla state management.
But we think that there is a lot more that can be done.
In the next releases we would like to:
Add support for other frameworks (Angular and Vue to start with)
Cover more use cases in our Demos
Forms and validations
Layouts
Navigation
CRUD
Allow for even more complex state to be created.
Lists
Streams
Add support for transactions to Conan Data
Commit
Rollback
Undo/Redo...
Reduce even further the amount of boilerplate code needed.
When creating async actions
Flows
Adding reactions
Introduce the concept of resources to encapsulate the logic around fetching data.
Provide backend implementations for our demos to illustrate end-to-end scenarios
All this is to support our goal:
To make developing end-to-end web applications easy and scalable.
As you can see, we have a lot of work ahead of us.
If you like what you see, you will find our About Us page below; shoot us a message, give us a GitHub star, send a tweet, etc.
Change a device role
The ExtraHop system automatically discovers and classifies devices on your network.? | https://docs.extrahop.com/7.7/change-device-role/ | 2021-04-10T14:41:19 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.extrahop.com |
API Data Types and Notes
The following guide applies to our API at runtime and data exchange.
API Success Messages
All successful calls to the FS API will return HTTP status code 200 OK or status code 202 Accepted. You will receive a HTTP 202 when the request you are making has already been previously made, or is identical to a previous request.
API Failure Messages
Error conditions return HTTP 4xx error codes.
Error codes can also include authentication and permissions errors calling the API at runtime (this is distinct from Authentication API errors documented above):
For responses that are 4xx error HTTP codes, where the client is authorized, the X-Error-Status-Message HTTP response HTTP header may contain a string with additional diagnostic information about the failed reasons.
Field Squared will log all failed API calls in the Integration Logs available in App Builder. You can also log all incoming and outgoing API request traffic as raw HTTP traffic for a period of up to 1 hour in the past to aid with debugging.
For a delete requests, the system will return:
- If the record is completed, the system will return HTTP 200 OK
- If the record cannot be deleted because it does not exist, the system will return HTTP 404 Not Found
For create and update requests
- if the record is created, the system will return HTTP 200 OK
- if the record exists, the system will return HTTP 200 OK
- the system will generate HTTP error code 404 Not Found only if the object being created or updated is referencing another object that exist (eg. a Task that references a User that does not exist)
Duplicate requests will be returned as HTTP 202 Accepted. This means we have already previously processed this request (because the Request-ID in the HTTP header has already been processed).
Working with Dates and Times
Date-Time values
Field Squared specifies date times using ISO-8901 dates in UTC time
YYYY-MM-DDTHH:MM:SSZ
For example 2:15pm on November 16 2016 in USA Mountain Time would be: "2016-11-16T21:15:00Z"
Date times will be localized at runtime to the time zone of each device viewing the date and times.
Dates
Dates are specified as an absolute date using year, month and day as a string.
YYYY-MM-DD
For example, November 16, 2016 would be: "2016-11-16". There is no UTC offset for dates and dates will display on all clients the same way regardless of the time zone of each client.
Times
Times in Field Squared are generally specified as absolute times with no UTC offset. The values are displayed the same way on all clients. Times are specified in hours, minutes and seconds.
HH:MM:SS
Using XML
The following sections cover some notes around using XML
Prologs and schemas
For XML, all messages are standard XML. The XML prolog and namespaces are optional. For example
<?xml version="1.0" encoding="UTF-8"?> <Authentication xmlns: <Email>String</Email> <Password>String</Password> </Authentication>
is treated exactly the same as
<Authentication> <Email>String</Email> <Password>String</Password> </Authentication>
XML Elements vs. Attributes
Messages should be sent with values stored in inner XML and not as attributes.
This is valid XML for the Field Squared API:
<Task> <Name>My Task</Name> <Status>Not Started</Status> </Task>
The following XML will not work with the Field Squared API as it relies on XML attributes instead of elements:
<Task name="My Task" status="Not Started"> </Task>
Empty vs. Missing vs. Properties in API Calls
Empty/NULL XML elements work differently in the Field Squared API.
The following payload means "Set Due Date to NULL".
<Task> <Name>My Task</Name> <Status>Not Started</Status> <DueDate></DueDate> </Task>
The following payload would not update the Due Date field, since it's not specified in the XML body.
<Task> <Name>My Task</Name> <Status>Not Started</Status> </Task>
Empty and NULL elements do not mean the same thing in the Field Squared API. | https://docs.fieldsquared.com/knowledge-base/api-usage-notes/ | 2021-04-10T14:57:58 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['https://docs.fieldsquared.com/wp-content/uploads/2017/01/http-success-600x518.png',
None], dtype=object)
array(['https://docs.fieldsquared.com/wp-content/uploads/2017/01/http-fail-1-600x251.png',
None], dtype=object)
array(['https://docs.fieldsquared.com/wp-content/uploads/2017/01/http-fail-2-600x317.png',
None], dtype=object) ] | docs.fieldsquared.com |
.
Controlling Segment Parallelism
The gp_external_max_segs server configuration parameter controls the number of segment instances that can access a single gpfdist instance simultaneously. 64 is the default. You can set the number of segments such that some segments process external data files and some perform other database processing. Set this parameter in the postgresql.conf file of your master instance. array (segments and master):
$ wget
The CREATE EXTERNAL TABLE definition must have the correct host name, port, and file names for gpfdist. Specify file names and paths relative to the directory from which gpfdist serves files (the directory path specified when gpfdist started). See Examples for Creating External Tables.
If you start gpfdist on your system and IPv6 networking is disabled, gpfdist displays this warning message when testing for an IPv6 port.
[WRN gpfdist.c:2050] Creating the socket failed
If the corresponding IPv4 port is available, gpfdist uses that port and the warning for IPv6 port can be ignored. To see information about the ports that gpfdist tests, use the -V option.
For information about IPv6 and IPv4 networking, see your operating system documentation.. aborts after the specified number of seconds, creates a core dump, and sends abort information to the log file.
export GPFDIST_WATCHDOG_TIMER=300 | https://docs.greenplum.org/6-13/admin_guide/external/g-using-the-greenplum-parallel-file-server--gpfdist-.html | 2021-04-10T14:13:30 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../graphics/ext_tables_multinic.jpg', None], dtype=object)
array(['../graphics/ext_tables.jpg', None], dtype=object)] | docs.greenplum.org |
What's in the Release NotesThe release notes cover the following topics:
- About vRealize Suite Lifecycle Manager 1.2
- What's New in vRealize Suite Lifecycle Manager 1.2
- Resolved Issues
- Known Issues
About vRealize Suite Lifecycle Manager 1.2
VMware vRealize™ Suite 2017 1.2 first to simplify your deployment and on-going management of the vRealize products.
What's New in vRealize Suite Lifecycle Manager 1.2
- Install pre-checker allows you to validate the environment create requests before starting the deployment. Pre-validations include user input, infrastructure resource availability, and product-specific checks.
- Content lifecycle management in the product settings and environment creation pages. Product binary mapping, environment creation using the configuration file, and grouping user input fields are some of the product pages that have been revamped.
Limitations
- Day 2 operations fail after you deploy or import an environment by using vRealize Suite Lifecycle Manager
After deploying the vRealize suite of products by using vRealize Suite Lifecycle Manager, if any of the VMs are migrated across a vCenter, those product's VMs will not be managed by vRealize Suite Lifecycle Manager.
- Delete environment or snapshot fails when vCenter credentials are changed after the environment is deployed
When you change vCenter credentials after an environment is imported or deployed successfully in vRealize Suite Lifecycle Manager, delete environment operations and snapshots in the environment fail.
- Snapshot for VMs remain in the "in progress" state in vRealize Suite Lifecycle Manager
Snapshot for VMs that have device backing are not supported and remain in the "in progress" state in vRealize Suite Lifecycle Manager. vCenter does not support the snapshot for VMs that have device backing
- Test and deployment of a content is getting failed in an Azure machine
vRealize Suite Life Cycle Manager does not support an Azure machine in content management for testing and releasing content.
- Content management does not capture fields that are marked as secured while capturing vRealize Automation Content.
If the content has a secure field, like a password that is not present in the plain text, content management does not capture those fields.
- You can see multiple catalog app tiles and application links of vRealize Suite Lifecycle Manager after your registration of VMware Identity Manager
After you register with VMware Identity Manager in vRealize Suite Lifecycle Manager, and you click Update from the Settings > User Management tab multiple times, an equal number of catalog application tiles and application links of vRealize Suite Lifecycle Manager are created.
- vRealize Operations upgrade failed at application upgrade task after completing the OS upgrade task and the cluster does not come online
Cassandra failed in one of the vRealize Operations nodes and caused the vRealize Operations cluster to not come online.
- Test and deployment of XAAS blueprint "Azure Machine" shipped by default with vRealize Automation fails
XaaS blueprint "Azure Machine" is shipped by default with vRealize Automation. However, transfer of XaaS blueprint between vRealize Automation environments is not supported.
Resolved Issues
- In vCenter, if a duplicate cluster is found inside a data center, only the first cluster is listed in the vRealize Suite Lifecycle Manager install wizard
In vCenter, if a duplicate cluster is found inside a data center, after vCenter data collection completes, only the first cluster appears in the wizard.
- If the upgrade of vRealize Suite Lifecycle Manager from 1.0 to 1.1 fails, no error message appears and you must restore to the last known good state from a snapshot.
Create a snapshot of the vRealize Suite Lifecycle Manager 1.0 working state before you upgrade. If a failure occurs, no indication appears in the UI.
- Products upgraded in vRealize Lifecycle Manager 1.1 that are installed in vRealize Lifecycle Manager 1.0 do not get the upgraded version in the View Details page
vRealize Suite Product versions are not updated in View Details page if the products are installed in 1.0 and upgraded in 1.1.
Known Issues
- [New] vRealize Automation install a database task fails even when a database is already present
vRealize Automation install a database task fails even though you have given the database user name and password with a useExistingdb flag set as true and useWindowsAuthentication flag set as false in the config file.
Workaround: If you are creating the environment using the configuration file, if there are any advanced properties values present in the config file, you have to edit the advanced properties value in the install wizard and validate the useExistingdb and useWindowsAuthentication checkbox.
- [New] When adding a local path, the Product Binaries downloads are switching to My VMware Downloads.
The local path tries to discover binaries with Base Location starting with folder name '/data/myvmware/' and the radio button My VMware Downloads is selected after you select Local option for Location Type.
Workaround: Do not save binaries in the folder starting with /data/myvmware when you select Local as a location type.
- [New] You cannot setup MyVMware account using LDAP based authentication mechanism
vRealize Suite Life Cycle Manager does not support the Proxy configuration with LDAP based authentication. Therefore, you cannot setup MyVMware account.
Do not use LDAP based proxy authentication when using LCM as it is not supported currently.
- Users other than local vIDM admin users can not be used for vIDM registration in vRealize Suite Lifecycle Manager
When registering an external vIDM into vRealize Suite Lifecycle Manager, the credentials of users other than local vIDM admin users does not lead to a successful registration.
Workaround: Use local vIDM admin user credentials for registration of vIDM to vRealize Suite Lifecycle Manager.
- Upgrade of vRealize Business for Cloud with vRealize Suite Lifecycle Manager does not upgrade remote data collectors
When upgrading vRealize Business for Cloud in the cross-region environment with vRealize Suite Lifecycle Manager, the vRealize Business for Cloud remote data collector appliances are not upgraded.
- Add Products and Scale Out actions fail when you configure the certificate for a product
When you use the Add Products or Scale Out actions to modify an environment, the product can fail if the new product host names or the components are not present in the SAN certificate provided when you create the environment for the first time.
Workaround: Generate a single SAN certificate with all the product or management virtual host names or a wild card certificate and provide this certificate when you create the environment for the first time. This ensures support for post provisioning actions such as Add Products and Scale Out.
- Full /data disk causes log bundle downloads, OVA mappings, and product deployments to fail
All data in the vRealize Suite Lifecycle Manager virtual appliance, such as product binaries and product support log bundles, are stored in the
/datafolder. The default size of the
/datafolder is 100 GB. When the size of
/datareaches 100 GB, log bundle downloads, OVA mappings, product deployments fail.
Workaround: Increase the size of
/datain the vRealize Suite Lifecycle Manager virtual appliance.
- Power off the vRealize Site Lifecycle Manager virtual appliance in vCenter Server.
- Edit the vRealize Suite Lifecycle Manager virtual appliance in vCenter to increase the size of the second disk from 100 GB to a higher value.
- Power on the vRealize Suite Lifecycle Manager virtual appliance.
- Log in to the vRealize Suite Lifecycle Manager virtual appliance.
- Run the command
df -hto verify that the disk size of
/datais increased to the value you specified.
- Remediation does not modify the property Configure ESXi Hosts in vRealize Log Insight vSphere integration
When you trigger remediation in vRealize Suite Lifecycle Manager, the process does not modify the vRealize Log Insight vSphere integration property Configure ESXi Hosts.
Workaround: Modify the property Configure ESXi Hosts in vRealize Suite Lifecycle Manager vSphere integration endpoint configuration.
- Admin logged into vRealize Operations has Read-Only Access
If VIDM is integrated with vRealize Suite Lifecycle Manager, when vRealize Operations is deployed from vRealize Suite Lifecycle Manager, administrative privilege are not assigned to the VIDM Admin.
Workaround: After vRealize Operations is successfully deployed from vRealize Suite Lifecycle Manager:
1. Log in to the vRealize Operations UI by accessing https://
/ui and by using the local Administratoraccount.The default user name is 'admin' and password is same as the default password provided in vRealize Suite Lifecycle Manager during the vRealize Operations deployment. 2. Navigate to Administration > Access Control.The necessary additional privileges might be granted to the VIDM Admin. For detailed information on setting up access control in vRealize Operations, see:
- vRO package name cannot contain special characters and the package is not validated test and deploy.
Workaround: None
- vRealize Suite Life Cycle Manager Cloud Admin can view the Content Management tab but does not have access to the vRealize Suite Life Cycle Manager UI
vRealize Suite Life Cycle Manager UI is not accessible whereas the content management tab is visible for a cloud admim.
Workaround: Add a new role as a developer or a release manager.
- Inconsistent pre-stub and post-stub test execution in the content pipeline with multiple endpoints
When multiple endpoints are involved in a test or release, the deploy workflow runs pre-stubs once per endpoint and post-stub once for the whole job.
None
- vRealize Business for Cloud integration with vRealize Automation works only when vRealize Automation is added to a private cloud environment before vRealize Business for Cloud
Fresh deployment of vRealize Business for Cloud integration with vRealize Automation works only when vRealize Automation is added to a private cloud environment before vRealize Business for Cloud.
To integrate vRealize Buisness for Cloud with vRealize Automation, add vRealize Automation to the private cloud environment before or at the same time you add vRealize Business for Cloud.
- vRSLCM Marketplace displays an error while installing the content
When some of the market place content is tagged incorrectly, then an error while installation appears. For example, following content is incorrectly tagged to vRealize Automation:
- vRealize Orchestrator OpenStack Plug-in
- Linux Guest OS Script Execution Blueprint
- vRealize Automation and Storage Policy
- LAM + Java Stack Blueprint
- LAMP Stack Blueprint
- Microsoft SharePoint 2013
- Three-Tier Service Pattern
- Microsoft SQL Server 2014
- Add User to User Group in AD
- Add DNS Record Blueprint
- Change User Password in AD
- Create Simple NSX Security Group
- Create User Group in AD Blueprint
- Create User in AD Blueprint
- Delete DNS Record Blueprint
Workaround: None
- You cannot enter details for User DN if LCM is upgraded from 1.0 or 1.1 with pre-integrated VMware Identity Manager
This issue occurs because the User DN field is introduced only from LCM version 1.2. This field was not present in earlier versions of LCM and hence you cannot edit the AD details in LCM after attaching a vIDM. For more information, see KB article 56336. | https://docs.vmware.com/en/vRealize-Suite/2017/rn/vRealize-Suite-Lifecycle-Manager-12-Release-Notes.html | 2021-04-10T15:09:26 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.vmware.com |
htmlfill¶
Contents
Introduction¶
formencode.htmlfill is a library to fill out forms, both with default
values and error messages. It’s like a template library, but more
limited, and it can be used with the output from other templates. It
has no prerequesites, and can be used without any other parts of
FormEncode.
Usage¶
The basic usage is something like this:
>>> from formencode import htmlfill >>>>> defaults = {'fname': 'Joe'} >>> htmlfill.render(form, defaults) '<input type="text" name="fname" value="Joe">'
The parser looks for HTML input elements (including
select and
textarea) and fills in the defaults. The quintessential way to
use this would be with a form submission that had errors – you can
return the form to the user with the values they entered, in addition
to errors.
See
formencode.htmlfill.render() for more.
Errors¶attribute.
The default formatters available to you:
default:
- HTML-quotes the error and wraps it in
<span class="error-message">
none:
- Puts the error message in with no quoting of any kind. This allows you to put HTML in the error message, but might also expose you to cross-site scripting vulnerabilities.
escape:
- HTML-quotes the error, but doesn’t wrap it in anything.
escapenl:
- HTML-quotes the error, and translates newlines to
<br>
ignore:
- Swallows the error, emitting nothing. You can use this when you never want an error for a field to display.
Valid form templates¶. | https://formencode.readthedocs.io/en/stable/htmlfill.html | 2021-04-10T15:34:42 | CC-MAIN-2021-17 | 1618038057142.4 | [] | formencode.readthedocs.io |
How to Create a Not Spam Report Button in Outlook
Introduction
When you receive an e-mail that you believe should not had been marked as [SUSPECTED SPAM], you can typically forward that e-mail as an attachment to our team to [email protected]. This usually involves selecting the e-mail in question, clicking on More in the Respond Outlook ribbon toolbar, selecting Forward as Attachment (Figure 1) filling out the To field with [email protected] and clicking Send (Figure 2).
Figure 1
Figure 2
While this is not very time consuming, you can simplify this process by creating a NOT SPAM button in Outlook to speed up the process.
Create a Not Spam Report Button
In the Outlook Quick Steps ribbon toolbar, click on Create New (Figure 3).
Figure 3
In the Edit Quick Step window, set the following:
Set the Name: field to NOT SPAM - DEEZTEK (Figure 4).
Figure 4
Click the Choose an Action drop-down and select Forward message as an attachment (Figure 5).
Figure 5
In the To... field enter [email protected] (Figure 6).
Figure 6
Click on Show Options, in the Subject: field enter [REPORT HAM]: <subject> and check the Automatically send after 1 minute delay checkbox (Figure 7).
Figure 7
Finally, click the Finish button (Figure 8).
Figure 8
Figure 10 | https://docs.deeztek.com/books/hosted/page/how-to-create-a-not-spam-report-button-in-outlook | 2021-04-10T14:02:45 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.deeztek.com |
There are several ways to navigate through documentation:
F1 Search
Search
Contents
F1 provides context help for the active window, dialog box. It always returns no more than one page.
Use Search to return all documents that match any specified term or a set of terms.
The table of contents shows all documentation topics in a hierarchical tree view structure and provides a convenient way to browse help. If you opened topic from index or search or from the link and you want to know where this topic is located, use the Locate button or switch to the Contents tab. | https://docs.devart.com/data-generator-for-oracle/user-interface-concepts/using-documentation.html | 2021-04-10T14:43:39 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.devart.com |
Note
Data Import Wizard pages can slightly differ due to the product you have been using.
Map the Source columns to the Target ones. If you are importing the data into a new table, Data Pump in the top and the Source columns at the bottom of the wizard page. Click Source column fields and select required columns from the drop-down list. (For more information about mapping, go to Mapping, Data Import Wizard topic.)
Note. For more information, go to Saving and Using Templates topic. | https://docs.devart.com/data-pump/importing-data/migrating-data-from-other-servers.html | 2021-04-10T15:09:41 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.devart.com |
Step By Step Videos to Integrate with Autoresponders/Payment Gateways
To find all our step by step videos please visit:
For Payment Processors, Marketing Emails, Content Management and WordPress Plugin section, there are more videos than can be shown initially on the menu. Please click VIEW MORE to find the one you are wanting. | https://docs.promotelabs.com/article/765-step-by-step-integration | 2021-04-10T13:54:38 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5d3027430428634786750e2e/file-zpOyeqSenA.png',
None], dtype=object) ] | docs.promotelabs.com |
Removing a Palette
You can remove palettes from your Palette list if they're not needed in your scene. The actual palette file will not be deleted, so you can add it back to your palette list later if you need it.
NOTE
If you are using Harmony Server, make sure you have the rights to modify the palette list by doing one of the following:
- From the top menu, open the Edit menu and ensure the Edit Palette List Mode option is checked
- Right-click on the palette list and select Get Rights to Modify Palette List.
- From the Colour view menu
, select Palettes > Get Rights to Modify Palette List
><<. | https://docs.toonboom.com/help/harmony-20/premium/colour/remove-colour-palette.html | 2021-04-10T15:04:43 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Trad_Anim/004_Colour/HAR11_remove_palette.png',
None], dtype=object)
array(['../Resources/Images/HAR/Stage/Colours/Steps/018_deletepalette.png',
None], dtype=object) ] | docs.toonboom.com |
Pivotal Container Service (PKS) enables enterprises and service providers to simplify the deployment and operations of Kubernetes-based container services.
Using PKS containers offers the following key features:
- High availability
- PKS has a built-in fault tolerance complete with routine health checks and self-correcting capabilities for Kubernetes clusters.
- Advanced networking and security
- PKS is deeply integrated with NSX-T for advanced container networking including micro-segmentation, load balancing, and security policies.
- Streamlined operations
- PKS provides cluster deployment and lifecycle management of Kubernetes.
- Multi-tenancy
- PKS supports multi-tenancy for workload isolation and privacy within enterprise and cloud service. | https://docs.vmware.com/en/vRealize-Automation/7.5/com.vmware.vra.prepare.use.doc/GUID-F7A4248A-DDB1-4971-A8C9-50E64816C14A.html | 2021-04-10T15:28:15 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.vmware.com |
token_embedder
TokenEmbedder¶
class TokenEmbedder(torch.nn.Module, Registrable)
A
TokenEmbedder is a
Module that takes as input a tensor with integer ids that have
been output from a
TokenIndexer and outputs
a vector per token in the input. The input typically has shape
(batch_size, num_tokens)
or
(batch_size, num_tokens, num_characters), and the output is of shape
(batch_size, num_tokens,
output_dim). The simplest
TokenEmbedder is just an embedding layer, but for
character-level input, it could also be some kind of character encoder.
We add a single method to the basic
Module API:
get_output_dim(). This lets us
more easily compute output dimensions for the
TextFieldEmbedder,
which we might need when defining model parameters such as LSTMs or linear layers, which need
to know their input dimension before the layers are called.
default_implementation¶
class TokenEmbedder(torch.nn.Module, Registrable): | ... | default_implementation = "embedding"
get_output_dim¶
class TokenEmbedder(torch.nn.Module, Registrable): | ... | def get_output_dim(self) -> int
Returns the final output dimension that this
TokenEmbedder uses to represent each
token. This is
not the shape of the returned tensor, but the last element of that shape. | https://docs.allennlp.org/main/api/modules/token_embedders/token_embedder/ | 2021-04-10T14:44:03 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.allennlp.org |
DeleteEndpoint
Deletes the endpoint for a device and mobile app from Amazon SNS. This action is idempotent. For more information, see Using Amazon SNS Mobile Push Notifications.
When you delete an endpoint that is also subscribed to a topic, then you must also unsubscribe the endpoint from the topic.
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
- EndpointArn
EndpointArn of endpoint to delete.
Type: String
Required: Yes
Examples
The structure of
AUTHPARAMS depends on the signature of the API request.
For more information, see Examples
of Signed Signature Version 4 Requests in the Amazon Web Services General Reference.
Example
This example illustrates one usage of DeleteEndpoint.
<DeleteEndpointResponse xmlns=""> <ResponseMetadata> <RequestId>c1d2b191-353c-5a5f-8969-fbdd3900afa8</RequestId> </ResponseMetadata> </DeleteEndpointResponse>
See Also
For more information about using this API in one of the language-specific AWS SDKs, see the following: | https://docs.aws.amazon.com/sns/latest/api/API_DeleteEndpoint.html | 2021-04-10T15:40:09 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.aws.amazon.com |
Biz before its final version is released.
There are three main zones:
- the header. This is the location users can customize for branding purposes,
- the navigation bar (on the left on a light blue background) which displays all views and activities. This is the entry point to most features of the portal,
- the content where users will interact with data. The main page displays a quick summary of importan BAM concepts.
The header:
The header displays branding information (which can be customized for your company), the contextual help link and the current location. On the screenshot below, I am on the Home page. You can always come back to the home page by clicking on the "Home" icon at the left of the header.
The navigation bar:
The navigation bar displays all views / activities which can be accessed by the current user. On the picture below, I have access to one view (SalesManagerView) under which there is currently one activity ("PurchaseOrder").
I can search for instances of activity "PurchaseOrder" by clicking on "PurchaseOrder" under "Activity Search". I can view aggregated data by clicking on one of the entries under "Aggregations". Finally, I can manage alerts by clicking on "PurchaseOrder" under the "Alert Manager" node. The navigation bar is the main entry point to most BAM Portal features. It will always be displayed on the left of the window.
Join me tomorow as I explore the Activity Search, Aggregations and the Alert Manager. | https://docs.microsoft.com/en-us/archive/blogs/gzunino/biztalk-server-2006-what-is-new-with-business-activity-monitoring | 2021-04-10T14:31:00 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['http://frenchgilles.members.winisp.net/blog/BAMPortal/BAMPortal_Header.jpg',
'BAM Portal Header'], dtype=object)
array(['http://frenchgilles.members.winisp.net/blog/BAMPortal/BAMPortal_NavBar.jpg',
'Navigation Bar'], dtype=object) ] | docs.microsoft.com |
- Docs
- Positive WordPress Theme
-
- Tips / Guide
-
- Language Translations
Language Translations
Estimated reading : 1 minute save. | https://docs.droitthemes.com/docs/positive-wp/tips-guide/language-translations/ | 2021-04-10T14:25:12 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['https://docs.droitthemes.com/wp-content/themes/ddoc/assets/images/Still_Stuck.png',
'Still_Stuck'], dtype=object) ] | docs.droitthemes.com |
How Apps Become Available to Your Customers
The list of apps available to a customer depends on various factors, such as the service plan settings, the Application Vault configuration, and so on. Moreover, service providers are able to forbid accessing apps for all Plesk users. If you do not adjust the app availability, your customers will see all apps from Application Catalog and all apps you uploaded to the Vault.
To view the list of apps available to a certain customer, go to the Applications section > All Available Applications. Before an app becomes available in the app list of a certain customer, it passes through a series of filters. The app is filtered on the following levels:
- Application Vault. Plesk lets you toggle the availability of APS packages you have uploaded to the Vault. Note that this works only for your own packages: There is no way to control the availability of apps downloaded from the Catalog. Learn more about apps management in the Managing Apps with the Application Vault section.
- Service plan. Plesk allows you to specify what apps to include in a certain service plan. The filter affects all customers with this service plan.
- Subscription. If you want to select the apps available to a particular customer, update the apps list in the respective subscription.
The resulting app list is available to your customers. | https://docs.plesk.com/en-US/onyx/administrator-guide/server-administration/web-applications/how-apps-become-available-to-your-customers.68949/ | 2021-04-10T15:11:22 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.plesk.com |
Welcome to ElectrumSV’s documentation!¶
ElectrumSV is a wallet application for Bitcoin SV, a peer to peer form of electronic cash. As a wallet application it allows you to track, receive and spend bitcoin whenever you need to. But that’s just the basics, as it manages and secures your keys it also helps you to do many other things.
Important
ElectrumSV can only be downloaded from electrumsv.io.
Getting started¶
Before you can send and receive payments, you need to first create a wallet, and then create at least one account within it.
- How do you create a wallet?
Your wallet is a standalone container for all your bitcoin-related data. You should be able to create as many accounts as you need within it, each account containing separated funds much like a bank account. Read more about creating a wallet.
- How do you create an account?
Each account in your wallet is much like a bank account, with the funds in each separated from the others. Read more about creating an account.
- How do you receive a payment from someone else?
Each account has the ability to provide countless unique and private receiving addresses and by giving a different one of these out to each person who will send you coins, allows you to receive funds from them. Read more about receiving a payment.
- How do you make a payment to someone else?
By obtaining an address from another person, if you have coins in one of your accounts, you should be able to send some or all of those coins to that address. Read more about making a payment.
Problem solving¶
- Why doesn’t my hardware wallet work?
Hardware wallet makers do not provide anywhere near enough support for their devices, and some have a history of making breaking changes that stop them working in ElectrumSV. If your hardware wallet does not work then this is where you should look for some pointers, whether the device is a Trezor, a Ledger, a Keepkey or a Bitbox. Read more about hardware wallet issues.
- How do I split my coins?
If you have coins you have not touched since before Bitcoin SV and Bitcoin Cash split from each other, you might want to make sure that you can send one of these without accidentally sending the other. Read more about coin splitting.
Building on ElectrumSV¶
- How can I access my wallet using the REST API?
For most users, accessing their wallet with the user interface will be fine. But if you have a minimal amount of development skill the availability of the REST API gives you a lot more flexibility. The REST API allows a variety of actions among them loading multiple wallets, accessing different accounts, obtaining payment destinations or scripts from any of the accounts. Perhaps you want to add your own interface for your wallet or maybe automate how you use it. Read more about the REST API.
- How would I extend ElectrumSV as a customised wallet server?
The REST API is limited in what it can do by nature. Getting the ElectrumSV development team to add what you want to it, is not guaranteed to happen, may not even be possible and if it was who knows how long it would take. An alternative is to build your own “daemon application” which is a way of extending ElectrumSV from the inside. Read more about customised wallet servers.
- Do I have to develop against the existing public blockchains?
ElectrumSV provides a way for developers to do offline or local development. customised wallet servers.
The ElectrumSV project¶
Perhaps you are a developer who already helps out on the ElectrumSV project, or you who would like to get involved in some way, or you are just curious about the processes and information related to project management and development. If so, this is the information you want.
- How can you contribute?
There are many ways that you can help the ElectrumSV project improve. If you want something to work in a different way, you can work on making it different and offer us the changes. If you feel the documentation could be better, you can improve it and offer us the changes. If you want ElectrumSV or anything related to it in your native language, you can offer to do the work to translate it. And that’s just a few of the possibilities. Read more about contributing.
- Where is the continuous integration and how is it used?
We use Microsoft’s Azure DevOps services for continuous integration. Microsoft provide generous levels of free usage to open source projects hosted on Github. This is used to do a range of activities for every change we make to the source code, from running the unit tests against each change on each supported operating system, to creating a packaged release for each system that can be manually tested. Read more about our use of continuous integration.
- What is the process of releasing a new version?
Because we generate packaged releases for every change we make, with a bit of extra work we can generate properly prepared public releases. This involves changing the source code so that the release has the content changes required for new version, and also publishing the release and updating the web site to have the content changes required to offer it for download. Read more about the release process. | https://electrumsv.readthedocs.io/en/latest/ | 2021-04-10T14:53:36 | CC-MAIN-2021-17 | 1618038057142.4 | [] | electrumsv.readthedocs.io |
- Manage Deployments >
- MongoDB Processes >
- Suspend or Resume Automation for a Process
Suspend or Resume Automation for a Process¶
On this page
Overview¶
You can suspend Automation’s control over a MongoDB process so that you can shut down the process for manual maintenance, without Automation starting the process back up again. Automation ignores the process until you return control.
When you resume Automation for a process, Cloud Manager applies any changes that occurred while Automation was suspended.
If you wish to permanently remove a process from automation, see: Disable Automation for a Deployment. | https://docs.cloudmanager.mongodb.com/tutorial/suspend-automation/ | 2018-06-18T01:49:01 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.cloudmanager.mongodb.com |
Any application, regardless of the programming language it is written in, requires a series of similar steps for being hosted. This instruction is devoted to the specifics of .NET projects’ deployment and running inside the Jelastic Cloud.
Although .NET projects currently cannot be imported from the GIT/SVN remote repositories due to the beta hosting stage, below we’ll examine a way of immediate direct deployment from the most popular .NET development IDE - Microsoft Visual Studio, as well as the process of local archive creation for manual deployment.So, log into your Jelastic account and let’s get started. | https://docs.jelastic.com/ru/deploy-dotnet-archive-url | 2018-06-18T02:16:30 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.jelastic.com |
noinit
Any data with the
noinit attribute will not be initialised by the C runtime startup code, or the program loader. Not initialising data in this way can reduce program startup times.
persistent
Any variable with the
persistent attribute will not be initialised by the C runtime startup code. Instead its value will be set once, when the application is loaded, and then never initialised again, even if the processor is reset or the program restarts. Persistent data is intended to be placed into FLASH RAM, where its value will be retained across resets. The linker script being used to create the application should ensure that persistent data is correctly placed.
lower
upper
either
These attributes are the same as the MSP430 function attributes of the same name (see MSP430 Function Attributes). These attributes can be applied to both functions and variables.
© Free Software Foundation
Licensed under the GNU Free Documentation License, Version 1.3. | http://docs.w3cub.com/gcc~7/msp430-variable-attributes/ | 2018-06-18T01:53:52 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.w3cub.com |
Run an import You can manually run an import to immediately import data. Procedure From a Transform Map, click Transform. When the import is done, you'll see a link to go straight to the target table containing your imported records. The amount of time that it takes to run an import varies depending on the number of record to be imported and may take as long as several hours for very large import operations (tens of thousands of records). [optional] Click on the link View the imported data to see the loaded import set table. [optional] Click on the link Create transform map to create a new transform map to transform the data in the import set table to its target table. [optional] Click on the link Run import to execute an existing transform map for the loaded data. Result Three things to note at this point: The spreadsheet was imported, and a new table was created to hold the data. Within that table, the imported records are designated with their own "Set" value. A new module was created in the System Import Sets application for the new table. | https://docs.servicenow.com/bundle/istanbul-platform-administration/page/administer/import-sets/task/t_RunImport.html | 2018-06-18T02:15:03 | CC-MAIN-2018-26 | 1529267859923.59 | [] | docs.servicenow.com |
Open Data Schema Map¶
This module provides a flexible way to expose your Drupal content via APIs following specific Open Data schemas. Currently, the CKAN, Project Open Data and DCAT-AP schemas are provided, but new schemas can be easily added through your own modules. A user interface is in place to create endpoints and map fields from the chosen schema to Drupal content using tokens.
This module was developed as part of the DKAN project, but will work on an Drupal 7 site. A separate module exists for DKAN-specific implementation.
Note that serious performance issues can result if you do not follow recommendations in the ODSM File Cache section.
Basic concepts¶
Schema¶
A schema is a list of field definitions, usually representing a community specification for presenting machine-readable data. The core Open Data Schema Map module does not include any schemas; they are provided by additional modules. A schema module includes:
- a standard Drupal .module file – with an implementation of
hook_open_data_schema()to expose the schema to the core Open Data Schema Map module, plus _alter functions for any needed modifications of the UI form or the data output itself.
- the schema itself, expressed as a .json file. For instance, see the Project Open Data schema file to see how these schema are defined in JSON
API¶
An API in this module is a configuration set that exposes a specific set of machine-readable data at a specific URL (known as the API’s endpoint). This module allows you to create multiple APIs that you save as database records and/or export using Features. An API record will contain:
- an endpoint URL
- a schema (chosen from the available schemas provided by the additional modules as described above)
- a mapping of fields defined in that schema to Drupal tokens (usually referencing fields from a node)
- optionally, one or more arguments passed through the URL to filter the result set
Usage¶
Installation¶
Enable the main Open Data Schema Map module as usual, and additionally enable any schema modules you will need to create your API.
Creating APIs¶
Navigate to admin/config/services/odsm and click “Add API.”
Give the API a title, machine name, choose which entity type (usually node) and bundle (in DKAN, this is usually Dataset).
You will need to create the API record before adding arguments and mappings.
Arguments¶
The results of the API call can be filtered by a particular field via arguments in the URL. To add an argument, first choose the schema field then, if you are filtering by a custom field API field (ie, a field whose machine name begins with “field_”), identify the database column that would contain the actual argument value. Leave off the field name prefix; for instance, if filtering by a DKAN tag (a term reference field), the correct column is field_tags_tid, so you would enter “tid”. Which Drupal field to use will be extrapolated from the token you map to that schema field.
Field Mapping¶
The API form presents you with a field for each field in your schema. Map the fields using Drupal’s token system. Note: using more than one token in a single field may produce unexpected results and is not recommended.
Multi-value fields¶
For Drupal multi-value entity reference fields, the schema can use an array to instruct the API to iterate over each value and map the referenced data to multiple schema fields. For instance, in the CKAN schema, tags are described like this in schema_ckan.json:
"tags": { "title":"Tags", "description":"", "anyOf": [ { "type": "array", "items": { "type": "object", "properties": { "id": { "title": "UUID", "type": "string" }, "vocabulary_id": { "title": "Vocaulary ID", "type": "string" }, "name": { "title": "Name", "type": "string" }, "revision_timestamp": { "title": "Revision Timestamp", "type": "string" }, "state": { "title": "state", "description": "", "type": "string", "enum": ["uncomplete", "complete", "active"] } } } } ] },
You can choose which of the available multivalue fields on your selected bundle to map to the “tags” array, exposing all of the referenced “tag” entities (taxonomy terms in this example) to use as the context for your token mappings on the schema fields within that array. First, simply choose the multivalue field, leaving the individual field mappings blank, and save the form.
When you return to the tags section of the form after saving, you will now see a special token navigator you can use to find tokens that will work with this iterative approach (using “Nth” in place of the standard delta value in the token):
Customizing¶
Adding new schemas¶
You are not limited by the schemas included with this module; any Open Data schema may be defined in a custom module. Use the open_data_schema_ckan module as a model to get started.
A Note on XML Output¶
Open Data Schema Map provides an XML output format. This is provided via a separate submodule in the
modules/ folder for historical reasons, but should be refactored into the main ODSM module in a future release.
XML endpoints still require a schema defined in JSON. Defining your own XML endpoint may be less than intuitive for the time beind, but take a look at the DCAT schema module for a model.
The ODSM File Cache¶
Open Data Schema Map endpoints that list a large number of entities – Project Open Data (
data.json), the CKAN Package List (
/api/3/action/package_list) and DCAT-AP Catalog (
catalog.xml) – perform a full entity load for each record listed in order to perform the token replacements. This can cause a major performance hit each time any of these URLs is hit on a site with more than a few dozen datasets, and on a site with thousands the response time can be two minutes or more.
Open Data Schema Map includes a file caching function to save a snapshot of any endpoint as a static file to be served up quickly, with very few hits to the database.
File caches can be generated either via a Drush command, or an admin UI. The recommended usage on a production website is to set up a cron job or use a task runner like Jenkins to regenerate the file caches for your performance-intensive endpoints daily (usin the drush command), at whatever time your site experiences the least amount of traffic. The trade-off of course is that any additions or changes to your site will not be reflected on these endpoints until they are regenerated.
Drush Use¶
The Drush command supplied by Open Data Schema Map is
odsm-filecache (also available simply as the alias
odsmfc). This command takes as its argument the machine name for an ODSM endpoint. For example:
drush odsm-filecache data_json_1_1
This will render the full
data_json_1_1 endpoint (which is the
data.json implementation that ships with DKAN) to the filesystem, saving it to:
public://odsm_cache_data_json_1_1
Now a hit to
/data.json will be routed to this file, which in most cases will actually live at
/sites/default/files/odsm_cache_data_json_1_1.
UI Use¶
An administrative UI to regenerate file caches manually is also included. This interface is useful in cases where manual creation of the cache files is sufficient.
To use, navigate to admin/config/services/odsm where there is a column called “Cache” with links to the individual admin pages for specific enpoint caches. If there is no cache the link is labled “none”, otherwise the link is labled with the age of the cache in hours. From the cache admin pages you can create, delete or regenerate the cache.
Schema Validation¶
Both the Project Open Data and DCAT-AP schemas ship with validation tools you can access from the Drupal admin menu. More documentation on this feature coming soon...
Community¶
We are accepting issues for Open Data Schema Map in the DKAN issue queue only. Please label your issue as “Component: ODSM” after submitting so we can identify problems and feature requests faster.
If submitting a pull request to this project, please try to link your PR to the corresponding issue in the DKAN issue thread. | http://docs.getdkan.com/en/latest/components/open-data-schema.html | 2018-06-18T01:38:24 | CC-MAIN-2018-26 | 1529267859923.59 | [array(['../_images/c7ff24e6-0b8c-11e4-92c3-9ba2e163bf56.png',
'screen shot 2014-07-14 at 3 24 03 pm'], dtype=object)
array(['../_images/b3e6ea90-0b8f-11e4-9d9e-33b4515310f0.png',
'screen shot 2014-07-14 at 3 46 39 pm'], dtype=object)
array(['../_images/992d1138-7ac6-11e4-8e7b-bcaefa733648.png',
'Screen Shot 2014-07-14 at 3.55.49 PM.png | uploaded via ZenHub'],
dtype=object)
array(['../_images/c3ca9cd4-0c9f-11e4-8fd0-1ea7c3c8b2b3.png',
'screen shot 2014-07-16 at 12 14 29 am'], dtype=object)
array(['../_images/ad5e3eac-7ac6-11e4-8c7d-91076527c84d.png',
'screen shot 2014-07-16 at 12 22 00 am'], dtype=object)] | docs.getdkan.com |
Quickbooks Desktop Integration with BluBilling
>
Full Synchronization - QuickBooks Export
If you have chosen the Full Synchronization mode, you are looking to export customer level information regarding invoices and payments. In order to sync up to your QuickBooks application, the BluBilling system requires the Item list from QuickBooks to be mapped to the individual charges in BluBilling.Details below:
Step 1: SetUp
From
"Billing Cycles>>Accounting/ERP Integration"
click on
Items.
You will
be prompted to either
Create a New Entry
or to
Import from ERP. Fig 1
Fig 1: Items
If you select the Import from ERP option you will be prompted to enter Start and End Dates.(Fig 2 below) Be sure to enter Start Dates from about the start of your company or some such date in the past so as to include all Items created.Start Import.
Fig 2: Enter the Date Range for Item Import
At this point,
you will be prompted to Start your Web connector Application and Update the BluBilling Integration.(Fig 3 below)
Fig 3: Review items queued for Import/Export
Your next step would be to start the Web Connector from
"Start>> Programs>>QuickBooks>>WebConnector",
select the application and then click "Update Selected" see (Fig 4 below)
Fig 4: Select app and Update Selected for Import of Items
After this step has been completed, in your BluBilling site you will be able to see all the Items that were just imported. From
"Billing Cycles>>Accounting/ERP Integration>>Items"
You could also add all Items manually by clicking on the Create a New Entry (Fig 5 below).
Fig 5: Manually Create an Item
Step 2: Importing Customers or Invoices
Please import customers by following the exact same process as Import Items as explained above. You will begin by clicking on the
"Import Customer"
button from
"Billing Cycles >> Accounting/ERP Integration >> Import Customers".
You will be asked to provide a date range for the import, if you wish to import all active customers from QuickBooks, then simply enter a date range from the past to the current date.
If you also wish to import invoices, (typically as carry forward invoices when switching over to BluBilling for the 1st time), then first perform Step 3 below and then come back and click on the "Import Invoices" button.
Before importing invoices, ensure that (a) all customers have been imported correctly and (b) the Plans & Charges have been mapped to the QuickBooks Items correctly.
Step 3: Mapping the Items imported to the Charges in BluBilling
Next, from
"Plans and Charges>>Plans>>Charges"
Select the Charge and click on the Edit Charge.(NOTE : You will go through the following process for all the charges you would like mapped to QuickBooks) In the
Edit Charge page
, enter the
Item Name
that corresponds with the Charge. This should be entered text box by the text "
Price Code"
(See Fig 7)The
Item Name
can be chosen from the list of Items that you imported from QuickBooks. From
"Billing cycles>>Accounting/ERP Integration>>Items"
(See fig 6 below) Then Save.
Fig 6: The text under "Name" is what needs to be entered in the Edit Charge page under "Price Code"
NOTE
:
Whe
n en
tering the Price Code please make sure that it is entered Exactly as it appears under the Name column in the Item List.
Fig 7: Enter the item name in the text box by the Price Code.
This process should be repeated until all the charges have been mapped with the Item Names.
Step 4: Export of Invoices/Payments/Customers
Once
new customers have been created
and then
Invoices and payments are present, you are then ready to export these to your QuickBooks application. Go to
"Billing cycles>>Accounting/ERP Integration"
and enter the Date range see (Fig 8 below). The date range controls the invoices/payments and any new customers created in BluBilling that you would like to export out to QuickBooks. All invoices and payments and new customers created that fall within the date range will be exported depending on if the Export Invoices/payments/customers check box has been checked.
Fig 8: Enter Date Range for Export of Invoices, Payments and New Customers
At this point,you will be prompted to Review the items that are queued for Export (refer to Fig 3 above) and prompted to start the Web connect. Once Web Connect is running, select the BluBilling app and click
"Update selected"
(refer to fig 4 above).
Once you see the Complete status you can review your QuickBooks application.
NOTE
:
You can have the web connect, connect to the BluBilling app at regular intervals of your choice in order to avoid manually following these steps. We recommend you follow the manual process the first few times before you set it up to run automatically.
In the Web Connector,check the AutoRun box by the Blubilling application and set the interval that you would like the Web connect to run.(see Fig 9 below).
|
Report Abuse
|
Print Page
|
Google Sites | http://docs.blusynergy.com/quickbooks-integration-with-blubilling/full-synchronization-with-quickbooks | 2020-01-17T19:20:22 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.blusynergy.com |
I am having issues, where do I leave a bug report?
Please leave bug reports either on Juno.jl GitHub repository or at Julia's discussion forum under the
Tooling ▶ Juno category.
Juno could not be started.
Go to
Packages > Juno > Settings and change
Julia Path to point to the Julia binary.
The installation of some Atom packages fails. What can I do?
It is possible that your Antivirus Software prevents certain files to be downloaded or executed that are necessary for Juno to function. Consider disabling antivirus software's real time monitoring for the duration of the installation. For certain scanners (Avast and McAffee) it might also be necessary to exclude
C:\Users\you\.atom\packages\julia-client\node_modules\node-pty-prebuilt\build\Release\winpty-agent.exe
from the real time monitoring after installation. This is an upstream issue which should hopefully be resolved soon.
Juno doesn't work properly after an Atom update. What do I do?
Check whether you have a little red bug symbol in the status bar (lower right):
If so, click on it and then click on
Rebuild Packages:
Restart Atom and you should be good to go!
Juno doesn't work properly after some Atom packages were updated. What do I do?
There's a chance the update of julia-client failed. To get a clean re-install while preserving any setting you might have changed, try the following steps:
- Close all Atom instances.
- Start a terminal (e.g.
cmdon Windows or the Terminal App on MacOS)
- Execute
apm uninstall julia-client.
- Execute
apm install julia-client.
- Start Juno. Everything should work again.
The integrated REPL/terminal is unbearably slow. How do I fix it?
Enable the
Fallback Renderer option in the
Terminal Options in julia-client's settings and restart Atom for good measure. Installation Instructions Installation Instructions. Note that this will require you to be on master for the Julia and Atom packages, so things will be changing likely before documentation changes.
How do I execute code on Juno startup?
Much like Julia has its
~/.julia/config/startup.jl file for executing code on startup, Juno will execute code contained in
~/.julia/config/juno_startup.jl after Julia has been booted and a connection with the editor is established. This allows running code on startup that queries the frontend, e.g.
Juno.syntaxcolors. | http://docs.junolab.org/latest/man/faq/ | 2020-01-17T19:57:49 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['../../assets/native_bug.png', 'native bug'], dtype=object)
array(['../../assets/native_update.png', 'native update'], dtype=object)] | docs.junolab.org |
Conda¶
Conda is an alternative package manger to PyPi. It comes with many features that PyPi packaging does not handle well such as including compiled libraries and c dependencies.
While traditional Python packages are stored in pypi.org conda python packages are stored at anaconda.org. These steps do not require that you have already deployed a package to PyPi.
First create an account through
<>. Unlike PyPi there is no test repo to submit
your package to. Anaconda takes a different philosophy where each user
has a collection of packages and jupyter notebooks in their repo. The
approach I will show you does not require that you have
conda
installed on your machine. If you would like to experiment with the
build tool I would recommend pulling the continuum
conda build
docker container continuumio/miniconda3. The default
continuumio/anaconda3 docker environment is over 3.5 GB
unzipped. Why are the continuum docker containers so large?
docker pull continuumio/miniconda3 docker run -i -t continuumio/miniconda3 /bin/bash
Once you start the docker container you can do the following steps for
package deployment to conda. These steps will be automated later with
a Gitlab build script. In order to upload packages you will either
need to login to your account via
anaconda login or create an
account token will all account access. I would recommend creating an
account token so that you can revoke access at any time. To create an
account token go to
settings->access on anaconda.org when you are
logged in.
- conda install anaconda-client setuptools conda-build -y
- python setup.py bdist_conda
- anaconda -t $ANACONDA_TOKEN upload -u $ANACONDA_USERNAME /opt/conda/conda-bld/linux-64/<package>-<version>-<pyversion>.tar.bz2
The first step ensures that all packages are the right version and we
have the command line anaconda tool. Anaconda has it hidden in their
documentation that they have a convenient build tool for python
packages
that does not require a recipe. When running in a conda environment
they have overridden setuptools to include bdist_conda for building
conda packages. The build command will build the package, run tests,
and check that each command created exits. After your package is built
you can now upload to conda. If you are building within a docker
container chances are that their is only one conda build so you can
shorten the upload command to anaconda upload
/opt/conda/conda-bld/linux-64/<package>*.tar.bz2. Otherwise you will
have to chose the build that is provided at the end of the
python
setup.py bdist_conda output.
From some of my initial tests I was surprised that many packages available on PyPi are not available on conda and thus made the builds fail. These errors are most likely due to me know understanding the conda tools well. If your build succeeded you should see the package listed on<username>.
Since we are all about automation lets make this process automatic on Gitlab!
variables: TWINE_USERNAME: SECURE TWINE_PASSWORD: SECURE TWINE_REPOSITORY_URL: ANACONDA_USERNAME: SECURE ANACONDA_TOKEN: SECURE stages: - deploy deploy_conda: image: continuumio/miniconda3:latest stage: deploy script: - conda install anaconda-client setuptools conda-build -y - python setup.py bdist_conda - anaconda -t $ANACONDA_TOKEN upload -u $ANACONDA_USERNAME /opt/conda/conda-bld/linux-64/pypkgtemp*.tar.bz2 only: - /^v\d+\.\d+\.\d+([abc]\d*)?$/ # PEP-440 compliant version (tags) | https://costrouc-python-package-template.readthedocs.io/en/latest/conda.html | 2020-01-17T18:37:29 | CC-MAIN-2020-05 | 1579250590107.3 | [] | costrouc-python-package-template.readthedocs.io |
Fluent Bit is distributed as td-agent-bit package and is available for the latest stable Ubuntu system: Xenial Xerus. This stable Fluent Bit distribution package is maintained by Treasure Data, Inc.
The first step is to add our server GPG key to your keyring, on that way you can get our signed packages:
$ wget -qO - | sudo apt-key add -
On Ubuntu, you need to add our APT server entry to your sources lists, please add the following content at bottom of your /etc/apt/sources.list file:
deb bionic main
deb xenial main
Now let your system update the apt database:
$ sudo apt-get update
Using the following apt-get command you are able now to install the latest td-agent-bit:
$ sudo apt-get install td-agent-bit
Now the following step is to instruct systemd to enable the service:
$ sudo service td-agent-bit start
If you do a status check, you should see a similar output like this:
sudo service td-agent-bit status● td-agent-bit.service - TD Agent BitLoaded: loaded (/lib/systemd/system/td-agent-bit.service; disabled; vendor preset: enabled)Active: active (running) since mié 2016-07-06 16:58:25 CST; 2h 45min agoMain PID: 6739 (td-agent-bit)Tasks: 1Memory: 656.0KCPU: 1.393sCGroup: /system.slice/td-agent-bit.service└─6739 /opt/td-agent-bit/bin/td-agent-bit -c /etc/td-agent-bit/td-agent-bit.conf...
The default configuration of td-agent-bit is collecting metrics of CPU usage and sending the records to the standard output, you can see the outgoing data in your /var/log/syslog file. | https://docs.fluentbit.io/manual/installation/ubuntu | 2020-01-17T18:31:00 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.fluentbit.io |
It is possible to create Render Textures where each pixel contains a high-precision Depth value. This is mostly used when some effects need the Scene’s Depth to be available (for example, soft particles, screen space ambient occlusion and translucency would all need the Scene’s Depth). Image Effects often use Depth Textures too.: would render depth of its GameObjects: = UnityObjectToClipPos(v.vertex); UNITY_TRANSFER_DEPTH(o.depth); return o; } half4 frag(v2f i) : SV_Target { UNITY_OUTPUT_DEPTH(i.depth); } ENDCG } } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2018.1/Documentation/Manual/SL-DepthTextures.html | 2020-01-17T20:19:34 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.unity3d.com |
The definition of a single UV coordinate set asset.
The uvs array is assumed to be given in the same order as the vertices in any associated geometry. Any vertices that have more than one UV coordinate associated with them (e.g. along a UV boundary) should appear in the polygon_vertex_indices array. For vertices that appear in the polygon_vertex_indices array, their entry in the uvs array should correspond to a “primary” value for the UV's for that vertex. The “primary” value may be arbitrary for most applications.
For a geometry with unique UV's at each vertex, the uvs array size will match the vertex array size of the corresponding geometry. If the geometry has vertices with more than one UV associated with them, there will be an entry in the polygon_vertex_indices array for each additional UV associated with each vertex.
{ "id" : "default", "name" : "default", "label" : "Default UVs", "vertex_count" : 266, "uvs" : [ [ 0.02083333, 0 ], [ 0.02083333, 1 ], [ 0, 0.08333334 ], [ 0, 0.1666667 ], [ 0, 0.25 ], [ 0, 0.3333333 ], [ 0, 0.4166667 ], [ 0, 0.5 ], [ 0, 0.5833333 ], [ 0, 0.6666667 ], [ 0, 0.75 ], [ 0, 0.8333333 ], [ 0, 0.9166667 ] ], "polygon_vertex_indices" : [ [ 12, 0, 266 ], [ 24, 0, 4 ], [ 36, 0, 7 ], [ 48, 0, 9 ], [ 60, 0, 2 ] ] } | http://docs.daz3d.com/doku.php/public/dson_spec/object_definitions/uv_set/start | 2020-01-17T20:10:28 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.daz3d.com |
.
Check to see if your server is preventing the wp-cron.php from being triggered.
Check your import file in the media library by clicking on it. It should show what the file size is and the current offset. The offset is where the background process is shown while importing the file. If it is shown as “zero” then your WordPress Uploads directory for the media library has been set to non-standard restricted access meaning the background location import process is not allowed to open the file after it was uploaded.
How it works
With SLP /Power 4.9 (or higher) the import function is a 3-part process.
- Step 1 : This is the ONLY interactive portion with the browser where the file is read from your system and uploaded your WordPress site
- Step 2: This is when a background process, using WordPress Cron, reads the uploaded file and starts importing the locations WITHOUT geocoding them (that is next).
- Step 3: Once the import files are loaded and populated in the location list, another background WordPress Cron begins geocoding the locations.
Process improvement
This import approach was created to prevent the import of locations process stopping before it gets all your locations populated to the database.
The 3 step process was done to allow large file uploads (10,000+ locations) on smaller hosts or even moderate size imports (600+ locations) and on severely under-powered hosts. If you are regularly importing thousands of locations , you may want to opt for a faster server or a dedicated server.
The improved functionality of the import process described above should alleviate most issues found when utilizing under powered servers or shared host servers, thus eliminating “half-imported” files as a result of servers timing out before the job was finished.
| https://docs.storelocatorplus.com/power-import-and-wpslp/ | 2020-01-17T19:42:19 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['https://i1.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/11/2017-11-30_wpenvironment.jpg?resize=281%2C223&ssl=1',
None], dtype=object)
array(['https://i1.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/11/2017-11-30_upload-media-library.jpg?resize=150%2C150&ssl=1',
None], dtype=object)
array(['https://i0.wp.com/docs.storelocatorplus.com/wp-content/uploads/2017/11/2017-11-30-import-messages-and-csv-file-headers.jpg?resize=300%2C149&ssl=1',
None], dtype=object) ] | docs.storelocatorplus.com |
Both Cascade and Niagara can be used to make visual effects (VFX) inside of Unreal Engine (UE4), but the way you use Niagara to create and adjust VFX is very different from how you use Cascade.
Niagara is our next-generation VFX system. With Niagara, the technical artist has the ability to create additional functionality on their own, without the assistance of a programmer. We have made the system as adaptable and flexible as possible, while making it easy to use and understand.
Core Niagara Components
In the Niagara VFX system, there are four core components:
Systems
Emitters
Modules
Parameters
Systems
Niagara systems are containers for multiple emitters, all combined into one effect. For example if you are making a firework effect, you might want multiple bursts in your firework�so you would create multiple emitters and place them all into a Niagara system called Firework. In the Niagara System Editor, you can modify or overwrite anything in the emitters or modules that are in the system. The Timeline panel in the System Editor shows which emitters are contained in the system, and can be used to manage those emitters. The Emitter Editor and System Editor are mostly the same. For more information on the System Editor interface, see the Niagara System and Emitter Editor Reference. UI elements that act differently in the System Editor are indicated in each section.
Emitters
Niagara emitters are containers for modules. They are single purpose, but they are also re-usable. One unique thing about Niagara emitters is that you can create a simulation using the module stack, and then render that simulation multiple ways in the same emitter. Continuing our firework effect example, you could create one emitter that had a sprite renderer for the for the spark, and a ribbon renderer for the stream of light following the spark. For more information on the Niagara Emitter Editor interface, see the Niagara System and Emitter Editor Reference.
Modules
Niagara modules are the base level of Niagara VFX. Modules are the equivalent of Cascade's behaviors. Modules speak to common data, encapsulate behaviors, stack with other modules, and write functions. Modules are built using High-Level Shading Language (HLSL), but can be built visually in a Graph using nodes. You can create functions, include inputs, or write to a value or parameter map. You can even write HLSL code inline, using the CustomHLSL node in the Graph. For more information on the default modules available in a Niagara emitter, see the Niagara System and Emitter Module Reference.
Parameters and Parameter Types
Parameters are an abstraction of data in a Niagara simulation. Parameter types are assigned to a parameter to define the data that parameter represents. There are four types of parameters:
Primitive: This type of parameter defines numeric data of varying precision and channel widths.
Enum: This type of parameter defines a fixed set of named values, and assumes one of the named values.
Struct: This type of parameter defines a combined set of Primitive and Enum types.
Data Interfaces: This type of parameter defines functions that provide data from external data sources. This can be data from other parts of UE4, or data from an outside application.
You can add a custom parameter module to an emitter by clicking the Plus button (+) and selecting Create New Parameter. You can also add a custom Set Variable module to an emitter by clicking the Plus button (+) and selecting Set Specific Parameter. For more information on these and other emitter modules, see the Niagara System and Emitter Module Reference.
A module is an item, but an item is not a module. Modules are editable assets a user can create. Items refer to parts of a system or emitter that the user cannot create. Examples of items are system properties, emitter properties, and renderers.
Niagara Stack Model and Stack Groups
Particle simulation in Niagara operates as a stack�simulation flows from the top of the stack to the bottom, executing programmable code blocks called modules in order. Crucially, every module is assigned to a group that describes when the module is executed.
Modules that are part of the System groups (such as System Spawn or System Update) execute first, handling behavior that is shared with every emitter. Then, modules and items in the Emitter groups (such as Emitter Spawn or Emitter Update) execute for each unique emitter. Following this, parameters in the Particle groups execute for each unique particle in an individual emitter. Finally, Renderer group items describe how to render each emitter�s simulated particle data to the screen.
Templates and Wizards
When you first create a Niagara emitter or Niagara system, a dialog displays offering several options for what kind of emitter or system you want to create. One of these options is to choose a template. These templates are based on some common base effects, and have various modules placed into the stack already. You can change any of the parameters in the template. You can add, modify or delete any of the modules. In a system template, you can also add, modify or delete any of the emitters. Templates are just there to jumpstart your creativity and give you something that you can work with immediately.
Emitter Wizard
The Emitter Wizard offers the following options for creating a new emitter:
Create a new emitter from an emitter template: If you select this option, you can choose from a list of templates that present several types of commonly used effects. In a large development studio, art leads or creative directors can curate the list of templates, ensuring that the company�s best practices are baked into the templates. These templates also offer a great starting place if you are new to UE4.
Click image for full size.
Copy an existing emitter from your project content: If you select this option, you can create a new emitter that is a copy of an emitter you already created. This can be useful if you need to create several similar emitters. Type the name of the emitter you want to copy into the Search bar, then choose from the list of results.
Click image for full size.
Inherit from an existing emitter in your project content: If you select this option, you can create a new emitter that inherits properties from an existing emitter. This option makes the new emitter a child of the existing emitter you selected. If you need many emitters that all have certain properties in common, this is a good option to choose. You can make changes to the parent emitter, and all child emitters will reflect those changes.
Click image for full size.
Create an empty emitter with no modules or renderers (Advanced): If you select this option, the new emitter will have no modules, items or renderers included. You might find this option useful if you want to build something from the ground up. However, using this option requires that you understand how the Niagara system works, and have a clear vision for what you want the emitter to do.
Click image for full size.
System Wizard
The System Wizard offers the following options for creating a new system:
Create a new system from a system template: If you select this option, you can choose from a list of templates that present several commonly used effect systems. As with emitter templates, this list can be curated by art leads or creative directors. If you are new to UE4, this option will give you an example of how FX systems are built in Niagara.
Click image for full size.
Create a new system from a set of selected emitters: If you select this option, a list of available emitters displays below the Search bar. Select the emitters you want to include in the new system, and click the green Plus (+) button to add them. Use this option if you�ve already created all the emitters you need for the system.
Click image for full size.
Copy an existing system from your project content: If you select this option, a list of existing systems displays below the Search bar. Choose one of them to copy.
Click image for full size.
Create an empty system with no emitters: If you select this option, your system is completely empty. This option is useful if you want to create a system that is totally different from your other systems.
Click image for full size.
Niagara VFX Workflow
Create a Module
Module function flow
Incoming parameters in the parameter map.
Module action.
Write work back into the parameter map.
Module output.
Modules accumulate to a temporary namespace, then you can stack more modules together. As long as they contribute to the same attribute, the modules will stack and accumulate properly.
When writing a module, there are many functions available for you to use:
Boolean operators
Math expressions
Trigonometry expressions
Customized functions
Nodes that make boilerplate functions easier
Once you create a module, anyone else can use it.
Modules all use HLSL. The logic flow is as follows:
Graph logic
HLSL Cross-compiler
Target GPU Platform
SIMD Optimized Virtual Machine (CPU)
Create Emitters
When you create your emitters, you place modules in the stack that define how that effect will look, what actions it will take, and so on. In the Emitter Spawn section, place modules that define what will happen when the emitter first spawns. In the Emitter Update section, place modules that affect the emitter over time. In the Particle Spawn section, place modules that define what will happen when particles spawn from the emitter. In the Particle Update section, place modules that affect particles over time. In the Event Handlers section, you can create events that determine how particles interact with each other, or how they react to information from another emitter or system. You can also create listening events, which another emitter or system can react to.
Create Systems
Combine individual emitters into one system, which displays the entire visual effect you are trying to create. There are modules specific to systems, and some elements of the editor act differently when you are editing a system instead of an emitter. In the System Editor, you can change or override any modules in any emitter included in the system. You can also manage timing for the included emitters using the System Editor's Timeline panel.
Niagara Paradigms
Inheritance
With a flat hierarchy, you cannot effectively locate and use the assets you already have in your library, which leads to people recreating those assets. Duplication of effort lowers efficiency and increases costs.
Hierarchical inheritance increases discoverability and enables effective reuse of existing assets.
Anything inherited can be overridden for a child emitter in a system.
Modules can be added, or can be reverted back to the parent value.
This is also true with emitter-level behaviors such as spawning, lifetime, looping, bursts, and so on.
Dynamic Inputs
Dynamic inputs are built the same way modules are built.
Dynamic inputs give users infinite extensibility for inheritance.
Instead of acting on a parameter map, dynamic inputs act on a value type.
Any value can be driven by Graph logic and user-facing values.
Dynamic inputs have almost the same power as creating modules, but can be selected and dropped into the stack without actually creating new modules.
Micro Expressions
Any inline value can be converted into an HLSL expression snippet.
Users can access any variable in the particle, emitter, or system, as well as any HLSL or VM function.
This works well for small, one-off features that do not need a new module.
Events
Events are a way to communicate between elements (such as particles, emitters, and systems).
Events can be any kind of data, packed into a payload (such as a struct) and sent. Then anything else can listen for that event and take action.
Options you can use:
Run the event directly on a particle by using Particle.ID.
Run the event on every particle in a System.
Set particles to spawn in response to the event, then take some action on those particles.
Events are a special node in the graph (structs). How to use the Event node:
Name the event.
Add whatever data you want to it.
Add an Event Handler into the Emitter stack.
Set the options for the Event Handler.
There is a separate execution stack for events.
You can put elaborate graph logic into Event Handlers.
You can have a whole particle system set up, with complex logic, and then have a whole separate set of behaviors that occur when the event triggers.
Data Interfaces
There is an extensible system to allow access to arbitrary data.
Arbitrary data includes mesh data, audio, external DDC information, code objects, and text containers.
Data interfaces can be written as plugins for greater extensibility moving forward.
Users can get any data associated with a skeletal mesh by using a skeletal mesh data interface.
Houdini
Using Houdini, you can calculate split points, spawn locations, impact positions, impact velocity, normals and so on.
You can then export that data from Houdini to a common container format (CSV).
You can import that CSV into Niagara in your UE4 project. | https://docs.unrealengine.com/en-US/Engine/Niagara/Overview/index.html | 2020-01-17T18:18:41 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['./../../../../Images/Engine/Niagara/Overview/CreateAModule.jpg',
'Module Function Flow'], dtype=object)
array(['./../../../../Images/Engine/Niagara/Overview/HLSLFlow.jpg',
'HLSL Logic Flow'], dtype=object) ] | docs.unrealengine.com |
Should you purchase a single key/subscription, or 5-site, and later want to buy more, you can do this and pay only the difference between packages.
How to upgrade your subscription/key ↑ Back to top
1/ Find your key at My Subscriptions via your WooCommerce.com Account page.
2/ Go to the extension or payment gateway page you’d like to upgrade, and you’ll see a link in the Subscription Options product box that says UPGRADE.
3/ Select Upgrade Your Existing License to enter your current site key.
4/ Choose the subscription you wish to upgrade to, and Add to Cart.
5/ Pay only the difference between subscription packages in checkout.
Questions ↑ Back to top
Need some assistance? Get in touch with a Happiness Engineer via the Help Desk. | https://docs.woocommerce.com/document/upgrading-keys-and-subscriptions/ | 2020-01-17T19:51:43 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['https://docs.woocommerce.com/wp-content/uploads/2013/02/upgrade.png',
'Upgrade'], dtype=object) ] | docs.woocommerce.com |
Discos from Scratch¶
Discos is the control software produced for the Sardinia Radio Telescope, Medicina and Noto. It is a distributed system based on ACS (ALMA Common Software), commanding all the devices of the telescope and allowing the user to perform single-dish observations.
If the system has some problems that cannot be resolved with the help of the previous section, you probably need to restart Discos.
Before restarting Discos, you have to follow the procedure of shutdown of Discos. | https://srt-procedures.readthedocs.io/en/latest/NuragheScratch/index.html | 2020-01-17T18:26:11 | CC-MAIN-2020-05 | 1579250590107.3 | [] | srt-procedures.readthedocs.io |
Reporters Module¶
After the analysis process, for each of the violations detected, we know the detail information of the node, the rule, and sometimes the diagnostic message. Reporters will take the information, and convert them into human readable reporters.
Note
In some cases, the outputs are not directly readable for people, instead they will be rendered by other tools for better representations. For example, continuous integration systems can understand a specific type of output, and convert it into graphic interface on its dashboard. | http://docs.oclint.org/en/stable/internals/reporters.html | 2020-01-17T19:00:25 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.oclint.org |
Gets a command to change the subscript formatting of characters in a selected range.
readonly changeFontSubscript: ChangeFontSubscriptCommand
Call the execute method to invoke the command. The method checks the command state (obtained via the getState method) to determine whether the action can be performed.
Usage examples:
richEdit.commands.changeFontSubscript.execute(); richEdit.commands.changeFontSubscript.execute(true);
This command applies settings to all characters in the selection. If it is collapsed (the selected range's length equals 0) - the settings are applied to the nearest word. | https://docs.devexpress.com/AspNet/js-RichEditCommands.changeFontSubscript | 2020-01-17T18:57:59 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.devexpress.com |
A powerful way to drive traffic to an e-commerce site is to run an associate scheme (also known as an affiliate scheme). Whilst there are third-party systems which provide this service, you have the most control if you run the scheme yourself through the site.
The essence of an associate scheme is that you get people (associates) to place links on their website which link to your site. If people follow those links, and then make a purchase, then the associate is rewarded with a percentage of the purchase price as their commission.
Their commission builds up in an account, managed by the site, and periodically, if the amount has risen above a threshold, you pay out their commission.
The associate scheme described here automates everything apart from the paying out, that part being something you will generally want to manage manually anyway.
See navigation in right-column >> | https://docs.neatcomponents.com/associates-system | 2020-01-17T19:00:13 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.neatcomponents.com |
$ sudo /usr/local/bin/etcd-snapshot-backup.sh ./assets/backup/snapshot.db.
Follow these steps to back up etcd data by creating a snapshot. This snapshot can be saved and used at a later time if you need to restore etcd.
You should only save a snapshot from a single master host. You do not need a snapshot from each master host in the cluster.
SSH access to a master host.
Access a master host as the root user.
Run the
etcd-snapshot-backup.sh script and pass in the location to save the etcd snapshot to.
$ sudo /usr/local/bin/etcd-snapshot-backup.sh ./assets/backup/snapshot.db
In this example, the snapshot is saved to
./assets/backup/snapshot.db on the master host. | https://docs.openshift.com/container-platform/4.1/backup_and_restore/backing-up-etcd.html | 2020-01-17T19:49:00 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.openshift.com |
If you are writing/using an application (PHP or via fcgi) that needs the Authorization header, you will run into some problems. Apache will, by default, strip the Authorization header from requests before passing them to PHP or to an (f)cgi script. In order to receive the Authorization header, you will have to add an .htaccess file to the webroot with the following contents:
CGIPassAuth on
If the project already has an .htaccess file, just append it.
More information can be found on | https://docs.ulyssis.org/index.php?title=Accessing_the_Authorization_header&oldid=654 | 2020-01-17T20:06:52 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.ulyssis.org |
Recommended ↑ Back to top
The first step in setting up your WooCommerce-powered online store is to install WordPress and the WooCommerce plugin version less than 7.2+ and MySQL version less than 5.6 no longer receive active support and many versions are at End Of Life and are therefore no longer maintained. As such, using outdated and unsupported versions of MySQL and PHP may expose your site to security vulnerabilities.. | https://docs.woocommerce.com/document/server-requirements/?utm_source=wp%20org%20repo%20listing&utm_content=3.6 | 2020-01-17T18:36:56 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.woocommerce.com |
NIRCam Imaging Sensitivity
The exposure times vs. flux estimates at signal-to-noise ratio = 10 in NIRCam images presented here have been obtained using the Pandeia JWST Exposure Time Calculator (ETC) Python engine.
On this page
The Pandeia Exposure Time Calculator should be used for all observation planning. This article provides ETC results determined using the Python engine to loop through many calculations of signal-to-noise ratio (SNR) for various readout patterns and exposure times, given the assumptions detailed below. These calculations are then interpolated to determine depth (SNR = 10) vs. exposure time. The script is available on Github.
Depth vs. exposure time
The NIRCam Imaging overview and NIRCam Sensitivity articles show SNR = 10 sensitivity estimates for imaging in all NIRCam filters given a total exposure time of 10 ks (166.7 minutes = 2.78 hours). Figure 1 shows similar estimates for a range of exposure times in seven wide filters and one medium filter.
Sensitivity estimates can vary significantly depending on the background and the assumed photometric aperture sizes as discussed below. Please use the Pandeia Exposure Time Calculator to plan your observations.
Background
JWST's background model varies with the target coordinates (RA, Dec) and time of year. These calculations assume the fiducial "1.2 × MinZodi" (1.2 times the minimum zodiacal light) background at RA = 17:26:44, Dec = -73:19:56 on June 19, 2019, as used in the NIRCam Imaging article. The background model for these observations must be generated using the online ETC GUI and then imported into the Python ETC engine.
Exposure time
Recommended readout patterns and exposure times used for the calculations on this page are described below. They are based on ETC calculations that show these to yield optimal signal-to-noise ratios for a given exposure time. RAPID 1, BRIGHT2, SHALLOW4, MEDIUM8, and DEEP8 yield high signal-to-noise ratio most efficiently and are preferred to maximize depth. The other readout patterns (BRIGHT1, SHALLOW2, MEDIUM2, DEEP2) may be preferred in some cases, for example, to provide a greater dynamic range (with a shorter first group) to sample bright stars before saturation.
Note we do not generally recommend integration times greater than 1000 seconds. After 1000 seconds, the majority of pixels would likely be affected by cosmic rays. See discussion in MIRI Cross-Mode Recommended Strategies.
Table 1. Recommended readout specifications for maximal depth in a given exposure time
1Bold italics style indicates words that are also parameters or buttons in software tools (like the APT and ETC). Similarly, a bold style represents menu items and panels. | https://jwst-docs.stsci.edu/near-infrared-camera/nircam-predicted-performance/nircam-imaging-sensitivity | 2020-01-17T18:18:38 | CC-MAIN-2020-05 | 1579250590107.3 | [] | jwst-docs.stsci.edu |
Getting started¶
To start using wagtailtrans in your project, take the following steps:
Installation¶
- Install Wagtailtrans via
pip
$ pip install wagtailtrans
- Add
wagtailtrans,
wagtail.contrib.modeladminand if you’re using languages per site
wagtail.contrib.settingsto your
INSTALLED_APPS:
INSTALLED_APPS = [ # ... 'wagtail.contrib.modeladmin', 'wagtail.contrib.settings', # Only required when WAGTAILTRANS_LANGUAGES_PER_SITE=True 'wagtailtrans', # ... ]
Note
As of Wagtailtrans 1.0.3 the custom Language management views are replaced with with
wagtail.contrib.modeladmin
This needs to be added to
INSTALLED_APPS as well.
- Add
wagtailtrans.middleware.TranslationMiddlewareto your
MIDDLEWARE_CLASSES:
MIDDLEWARE_CLASSES = [ # ... 'django.contrib.sessions.middleware.SessionMiddleware', 'wagtail.core.middleware.SiteMiddleware', 'wagtailtrans.middleware.TranslationMiddleware', 'django.middleware.common.CommonMiddleware', # ... ]
Note
Keep in mind
wagtailtrans.middleware.TranslationMiddleware is a replacement for
django.middleware.locale.LocaleMiddleware.
Note
It relies on
wagtail.core.middleware.SiteMiddleware, which should come before it.
See for more information.
Configuration¶
Before we start incorporating wagtailtrans in your project, you’ll need to configure wagtailtrans for the behavior that best suits the need of your project. The required settings to consider here are:
-
WAGTAILTRANS_SYNC_TREE
-
WAGTAILTRANS_LANGUAGES_PER_SITE
Both settings are mandatory but provided with a default value, so if you want synchronized trees and no languages per site, you’re good to go from here.
Incorporating¶
To start using wagtailtrans we first need to create a translation home page. This page will route the requests to the homepage in the right language. We can create a translation site root page by creating the
wagtailtrans.models.TranslatableSiteRootPage as the first page under the root page.
In this example we will also make a
HomePage which will be translatable. This is done by implementing the
wagtailtrans.models.TranslatablePage instead of Wagtail’s
Page
from wagtailtrans.models import TranslatablePage class HomePage(TranslatablePage): body = RichTextField(blank=True, default="") image = models.ForeignKey('wagtailimages.Image', null=True, blank=True, on_delete=models.SET_NULL, related_name='+') content_panels = Page.content_panels + [ FieldPanel('body'), ImageChooserPanel('image') ] subpage_types = [ # Your subpage types. ]
This will create our first translatable page. To start using it we first need to migrate our database
$ python manage.py makemigrations $ python manage.py migrate
Now run the server and under the page
Root create a
TranslatableSiteRootPage (MySite).
Next we need to create a site and point it’s
root_page to our
TranslatableSiteRootPage (MySite).
We now have the basics for a Translatable Site. | https://wagtailtrans.readthedocs.io/en/2.0.6/getting_started.html | 2020-01-17T19:50:43 | CC-MAIN-2020-05 | 1579250590107.3 | [array(['_images/site.png',
'Create your site and select ``MySite`` as root page.'],
dtype=object) ] | wagtailtrans.readthedocs.io |
Table of Contents
Product Index
The finest clothes for David 5 and Genesis. Suitable for royal weddings, tea with the Queen or just looking awesome. Comes with 6 materials for the clothes and 6 for the TopHat. Fits David 5 and most of Genesis shapes, even the females!
Visit our site for technical support questions or concerns. | http://docs.daz3d.com/doku.php/public/read_me/index/16057/start | 2020-01-17T18:48:52 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.daz3d.com |
Product Index
Grow your business! This is a modular set of Four industrial buildings complete with interior spaces and working doors, two company signs, wall section prop and industrial alley prop. Also include is a pavement expansion prop and two quick load presets. Works great with the Forgotten Factory set or on it's. | http://docs.daz3d.com/doku.php/public/read_me/index/52439/start | 2020-01-17T20:42:17 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.daz3d.com |
Deploying the DaRT Recovery Image
After you have created the International Organization for Standardization (ISO) file that contains the Microsoft Diagnostics and Recovery Toolset (DaRT) 10 recovery image, you can deploy the DaRT 10 recovery image throughout your enterprise so that it is available to end users and help desk workers. There are four supported methods that you can use to deploy the DaRT recovery image. To review the advantages and disadvantages of each method, see Planning How to Save and Deploy the DaRT 10 Recovery Image.
Burn the ISO image file to a CD or DVD by using the DaRT Recovery Image wizard
Save the contents of the ISO image file to a USB Flash Drive (UFD) by using the DaRT Recovery Image wizard
Extract the boot.wim file from the ISO image and deploy as a remote partition that is available to end-user computers
Extract the boot.wim file from the ISO image and deploy in the recovery partition of a new Windows 10 installation
Important The DaRT Recovery Image Wizard provides the option to burn the image to a CD, DVD or UFD, but the other methods of saving and deploying the recovery image require additional steps that involve tools that are not included in DaRT. Some guidance and links for these other methods are provided in this section.
Deploy the DaRT recovery image as part of a recovery partition
After you have finished running the DaRT Recovery Image wizard and created the recovery image, you can extract the boot.wim file from the ISO image file and deploy it as a recovery partition in a Windows 10 image.
How to Deploy the DaRT Recovery Image as Part of a Recovery Partition
Deploy the DaRT recovery image as a remote partition
You can host the recovery image on a central network boot server, such as Windows Deployment Services, and allow users or support staff to stream the image to computers on demand.
How to Deploy the DaRT Recovery Image as a Remote Partition
Other resources for deploying the DaRT recovery image
Feedback | https://docs.microsoft.com/en-us/microsoft-desktop-optimization-pack/dart-v10/deploying-the-dart-recovery-image-dart-10 | 2020-01-17T20:16:23 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.microsoft.com |
Once you've installed the Datasmith Exporter plugin for Revit, you'll have a new option available in the Add-Ins ribbon that you can use to export a selected 3D view to a .udatasmith file.
Follow the steps below in Revit to export your scene using this file type.
In the Project Browser, select the 3D View that you want to export.
The Datasmith Exporter plugin uses the visibility settings defined for the current 3D View to determine what parts of the scene it should export. For details, see Using Datasmith with Revit.
Open the Add-Ins ribbon. In the Unreal Datasmith section, click Export 3D View.
In the Export 3D View to Unreal Datasmith window, browse to the location you want to save your .udatasmith file, and use the File Name box to give your new file a name.
Click Save.
End Result
You should now be ready to try importing your new .udatasmith file into the Unreal Editor. See Importing Datasmith Content into Unreal Engine 4 and Reimporting Datasmith Content.. | https://docs.unrealengine.com/en-US/Engine/Content/Importing/Datasmith/SoftwareInteropGuides/Revit/ExportingDatasmithContentfromRevit/index.html | 2020-01-17T19:19:15 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.unrealengine.com |
TOPICS×
Create Schema from the acroform
The next step is to create a schema from the Acroform created in the earlier step. A sample application is provided to create the schema as part of this tutorial. To create the schema, please follow the following instructions:
- Login to CRXDE Lite
- Change the saveLocation to an appropriate folder on your hard drive. Make sure the folder you are saving to is already created.
- Point your browser to Create XSD page hosted on AEM.
- Drag and drop the Acroform.
- Check the folder specified in Step 3. The schema file is saved to this location.
Upload the Acroform
For this demo to work on your system, you will need to create a folder called acroforms in AEM Assets. Upload the Acroform into this acroforms folder.
The sample code looks for the acroform in this folder. The acroform is needed to merge the adaptive form's submitted data. | https://docs.adobe.com/content/help/en/experience-manager-learn/forms/acroform-aem-forms-sign/part2.html | 2020-01-17T19:01:08 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.adobe.com |
This document will show you how to use SQL Server Management Studio to copy a DriveWorks 6 group ready for DriveWorks 7 migration.
To perform the steps in this document you will need one of the SQL Server Management Studio editions mentioned in the Applies To section. This guide uses the SQL Server 2005 Management Studio Express edition, which is freely available from the link given in the Applies To section.
There are a number of ways of copying a database in SQL Server. The method used in this document is to backup the database, and restore it with a new name. This method works across all SQL Server editions and is simple to perform.
If you are working with a database on a server managed by an IT department, make sure you consult your IT department before following any of these steps. | https://docs.driveworkspro.com/Topic/HowToCopyaDriveWorks6GroupInSQLServer | 2020-01-17T20:43:25 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.driveworkspro.com |
System Manager includes several features that help you to create, edit, or delete quotas. You can create a user, group, or tree quota and you can specify quota limits at the disk and file levels. All quotas are established on a per-volume basis.
After creating a quota, you can perform the following tasks: | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-960/GUID-564CCE3E-73E5-421D-B790-531A3AF93746.html | 2020-01-17T18:41:56 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
POST /oauth2/session validate the user and fetch information about the user from the OAuthProvider. If Synapse can match the user's information to a Synapse user then a session token for the user will be returned. Note: If Synapse cannot match the user's information to an existing Synapse user, then a status code of 404 (not found) will be returned. The user should be prompted to create an account.
Resource URL | https://docs.synapse.org/rest/POST/oauth2/session.html | 2020-01-17T19:58:52 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.synapse.org |
Amazon VPC Limits
The following tables list the limits for Amazon VPC resources per region for your AWS account. Unless indicated otherwise, you can request an increase for these limits using the Amazon VPC limits form. For some of these limits, you can view your current limit using the Limits page of the Amazon EC2 console.
If you request a limit increase that applies per resource, we increase the limit for all resources in the region. For example, the limit for security groups per VPC applies to all VPCs in the region.
VPC and Subnets
DNS
For more information, see DNS Limits. | https://docs.aws.amazon.com/vpc/latest/userguide/amazon-vpc-limits.html | 2018-09-18T19:44:54 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.aws.amazon.com |
Deploying Client Security
Applies To: Forefront Client Security
This section provides steps for deploying Client Security. The section contains the following topics:
Overview: deploying Client Security
Preparing your network for deployment
Approving the client components in WSUS
Configuring automatic updates
Disabling or uninstalling other antivirus or antispyware protection
Deploying Client Security to the client computers
Verifying your Client Security deployment
Deploying manually to each client computer | https://docs.microsoft.com/en-us/previous-versions/tn-archive/bb404255(v=technet.10) | 2018-09-18T19:26:27 | CC-MAIN-2018-39 | 1537267155676.21 | [] | docs.microsoft.com |
.
. However, because the majority of local storage devices do not support multiple connections, you cannot use multiple paths to access local storage.
ESXi supports a variety of local storage devices, including SCSI, IDE, SATA, USB, and SAS storage systems. Regardless of the type of storage you use, your host hides a physical storage layer from virtual machines.
You cannot use IDE/ATA or USB drives to store virtual machines.
Local storage does not support sharing across multiple hosts. Only one host has access to a datastore on a local storage device. As a result, although you can use local storage to create virtual machines, it prevents you from using VMware features that require shared storage, such as HA and vMotion.
However, if you use a cluster of hosts that have just local storage devices, you can implement Virtual SAN. Virtual SAN transforms local storage resources into software-defined shared storage and allows you to use features that require shared storage. For details, see the Administering VMware Virtual SAN documentation. | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.storage.doc/GUID-5F08F7A7-6D8E-45A6-B408-278B3A4C7D4C.html | 2018-09-18T19:17:54 | CC-MAIN-2018-39 | 1537267155676.21 | [array(['images/GUID-F547F7A0-5711-4329-9B2F-1C4C3C488A49-high.png',
'A host accesses local storage.'], dtype=object) ] | docs.vmware.com |
Systeminfo Binding
System information Binding provides operating system and hardware information including:
- Operating system name, version and manufacturer;
- CPU average recent load and load for last 1, 5, 15 minutes, name, description, number of physical and logical cores, running threads number, system uptime;
- Free, total and available memory;
- Free, total and available swap memory;
- Hard drive name, model and serial number;
- Free, total, available storage space and storage type (NTSFS, FAT32 ..);
- Battery information - estimated remaining time, capacity, name;
- Sensors information - CPU voltage and temperature, fan speeds;
- Display information;
- Network IP,name and adapter name, mac, data sent and received, packages sent and received;
- Process information - size of RAM memory used, CPU load, process name, path, number of threads.
The binding uses OSHI API to access this information regardless of the underlying platform and does not need any native parts.
Supported Things
The binding supports only one thing type - computer. This thing represents a system with one storage volume, one display device and one network adapter.
The thing has the following properties:
cpu_logicalCores- Number of CPU logical cores
cpu_physicalCores- Number of CPU physical cores
os_manufacturer- The manufacturer of the operating system
os_version- The version of the operating system
os_family- The family of the operating system
If multiple storage or display devices support is needed, new thing type has to be defined. This is workaround until [this issue] () is resolved and it is possible to add dynamically channels to DSL defined thing.
Discovery
The discovery service implementation tries to resolve the computer name. If the resolving process fails, the computer name is set to “Unknown”. In both cases it creates a Discovery Result with thing type computer.
When [this issue] () is resolved it will be possible to implement creation of dynamic channels (e.g. the binding will scan how much storage devices are present and create channel groups for them). At the moment this is not supported.
Binding configuration
No binding configuration required.
Thing configuration
The configuration of the Thing gives the user the possibility to update channels at different intervals.
The thing has two configuration parameters:
- interval_high - refresh interval in seconds for channels with ‘High’ priority configuration. Default value is 1 s.
- interval_medium - refresh interval in seconds for channels with ‘Medium’ priority configuration. Default value is 60s.
That means that by default configuration:
- channels with priority set to ‘High’ are updated every second
- channels with priority set to ‘Medium’ - every minute
- channels with priority set to ‘Low’ only at initializing or at Refresh command.
For more info see channel configuration
Channels
The binding support several channel group. Each channel group, contains one or more channels. In the list below, you can find, how are channel group and channels id`s related.
thing
computer
- group
memorychannel
available, total, used, available_percent
- group
swapchannel
available, total, used, available_percent
- group
storage(deviceIndex) channel
available, total, used, available_percent, name, description, type
- group
drive(deviceIndex) channel
name, model, serial
- group
display(deviceIndex) channel
information
- group
battery(deviceIndex) channel
name, remainingCapacity, remainingTime
- group
cpuchannel
name, description, load, load1, load5, load15, uptime
- group
sensorschannel
cpuTemp, cpuVoltage, fanSpeed
- group
network(deviceIndex) channel
ip, mac, networkDisplayName, networkName, packagesSent, packagesReceived, dataSent, dataReceived
- group
process(pid) channel
load, used, name, threads, path
The groups marked with “deviceIndex” may have device index attached to the Channel Group.
- channel ::= chnanel_group & (deviceIndex) & # channel_id
- deviceIndex ::= number > 0
- (e.g. storage1#available)
The group
process is using a configuration parameter “pid” instead of “deviceIndex”. This makes possible to changed the tracked process at runtime.
The binding uses this index to get information about a specific device from a list of devices. (e.g on a single computer could be installed several local disks with names C:\, D:\, E:\ - the first will have deviceIndex=0, the second deviceIndex=1 ant etc). If device with this index is not existing, the binding will display an error message on the console.
In the table is shown more detailed information about each Channel type. The binding introduces the following channels:
Channel configuration
All channels can change its configuration parameters at runtime. The binding will trigger the necessary changes (reduce or increase the refresh time, change channel priority or the process that is being tracked).
Each of the channels has a default configuration parameter - priority. It has the following options:
- High
- Medium
- Low
Channels from group ‘‘process’’ have additional configuration parameter - PID (Process identifier). This parameter is used as ‘deviceIndex’ and defines which process is tracked from the channel. This makes the channels from this groups very flexible - they can change its PID dynamically.
Parameter PID has a default value 0 - this is the PID of the System Idle process in Windows OS.
Reporting issues
As already mentioned this binding depends heavily on the OSHI API to provide the operating system and hardware information.
Take a look at the console for an ERROR log message.
If you find an issue with a support for a specific hardware or software architecture please take a look at the OSHI issues, your problem might have be already reported and solved! Feel free to open a new issue there with the log message and the and information about your software or hardware configuration.
After the issue is resolved the binding has to be updated.
For a general problem with the binding report the issue directly to openHAB.
Updating this binding
OSHI project has a good support and regularly updates the library with fixes to issues and new features.
In order to update the version used in the binding, follow these easy steps:
- Go to the OSHI github repo and download the newest version available of the module oshi-core or download the jar from the Maven Central. Check if the versions of the OSHI dependencies as well (jna and jna-platform) are changed;
- Replace the jars in lib folder;
- Modify the .classpath file with the new versions of the jars;
- Modify the header Bundle-ClassPath in the META-INF/MANIFEST.MF.
Full example
Things:
systeminfo:computer:work [interval_high=3, interval_medium=60]
Items:
/* Network information*/
String Network_AdapterName        { channel="systeminfo:computer:work:network#networkDisplayName" }
String Network_Name               { channel="systeminfo:computer:work:network#networkName" }
String Network_IP                 { channel="systeminfo:computer:work:network#ip" }
String Network_Mac                { channel="systeminfo:computer:work:network#mac" }
Number Network_DataSent           { channel="systeminfo:computer:work:network#dataSent" }
Number Network_DataReceived       { channel="systeminfo:computer:work:network#dataReceived" }
Number Network_PackagesSent       { channel="systeminfo:computer:work:network#packagesSent" }
Number Network_PackagesReceived   { channel="systeminfo:computer:work:network#packagesReceived" }
/* CPU information*/
String CPU_Name                   { channel="systeminfo:computer:work:cpu#name" }
String CPU_Description            { channel="systeminfo:computer:work:cpu#description" }
Number CPU_Load                   { channel="systeminfo:computer:work:cpu#load" }
Number CPU_Load1                  { channel="systeminfo:computer:work:cpu#load1" }
Number CPU_Load5                  { channel="systeminfo:computer:work:cpu#load5" }
Number CPU_Load15                 { channel="systeminfo:computer:work:cpu#load15" }
Number CPU_Threads                { channel="systeminfo:computer:work:cpu#threads" }
Number CPU_Uptime                 { channel="systeminfo:computer:work:cpu#uptime" }
/* Drive information*/
String Drive_Name                 { channel="systeminfo:computer:work:drive#name" }
String Drive_Model                { channel="systeminfo:computer:work:drive#model" }
String Drive_Serial               { channel="systeminfo:computer:work:drive#serial" }
/* Storage information*/
String Storage_Name               { channel="systeminfo:computer:work:storage#name" }
String Storage_Type               { channel="systeminfo:computer:work:storage#type" }
String Storage_Description        { channel="systeminfo:computer:work:storage#description" }
Number Storage_Available          { channel="systeminfo:computer:work:storage#available" }
Number Storage_Used               { channel="systeminfo:computer:work:storage#used" }
Number Storage_Total              { channel="systeminfo:computer:work:storage#total" }
Number Storage_Available_Percent  { channel="systeminfo:computer:work:storage#availablePercent" }
/* Memory information*/
Number Memory_Available           { channel="systeminfo:computer:work:memory#available" }
Number Memory_Used                { channel="systeminfo:computer:work:memory#used" }
Number Memory_Total               { channel="systeminfo:computer:work:memory#total" }
Number Memory_Available_Percent   { channel="systeminfo:computer:work:memory#availablePercent" }
/* Swap memory information*/
Number Swap_Available             { channel="systeminfo:computer:work:swap#available" }
Number Swap_Used                  { channel="systeminfo:computer:work:swap#used" }
Number Swap_Total                 { channel="systeminfo:computer:work:swap#total" }
Number Swap_Available_Percent     { channel="systeminfo:computer:work:swap#availablePercent" }
/* Battery information*/
String Battery_Name               { channel="systeminfo:computer:work:battery#name" }
Number Battery_RemainingCapacity  { channel="systeminfo:computer:work:battery#remainingCapacity" }
Number Battery_RemainingTime      { channel="systeminfo:computer:work:battery#remainingTime" }
/* Display information*/
String Display_Description        { channel="systeminfo:computer:work:display#information" }
/* Sensors information*/
Number Sensor_CPUTemp             { channel="systeminfo:computer:work:sensors#cpuTemp" }
Number Sensor_CPUVoltage          { channel="systeminfo:computer:work:sensors#cpuVoltage" }
Number Sensor_FanSpeed            { channel="systeminfo:computer:work:sensors#fanSpeed" }
/* Process information*/
Number Process_load               { channel="systeminfo:computer:work:process#load" }
Number Process_used               { channel="systeminfo:computer:work:process#used" }
String Process_name               { channel="systeminfo:computer:work:process#name" }
Number Process_threads            { channel="systeminfo:computer:work:process#threads" }
String Process_path               { channel="systeminfo:computer:work:process#path" }
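To display some of these items, a matching sitemap could look like the following sketch (the sitemap name, labels, and selection of items are illustrative):
sitemap systeminfo label="System Information"
{
    Frame label="Network" {
        Text item=Network_IP
        Text item=Network_Mac
    }
    Frame label="CPU" {
        Text item=CPU_Load
        Text item=CPU_Uptime
    }
    Frame label="Memory" {
        Text item=Memory_Available_Percent
    }
}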
Troubleshooting PCF on Azure
Page last updated:
This topic describes how to troubleshoot known issues when deploying Pivotal Cloud Foundry (PCF) on Azure.
Troubleshoot Installation
Because HAProxy sits between the ALB and the router, you can resolve the problem by setting the HAProxy timeout value to be greater than the timeout value of the ALB. Perform the following steps:
- Target your BOSH Director by following the steps in the Log into BOSH section of the Advanced Troubleshooting with the BOSH CLI topic.
- If necessary, download your PCF manifest:
$ bosh download manifest DEPLOYMENT-NAME
If you do not know your deployment name, run bosh deployments to display a list of available deployments.
- Set your current deployment to your PCF manifest:
$ bosh deployment PATH-TO-PCF-MANIFEST
- Edit your deployment:
$ bosh edit deployment
- Locate the
haproxy section and add the following property:
request_timeout_in_seconds: 180
- Save and exit the edited manifest.
- Redeploy:
$ bosh deploy
Cannot Copy the Ops Manager Image
Symptom
Cannot copy the Ops Manager image into your storage account when completing Step 2: Copy Ops Manager Image of the Launching an Ops Manager Director Instance with an ARM Template topic or Step 4: Boot Ops Manager of the Launching an Ops Manager Director Instance on Azure without an ARM Template topic.
Explanation
You have an outdated version of the Azure CLI. You need the Azure CLI version 0.9.18 or greater. Run
azure --version from the command line to display your current Azure CLI version.
Solution
Upgrade your Azure CLI to the current version by reinstalling the new version. Run
npm update -g azure-cli from the command line, or follow the procedures below for your operating system:
- Mac OS X: Download and run the Mac OS X installer.
- Windows: Download and run the Windows installer. Install the Azure CLI on Windows 10. Use the command line, not PowerShell, to run the Azure CLI.
- Linux: Download the Linux tar file. Install the Azure CLI on Ubuntu Server 14.04 LTS. To install the Azure CLI on Linux, you must first install Node.js and npm, and then run
sudo npm install -g PATH-TO-TAR-FILE.
Deployment Fails at “bosh-init”
Symptom
After clicking Apply Changes to install Ops Manager and Elastic Runtime, the deployment fails at
bosh-init: "bosh-init deploy /var/tempest/workspaces/default/deployments/bosh.yml"; Duration: 328s; Exit Status: 1 Exited with 1.
Explanation
You provided a passphrase when creating your key pair in the Step 2: Copy Ops Manager Image section of the Launching an Ops Manager Director Instance with an ARM Template topic or Step 4: Boot Ops Manager section of the Launching an Ops Manager Director Instance on Azure without an ARM Template topic.
Solution
Create a new key pair with no passphrase and redo the installation, beginning with the step for creating a VM against the Ops Manager image in the Step 2: Copy Ops Manager Image section of the Launching an Ops Manager Director Instance with an ARM Template topic or the Step 4: Boot Ops Manager section of the Launching an Ops Manager Director Instance on Azure without an ARM Template topic. | http://docs.pivotal.io/pivotalcf/1-10/customizing/azure-troubleshooting.html | 2017-05-22T23:21:10 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.pivotal.io |
Constributing to Abilian Core¶
Project on GitHub¶
The project is hosted on GitHub at:.
Participation in the development of Abilian is welcome and encouraged, through the various mechanisms provided by GitHub:
License and copyright¶
The Abilian code is copyrighted by Abilian SAS, a french company.
It is licenced under the LGPL (Lesser General Public License), which means you can reuse the product as a library.
If you contribute to Abilian, we ask you to transfer your rights to your contribution to us.
In case you have questions, you’re welcome to contact us.
Build Status¶
We give a great deal of care to the quality of our software, and try to use all the tools that are at our disposal to make it rock-solid.
This includes:
- Having an exhaustive test suite.
- Using continuous integration (CI) servers to run the test suite on every commit.
- Running tests.
- Using our products daily.
You can check the build status:
You can also check the coverage reports:
Releasing¶
We’re now using setuptools_scm to manage version numbers.
It comes with some conventions on its own when it comes to releasing.
Here’s what you should do to make a new release on PyPI:
- Check that the CHANGES.rst file is correct.
- Commit.
- Tag (ex: git tag 0.3.0), using numbers that are consistent with semantic versioning.
- Run python setup.py sdist upload. | http://abilian-core.readthedocs.io/en/latest/contributing.html | 2017-05-22T23:09:27 | CC-MAIN-2017-22 | 1495463607242.32 | [] | abilian-core.readthedocs.io |
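Put together, a release might look like the following minimal sketch (the version number and commit message are illustrative):
git commit -am "Prepare release 0.3.0"
git tag 0.3.0
python setup.py sdist upload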
Logging in openHAB
This article describes the logging functionality in openHAB 2. This includes how to access logging information and configure logging for user-defined rules.
There are two ways to check log entries:
- Through files stored on the file system
- During runtime in the Karaf Console
File
The log entries are written to log files on the file system: openhab.log for the main log and events.log for events such as item state changes. Depending on the installation, these files are located in the userdata/logs folder (manual setup) or in /var/log/openhab2 (apt/deb-based setup).
Karaf Console
The Karaf console allows you to monitor the log in realtime.
The log shell comes with the following commands:
log:clear: clear the log
log:display: display the last log entries
log:exception-display: display the last exception from the log
log:get: show the log levels
log:set: set the log levels
log:tail: continuous display of the log entries
For example, the following command enables a realtime view of the log entries:
log:tail
A useful functionality is that the realtime view can also be restricted to specific loggers, as described in the following sections.
The config file for logging is
org.ops4j.pax.logging.cfg located in the
userdata/etc folder (manual setup) or in
/var/lib/openhab2/etc (apt/deb-based setup).
Defining what to log
In order to see the messages, logging needs to be activated by defining what should be logged and at which level of detail. This can be done in Karaf using the following console command:
log:set LEVEL package.subpackage
What to log is defined by package.subpackage, which in most cases is a binding (like org.openhab.binding.sonos).
The detail of logging is defined by one of the following levels:
- DEFAULT
- OFF
- ERROR
- WARN
- INFO
- DEBUG
- TRACE
The levels build a hierarchy, with ERROR logging critical messages only and DEBUG logging nearly everything. DEBUG combines all logs from levels 3 to 6 (ERROR through DEBUG), while TRACE adds further messages in addition to what DEBUG displays. Note that log levels set with log:set are not persistent and will be lost upon restart. To configure them in a persistent way, the settings have to be added to the configuration file.
Create Log Entries in Rules
It is also possible to create own log entries in rules. This is especially useful for debugging purposes.
For each log level there is an corresponding command for creating log entries. These commands require two parameters: the subpackage (here:
Demo) and the text which should appear in the log:
logError("Demo", "This is a log entry of type Error!")
logWarn("Demo", "This is a log entry of type Warn!")
logInfo("Demo", "This is a log entry of type Info!")
logDebug("Demo", "This is a log entry of type Debug!")
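For example, a complete rule using one of these statements could look like the following sketch; the item name Demo_Switch is only an assumption for illustration:
rule "Log Demo_Switch changes"
when
    Item Demo_Switch changed
then
    logInfo("Demo", "Demo_Switch changed to " + Demo_Switch.state)
end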
In order to see the messages, logging for the message class has to be activated. The main package is predefined (
org.eclipse.smarthome.model.script) and the subpackage needs to be concatenated:
log:set DEBUG org.eclipse.smarthome.model.script.Demo
The output for the above log statement of type DEBUG is:
2016-06-04 16:28:39.482 [DEBUG] [.eclipse.smarthome.model.script.Demo] - This is a log entry of type DEBUG!
Logging to a separate file
To write certain log entries to their own file, add a new logger and a new file appender to the configuration file. New logger:
# Logger - Demo.log log4j.logger.org.eclipse.smarthome.model.script.Demo = DEBUG, Demo
New file appender:
# File appender - Demo.log log4j.appender.Demo=org.apache.log4j.RollingFileAppender log4j.appender.Demo.layout=org.apache.log4j.PatternLayout log4j.appender.Demo.layout.ConversionPattern=%d{yyyy-MM-dd HH:mm:ss.SSS} [%-5.5p] [%-36.36c] - %m%n log4j.appender.Demo.file=${openhab.logdir}/Demo.log log4j.appender.Demo.append=true log4j.appender.Demo.maxFileSize=10MB log4j.appender.Demo.maxBackupIndex=10 | http://docs.openhab.org/administration/logging.html | 2017-05-22T23:33:09 | CC-MAIN-2017-22 | 1495463607242.32 | [] | docs.openhab.org |
Relative Size
Harmony includes a field chart in its Drawing view to help set the relative size of elements.
(Figures from the original page: Relative Size - Nautilus, Gava Productions; Character Line Up - Nautilus, Gava Productions; Field Chart; Reposition All Drawings - Translate; Reposition All Drawings - Scale; Brush Preset Window.)
Repository locking¶
Kallithea has a repository locking feature, disabled by default. When enabled, every initial clone and every pull gives users (with write permission) the exclusive right to do a push.
When repository locking is enabled, repositories get a
locked flag.
The hg/git commands
hg/git clone,
hg/git pull,
and
hg/git push influence this state:
- A
cloneor
pullaction locks the target repository if the user has write/admin permissions on this repository.
- Kallithea will remember the user who locked the repository so only this specific user can unlock the repo by performing a
pushcommand.
- Every other command on a locked repository from this user and every command from any other user will result in an HTTP return code 423 (Locked). Additionally, the HTTP error will mention the user that locked the repository (e.g., “repository <repo> locked by user <user>”).
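As an illustration, the flow with two users might look like this (the host and repository names are placeholders):
# User A clones the repository; with locking enabled, this locks it for user A
hg clone https://kallithea.example.com/projects/repo
# Until user A pushes, commands from other users fail with HTTP 423 (Locked)
# User A's push releases the lock
hg push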
Each repository can be manually unlocked by an administrator from the repository settings menu. | http://kallithea.readthedocs.io/en/stable/usage/locking.html | 2017-05-22T23:12:24 | CC-MAIN-2017-22 | 1495463607242.32 | [] | kallithea.readthedocs.io |
Changing Resource Structure With Evolution¶
As you develop your software and make changes to structures, your existing content will be in an old state. Whether in production or during development, you need a facility to correct out-of-date data.
Evolution provides a rich facility for "evolving" your resources to match changes during development. Substance D's evolution facility gives Substance D developers full control over the data updating process:
- Write scripts for each package that get called during an update
- Set revision markers in the data to indicate the revision level a database is at
- Console script and SDI GUI that can be run to "evolve" a database
Running an Evolution from the Command Line¶
Substance D applications generate a console script at
bin/sd_evolve. Running this without arguments displays some help:
$ bin/sd_evolve
Requires a config_uri as an argument

sd_evolve [--latest] [--dry-run] [--mark-finished=stepname] [--mark-unfinished=stepname] config_uri

Evolves new database with changes from scripts in evolve packages
   - with no arguments, evolve displays finished and unfinished steps
   - with the --latest argument, evolve runs scripts as necessary
   - with the --dry-run argument, evolve runs scripts but does not issue any commits
   - with the --mark-finished argument, marks the stepname as finished
   - with the --mark-unfinished argument, marks the stepname as unfinished

e.g. sd_evolve --latest etc/development.ini
Running with your INI file, as explained in the help, shows information about the version numbers of various packages:
$ bin/sd_evolve etc/development.ini

Finished steps:

    2013-06-14 13:01:28 substanced.evolution.legacy_to_new

Unfinished steps:
This shows that one evolution step has already been run and that there are no unfinished evolution steps.
Running an Evolution from the SDI¶
The Evolution section of the
Database tab of the Substance D root object
allows you to do what you might have otherwise done using the
sd_evolve
console script described above.
In some circumstances when Substance D itself needs to be upgraded, you may
need to use the
sd_evolve script rather than the GUI. For example, if the
way that Substance D
Folder objects work is changed and folder objects need
to be evolved, it may be impossible to view the evolution GUI, and you may need
to use the console script.
Autoevolve¶
If you add
substanced.autoevolve = true within your application .ini file,
all pending evolution upgrade steps will be run when your application starts.
Alternatively you can use the SUBSTANCED_AUTOEVOLVE environment variable (e.g. export SUBSTANCED_AUTOEVOLVE=true) to do the same thing.
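In the .ini file this looks like the following minimal sketch (the app:main section name follows the usual Pyramid convention and may differ in your setup):
[app:main]
# ... your existing application settings ...
substanced.autoevolve = true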
Adding Evolution Support To a Package¶
Let's say we have been developing an
sdidemo package and,
with content already in the database, we want to add evolution support.
Our
sdidemo package is designed to be included into a site,
so we have the traditional Pyramid
includeme support. In there we
add the following:
import logging

logger = logging.getLogger('evolution')

def evolve_stuff(root, registry):
    logger.info('Stuff evolved.')

def includeme(config):
    config.add_evolution_step(evolve_stuff)
We've used the
substanced.evolution.add_evolution_step() API to add an
evolution step in this package's
includeme function.
Running
sd_evolve without
--latest (meaning,
without performing an evolution) shows that Substance D's evolution now
knows about our package:
$ bin/sd_evolve etc/development.ini

Finished steps:

    2013-06-14 13:01:28 substanced.evolution.legacy_to_new

Unfinished steps:

    sdidemo.evolve_stuff
Let's now run
sd_evolve "for real". This will cause the evolution step to
be executed and marked as finished.
$ bin/sd_evolve --latest etc/development.ini

2013-06-14 13:22:51,475 INFO [evolution][MainThread] Stuff evolved.

Evolution steps executed:

    sdidemo.evolve_stuff
This examples shows a number of points:
- Each package can easily add evolution support via the
config.add_evolution_step()directive. You can learn more about this directive by reading its API documentation at
substanced.evolution.add_evolution_step().
- Substance D's evolution service looks at the database to see which steps haven't been run, then runs all the needed evolve scripts, sequentially, to bring the database up to date.
- All changes within an evolve script are in the scope of a transaction. If all the evolve scripts run to completion without exception, the transaction is committed.
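A more realistic evolve step usually walks existing content and fixes it up in place. The following sketch assumes an application-specific resource attribute named description; the attribute name and the step are purely illustrative:
import logging

logger = logging.getLogger('evolution')

def add_description_attribute(root, registry):
    # Give every existing top-level resource an empty 'description'
    # attribute if it does not already have one (illustrative example).
    for name, resource in root.items():
        if getattr(resource, 'description', None) is None:
            resource.description = ''
            logger.info('Added description to %s', name)

def includeme(config):
    config.add_evolution_step(add_description_attribute)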
Manually Marking a Step As Evolved¶
In some cases you might have performed the work in an evolve step by hand and
you know there is no need to re-perform that work. You'd like to mark the step
as finished for one or more evolve scripts, so these steps don't get run. The
--mark-step-finished argument to
sd_evolve accomplishes this. The
"Mark finished" button in the SDI evolution GUI does the same.
Baselining¶
Evolution is baselined at first startup, when there is no initial list of finished steps in the database. Substance D, in the root factory, says: "I know all the steps participating in evolution, so when I first create the root object, I will set all of those steps to finished."
If you wish to perform something after
Root was
created, see Affecting Content Creation. | http://substanced.readthedocs.io/en/latest/evolution.html | 2017-07-20T22:48:42 | CC-MAIN-2017-30 | 1500549423512.93 | [] | substanced.readthedocs.io |
Name
gettextfile
Auth
yes
Description
Download a file in a different character encoding. Takes fileid (or path) as a parameter and returns the contents of the file in the requested character encoding. The file is streamed as a response to this method by the content server.
Optional parameter fromencoding specifies the original character encoding of the file. If omitted, it will be guessed based on the contents of the file.
Optional parameter toencoding specifies the requested character encoding for the output. The default is utf-8.
If the optional parameter forcedownload is set, the file will be served by the server with content type application/octet-stream, which typically forces user agents to save the file.
Alternatively you can provide parameter contenttype with the Content-Type you wish the server to send. If these parameters are not set, the content type will depend on the extension of the file.
URL
Required
Use fileid or path
Optional
Output
On success, the data is output directly by the API server; no links to content servers are provided. Unless you provide invalid encodings in fromencoding or toencoding, you can safely assume that this method will not fail.
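As a usage sketch, a request could look like this (the API host, fileid, and auth token are placeholders):
curl "https://api.pcloud.com/gettextfile?fileid=1729212&toencoding=utf-8&auth=ACCESS_TOKEN"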
To create an offsite payment processor provider for the payment processor you must perform the following:
Handles the offsite response after redirect back to our site.
Handles the offsite notification sent by the gateway via an asynchronous call. Usually in PayPal this functionality is called IPN, in WorldPay it is called Payment Response, and so on.
The methods described above are responsible for handling the offsite return, which is when the offsite payment gateway redirects back to our site. The redirect is automatically handled by a handler on our site which then calls the HandleOffsiteReturn method. The request object passed there contains all the information which the offsite gateway has returned to us. The other method, HandleOffsiteNotification, is also invoked from the offsite gateway (if supported), but this call is made from their backend to our site to tell us the result of a transaction. This call is not visible in the client browser; it is a direct call between them (the PayPal IPN service, for example) and our handler, which then invokes this method.
Here is a sample code:
public class CustomOffsiteProvider : IOffsitePaymentProcessorProvider
{
    /// Implementation of the regular IPaymentProcessorProvider interface methods goes here

    public IPaymentResponse HandleOffsiteNotification(int orderNumber, HttpRequest request, PaymentMethod paymentMethod)
    {
        IPaymentResponse response = new PaymentResponse() { IsOffsitePayment = true };
        var paymentStatus = request.Params["offsite-gateway-specific-notification-param-name"];

        //TODO: implement the rest of the logic here
        //You must populate the following properties of the response object:
        // IsSuccess, IsAuthorizeOnlyTransaction, GatewayTransactionID

        return response;
    }

    public IPaymentResponse HandleOffsiteReturn(int orderNumber, HttpRequest request, PaymentMethod paymentMethod)
    {
        IPaymentResponse response = new PaymentResponse() { IsOffsitePayment = true };
        var paymentStatus = request.Params["offsite-gateway-specific-return-param-name"];

        //TODO: implement the rest of the logic here
        //You must populate the following properties of the response object:
        // IsSuccess, IsAuthorizeOnlyTransaction, GatewayTransactionID

        return response;
    }
}
//end CustomOffsiteProvider
After you have implemented your provider class, depending on your custom provider, you may want to specify a return and redirect URL. You specify these in the SubmitTransaction method. In this class, you can specify either of the following Sitefinity CMS properties:
The way you pass the properties in the SubmitTransaction method is also specified in your payment provider's documentation.
For example: postValues.Add("notificationURL", data.GatewayNotificationUrl);
In the above example, notificationURL is a property specified by the payment provider.
Advanced Captive Portal Techniques
How to create elegant, custom-branded captive portal pages with CSS, JavaScript, and Flash.
Log into SputnikNet, then select Captive Portals under the CONFIG menu.
You have full control over the HTML used in captive portal pages, and can use advanced techniques such as CSS, JavaScript, and Adobe Flash to build extremely sophisticated designs. Here's we'll create a captive portal that utilizes all of these options.
To create or modify captive portals, select "Captive Portals" from the "CONFIG" menu.
Add a new captive portal.
Name your new captive portal.
Apply appropriate authentication system(s) to your new captive portal.
Review authentication system content.
Upload graphics to your captive portal.
Follow instructions in online documentation chapter titled "Branding your Wi-Fi Network with Captive Portals" to upload graphics for your captive portal. Once uploaded, you can use graphics in any captive portal.
Preview your captive portal with basic CSS.
If you want CSS style definitions to apply to both your captive portal content and the login form from the authentication system, place them between [style] and [/style] tags.
Here is the complete captive portal HTML previewed above. We've created an asterisk ("all") CSS selector to apply styles-- in this case, a dark blue background and white text-- to everything in the captive portal.
[style]
* {
background-color: #071f31;
color: white;
}
[/style]
Create walled garden policies for external content.
Elements fetched from external sites for display on captive portal pages need corresponding walled garden ("allow") policies so that they are not blocked before the user logs in. Here we intend to add some JavaScript from "mrodeo.com". First we'll create the walled garden policy, then we'll apply it to the Rodeo captive portal.
For more details on walled garden policies, see online documentation chapter titled "Branding using Walled Gardens and Redirects".
Add JavaScript to your captive portal page.
To use JavaScript in a captive portal page, it needs to be saved to a site external to SputnikNet. Here the JavaScript calls a nice animation of the photos of past rodeo winners from mrodeo.com. The following is pasted into the "portal content" field just after the CSS definition [/style] tag:
<script type="text/javascript" src=""></script>
<script type="text/javascript" src=""></script>
<div id="BZ06B0C49547644155862B"><div style="background-color:#ffffff;color:#000000;padding:1em;"><div style="margin: 0pt auto; max-width: 500px; text-align: justify;"></div></div></div>
Add a graphic and position it using CSS.
We'll add a CSS definition to place a logo over the JavaScript banner. The following goes between the [style] and [/style] tags:
img#mandan {
position: absolute;
top: 100px;
left: 580px;
}
The following HTML can go anywhere in the content below the closing [/style] tag:
<img id="mandan" src="/sputnik/pc/14/33" alt="mandan-230.png">
Use a table to lay out additional design elements.
We'll add a few more graphics to the bottom of the captive portal, flanking the login form. An HTML table is an easy way to do this:
<div align="center">
<table border=0">
<tr>
<td width="200" ><img src="/sputnik/pc/14/36" alt="mandan-tix.png" align="right"></td>
<td width="60"> </td>
<td width="400">[login]</td>
<td><img src="/sputnik/pc/14/34" alt="bareback-horse-rider.png"></td>
</tr>
</table>
</div>
Style the login form with CSS.
Add more CSS selectors to style the greeting text (which we tagged with class "greeting"), the login form itself, and the login button. Sputnik provides a few CSS classes to help you style these elements, but each element is not present in every authentication system. They are:
- login_form: the overall form
- login_username: username field
- login_password: password field
- login_error: error message displayed for failed login
- login_submit: login button
NOTE: since some authentication systems don't include all of these classes, it's a good idea to use your browser to "view source" of the captive portal under construction. For example, the guest authentication system doesn't use the "login_submit" class for the login button, but you can select the element based on input type, as shown below:
p.greeting {
font: normal x-large Georgia, "Times New Roman", Times, serif;
text-align: center;
}
.login_form {
font: large "Lucida Grande", Lucida, Verdana, sans-serif;
}
input[type="button"],input[type="submit"] {
font: small-caps xx-large Georgia, "Times New Roman", Times, serif;
color: white;
background-color: #369;
}
Add some more flair to the login form.
Since in this example the captive portal will be viewed outdoors, we'll highlight the access code and login button in its "hover" state with some additional CSS definitions.
You have full control over the branding of your captive portal and can create any kind of customer experience you desire.
input[type="text"] {
font: bold x-large "Lucida Grande", Lucida, Verdana, sans-serif;
padding:3px;
color: yellow;
}
input[type="button"]:hover, input[type="submit"]:hover {
font: small-caps xx-large Georgia, "Times New Roman", Times, serif;
color: #369;
background-color: yellow;
} | http://docs.sputnik.com/m/sputniknet/l/4074-advanced-captive-portal-techniques | 2017-07-20T22:39:25 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.sputnik.com |
Contributing¶
Note
Portions of this page have been modified from the excellent OpenComparison project docs.
Contributing CDL and EDL Samples¶
Please, please, please submit samples of the following formats:
- FLEx
- ALE
- CMX
- CCC
- CDL
These are complex formats, and seeing real world samples helps write tests that ensure correct parsing of real world EDLs and CDLs. If you don’t even see a format of CDL listed that you know exists, open an issue at the github issues page asking for parse/write support for the format, and include a sample.
Issues & Bugs¶
Take a look at the issues page and if you see something that you think you can bang out, leave a comment saying you’re going to take it on. While many issues are already assigned to the principal authors, just because it’s assigned doesn’t mean any work has begun.
Feel welcome to post issues, feature requests and bugs that aren’t present.
Workflow¶
cdl_convert is a GitFlow workflow project. If you’re not familiar with
GitFlow, please take a moment to read the workflow documentation. Essentially
it means that all work beyond tiny bug fixes needs to be done on it’s own
feature branch, called something like
feature/thing_I_am_fixing.
After you fork the repository, take a second to create a new feature branch from
the
develop branch and checkout that newly created branch.
Submitting Your Fix¶
Once you’ve pushed your feature branch to GitHub, it’s time to generate a pull
request back to the central
cdl_convert repository.
The pull request let’s you attach a comment, and when you’re addressing an issue it’s imperative to link to that issue in the initial pull request comment. We’ll shortly be notified of your request and it will be reviewed as soon as possible.
Warning
If you continue to add commits to the feature branch you submitted as
a pull request, the pull request will be updated with those changes (as
long as you push those changes to GitHub). This is why you should not
submit a pull request of the
develop branch.
Pull Request Tips¶
cdl_convert really needs your collaboration, but we only have so much time
to work on the project and merge your fixes and features in. There are some easy-to-follow guidelines that will ensure your pull request is accepted and integrated quickly.
Run the tests!¶
Before you submit a pull request, please run the entire test suite via
$ python setup.py test
If the tests are failing, it’s likely that you accidentally broke something.
Note which tests are failing, and how your code might have affected them. If
your change is intentional- for example you made it so urls all read
https://
instead of
http://, adjust the test suite, get it back into a passing state,
and then submit it.
If your code fails the tests (Travis-ci.org checks all pull requests when you create them) it will be rejected.
Add tests for your new code¶
If your pull request adds a feature but lacks tests then it will be rejected.
Tests are written using the standard unittest framework. Please keep test cases as simple as possible while maintaining a good coverage of the code you added.
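A minimal test case has the following shape; the class name, fixture, and assertion are illustrative placeholders to be replaced with checks against the cdl_convert behavior you actually changed:
import unittest


class TestMyNewFeature(unittest.TestCase):
    """Illustrative skeleton only; replace with real assertions."""

    def setUp(self):
        # Import cdl_convert and build the smallest fixture your test needs.
        self.sample_value = 1.0

    def test_sample_value_is_preserved(self):
        # Replace this trivial check with assertions against the behavior
        # you added or changed in cdl_convert.
        self.assertEqual(self.sample_value, 1.0)


if __name__ == '__main__':
    unittest.main()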
Pull requests should be as small/atomic as possible. Large, wide-sweeping changes in a pull request will be rejected, with comments to isolate the specific code in your pull request.
Follow PEP-8 and keep your code simple!¶
Use full, descriptive English words for names rather than abbreviations, for example:
- package instead of pkg
- grid instead of g
- my_function_that_does_things instead of mftdt
If the code style doesn’t follow PEP 8 , it’s going to be rejected. | http://cdl-convert.readthedocs.io/en/latest/contributing.html | 2017-07-20T22:27:11 | CC-MAIN-2017-30 | 1500549423512.93 | [] | cdl-convert.readthedocs.io |
Name
uploadfile
Auth
yes
Description
Upload a file.
String path or int folderid specify the target directory. If both are omitted the root folder is selected.
Parameter string progresshash can be passed. Same should be passed to uploadprogress method.
If nopartial is set, partially uploaded files will not be saved (that is when the connection breaks before file is read in full). If renameifexists is set, on name conflict, files will not be overwritten but renamed to name like filename (2).ext.
Multiple files can be uploaded, using POST with multipart/form-data encoding. If passed by POST, the parameters must come before files. All files are accepted, the name of the form field is ignored. Multiple files can come one or more HTML file controls.
Filenames must be passed as filename property of each file, that is - the way browsers send the file names.
If a file with the same name already exists in the directory, it is overwritten and old one is saved as revision. Overwriting a file with the same data does nothing except updating the modification time of the file.
URL
Required
Optional
Output
Returns two arrays - fileids and metadata.
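As a usage sketch, a multipart upload with curl could look like this (the API host, target folder, file name, and auth token are placeholders):
curl -F "file=@report.pdf" "https://api.pcloud.com/uploadfile?path=/Documents&nopartial=1&auth=ACCESS_TOKEN"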
Example
{
  "result": 0,
  "fileids": [ 1729212 ],
  "metadata": [
    {
      "ismine": true,
      "id": "f1729212",
      "created": "Wed, 02 Oct 2013 14:29:11 +0000",
      "modified": "Wed, 02 Oct 2013 14:29:11 +0000",
      "hash": 10681749967730527559,
      "isshared": false,
      "isfolder": false,
      "category": 1,
      "parentfolderid": 0,
      "icon": "image",
      "fileid": 1729212,
      "height": 600,
      "width": 900,
      "path": "\/Simple image.jpg",
      "name": "Simple image.jpg",
      "contenttype": "image\/jpeg",
      "size": 73269,
      "thumb": true
    }
  ]
}
Installation of the Yabi web application under Apache¶
The IUS repository provides a
httpd24u package that unfortunately conflicts with
httpd.
Therefore if you try to install
yabi-admin and you don’t have one of the
httpd packages already installed you will get a conflict error.
The recommended way (in the email announcing httpd24u)
to get around this problem is to install the httpd package first and only after that install yabi-admin:
# yum install httpd mod_ssl # yum install yabi-admin
This will add an Apache conf file to
/etc/httpd/conf.d called
yabiadmin.ccg. Please feel free to read through it and edit if required.
When you are happy with the contents, create a symbolic link for Apache to pick this config up automatically:
# pushd /etc/httpd/conf.d && ln -s yabiadmin.ccg yabiadmin.conf && popd | http://yabi.readthedocs.io/en/latest/installation/apache.html | 2017-07-20T22:50:48 | CC-MAIN-2017-30 | 1500549423512.93 | [] | yabi.readthedocs.io |
By default, the VMkernel scans for LUN 0 to LUN 255 for every target (a total of 256 LUNs). You can modify the Disk.MaxLUN parameter to improve LUN discovery speed.
About this task
The value of the Disk.MaxLUN parameter must be set to the LUN ID that follows the last one you want to discover.
For example, to discover LUNs from 0 through 31, set Disk.MaxLUN to 32. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.hostclient.doc/GUID-E94D8DBB-4759-4E0A-B068-18FA6CF4A7FC.html | 2017-07-20T22:51:05 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.vmware.com |
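If you prefer the command line, the same advanced setting can typically be changed from the ESXi Shell, for example (verify the option path on your host before relying on this sketch):
esxcli system settings advanced set -o /Disk/MaxLUN -i 32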
vCenter Single Sign-On installation displays an error referring to the vCenter Server or the vSphere Web Client.
Problem
vCenter Server and Web Client installers show the error Could not contact Lookup Service. Please check VM_ssoreg.log....
This problem has several causes, including unsynchronized clocks on the host machines, firewall blocking, and services that must be started.
Procedure
- Verify that the clocks on the host machines running vCenter Single Sign-On, vCenter Server, and the Web Client are synchronized.
- View the specific log file found in the error message.
In the message, system temporary folder refers to %TEMP%.
- Within the log file, search for the following messages.
The log file contains output from all installation attempts. Locate the last message that shows Initializing registration provider... | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.security.doc/GUID-B8D60389-AF95-4368-8AB2-D282CBE0C4A9.html | 2017-07-20T22:52:10 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.vmware.com |
Examine how packets change when they pass through a vSphere Network Appliance (DVFilter).
About this task
DVFilters are agents that reside in the stream between a virtual machine adapter and a virtual switch. They intercept packets to protect virtual machines from security attacks and unwanted traffic.
Procedure
- (Optional) To find the name of the DVFilter that you want to monitor, in the ESXi Shell, run the summarize-dvfilter command.
The output of the command contains the fast-path and slow-path agents of the DVFilters that are deployed on the host.
- Run the pktcap-uw utility with the
--dvFilter dvfilter_name argument and with options to monitor packets at a particular point, filter captured packets, and save the result to a file.
pktcap-uw --dvFilter dvfilter_name --capture PreDVFilter|PostDVFilter [filter_options] [--outfile pcap_file_path [--ng]] [--count number_of_packets]
where the square brackets [] enclose optional items of the
pktcap-uw --dvFilter dvfilter_name command and the vertical bars | represent alternative values.
- Use the --capture option to monitor packets before or after the DVFilter intercepts them.
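For example, the following sketch captures the first 100 packets before they are processed by a firewall DVFilter and saves them to a file; the filter name shown is an illustrative value taken from summarize-dvfilter output:
pktcap-uw --dvFilter nic-1000-eth0-vmware-sfw.2 --capture PreDVFilter --outfile /tmp/predvfilter.pcap --count 100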
To use peripheral SCSI devices, such as printers or storage devices, you must add the device to the virtual machine. When you add a SCSI device to a virtual machine, you select the physical device to connect to and the virtual device node.
Before you begin
Required privileges:
About this task
The SCSI device is assigned to the first available virtual device node on the default SCSI controller, for example (0:1). To avoid data congestion, you can add another SCSI controller and assign the SCSI device to a virtual device node on that controller. Only device nodes for the default SCSI controller are available unless you add additional controllers. If the virtual machine does not have a SCSI controller, a controller is added when you add the SCSI device.
For SCSI controller and virtual device node assignments and behavior, see SCSI and SATA Storage Controller Conditions, Limitations, and Compatibility.
Procedure
- Right-click a virtual machine in the inventory and select Edit Settings.
- On the Virtual Hardware tab, select SCSI Device from the New device drop-down menu and click Add.
The SCSI device appears in the Virtual Hardware devices list.
- Expand New SCSI device to change the device options.
- (Optional) From the Virtual Device Node drop-down menu, select the virtual device node.
- Click OK.
Results
The virtual machine can access the device. | https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-CE27B978-30DA-45B9-AB0D-25D0F6F63343.html | 2017-07-20T22:51:03 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.vmware.com |
Release Notes - openSEEK - Version 1.2.0 New Feature [OPSK-827] - Allow installations to disable permission summary pop-up [OPSK-849] - Feature request: Accept sftp links [OPSK-872] - Default license for a project [OPSK-917] - Delete extracted samples when deleting a datafile [OPSK-928] - Interlinking between sample types [OPSK-940] - Generation of excel template for sample type [OPSK-966] - Tags for samples types [OPSK-978] - Sample types only editable by project members [OPSK-995] - Show an overview of data usage for programme/project on admin apge Improvement [OPSK-725] - Handle sample type attribute name clashes [OPSK-795] - Organisation of seed data [OPSK-840] - Description for Sample Type [OPSK-841] - SampleType can_edit? and can_destroy? [OPSK-843] - Provide citation suggestion for assets and snapshots [OPSK-855] - Support for arbitrary URL schemes for remote files [OPSK-918] - Redefine the sample title field [OPSK-921] - update the project browser to use jstree instead of yui [OPSK-924] - Sample dropdown menus aren't bootstrapped [OPSK-937] - Create an enumerator for sample attribute base types [OPSK-951] - Index DOIs in Solr so the related thing is found when searched for [OPSK-960] - Authorization improvements [OPSK-975] - Don't show "Extract Samples" whilst pending [OPSK-976] - Create Sample button on Sample Type [OPSK-977] - Sample Type - pending template generation box [OPSK-1000] - Allow a sample type to link to its own type [OPSK-1001] - Investigate filtering search by project id as params [OPSK-1004] - Sample / Sample type should be favouritable [OPSK-1007] - Add URI attribute type [OPSK-1017] - Model.find_each preferred over Model.all.each [OPSK-1028] - Docker - split off delayedjob for docker compose [OPSK-1031] - SampleDataExtractionJob should just be able to take the data file id [OPSK-1032] - Provide some descriptive text for attribute types [OPSK-1047] - Samples extracted from a template should be none editable [OPSK-1055] - Sample extraction should recognise sheet with 'sample' in the name [OPSK-1056] - Projects selector broken on "Use spreadsheet template" tab of new sample type form Story [OPSK-622] - Deprecation of old sample data & models [OPSK-628] - Custom Sample Type Definitions [OPSK-636] - Assay updated for new Sample framework [OPSK-712] - Create Sample Type from Spreadsheet [OPSK-713] - Populate samples from template spreadsheet Task [OPSK-691] - Associating samples with projects [OPSK-692] - Sample authorization [OPSK-693] - Sample resource list items, and related to [OPSK-697] - Indexing of Samples for search [OPSK-706] - Prevent sample types being changed after they have been used [OPSK-708] - Sample#title as an attribute [OPSK-730] - Samples framework work loose ends [OPSK-745] - Ability to turn samples on and off [OPSK-781] - SEEK Strain ID attribute type [OPSK-782] - Boolean sample attribute type [OPSK-783] - Add sample attribute types to seed data [OPSK-784] - Simpler way of handling accessor_name [OPSK-785] - Sample code reviewing and cleaning and documenting [OPSK-787] - Extract samples from template as a background job [OPSK-808] - Remove the Deprecated sample models [OPSK-821] - Add attribute type seed data to upgrade task [OPSK-830] - Upgrade task to add the sample attribute seed data [OPSK-856] - SEEK4Science website needs new logo, and favicon [OPSK-896] - Also show the number of all assets [OPSK-915] - Resurrect Project Browser / Organiser [OPSK-947] - Document docker [OPSK-971] - Check if old samples related configs are needed [OPSK-988] - 
Sample type selector for SEEK Sample, group by project [OPSK-989] - Improve resource list item for sample type [OPSK-990] - highlight title field in sample type view [OPSK-991] - Remove text pointing to biostars forum [OPSK-1014] - Sample type needs a better icon [OPSK-1019] - Rake task to get the old samples & cell cultures [OPSK-1027] - SEEK 1.2 release notes [OPSK-1045] - Remove index view for samples [OPSK-1046] - Extracted samples should always inheritic permissions from data file [OPSK-1065] - Selected Strain not set when editing a sample [OPSK-1066] - Prevent a new data file version ability being available if samples have been extracted Sub-task [OPSK-623] - rename Sample -> DeprecatedSamples [OPSK-624] - rename Specimen -> DeprecatedSpecimens [OPSK-625] - rename SampleAsset -> DeprecatedSampleAssets [OPSK-626] - rename SopSpecimen -> DeprecatedSpecimenSop [OPSK-627] - rename Treatment -> DeprecatedTreatments [OPSK-629] - Create bare Sample model [OPSK-630] - Create SampleType model [OPSK-631] - Ability to define sample type for a sample and store values [OPSK-632] - A pre-defined list of supported attribute types [OPSK-633] - Page for defining sample types [OPSK-634] - Simple page for adding a new Sample [OPSK-635] - Add support for required fields [OPSK-637] - Define 0 or more incoming samples [OPSK-638] - Define 0 or more “outgoing” samples [OPSK-641] - Remove Biosamples browser [OPSK-642] - Remove biosamples_enabled config [OPSK-644] - Remove Sample & Specimen views and controllers [OPSK-663] - Remove Biosample search fields [OPSK-685] - Support units for sample attribute [OPSK-686] - Support for controlled vocabulary sample attribute type [OPSK-705] - Index page for sample types [OPSK-714] - Ability to upload a spreadsheet and associate with a new sample type [OPSK-715] - Defined sample type attributes from columns in spreadsheet [OPSK-716] - Submit new sample type for template [OPSK-717] - After uploading data file, identify whether it matches a sample type [OPSK-718] - Provide option to extract samples [OPSK-719] - Handle problematic sample entries [OPSK-720] - Ability to download template associated with a sample type [OPSK-724] - SampleAttributes not destroyed when SampleType is [OPSK-731] - Samples in Browse Menu [OPSK-732] - Add Sample Type to Create Menu [OPSK-733] - Creating sample type manually or from template in separate tabs [OPSK-734] - Sample Type is missing required * next to title [OPSK-735] - Hide Add Attribute button when defining from a template [OPSK-736] - Javascript for title constraints in sample type form [OPSK-737] - Put extract samples button in proper menu [OPSK-738] - Add confirmation step, and select sample type if > 1, when extracting samples [OPSK-739] - Summary of extracted samples after extraction [OPSK-740] - Show samples linked to Data File [OPSK-741] - Show originating data file for sample [OPSK-742] - Add a attribute type for Date (in addition to date time) [OPSK-744] - Dependant destroy for sampe_type#content_blob [OPSK-746] - Authorise the sample extraction action [OPSK-748] - Integers should be accepted as a valid Float [OPSK-749] - Show page for Sample Type [OPSK-750] - Searchable sample types [OPSK-751] - Better error reporting for uploading a none valid template [OPSK-752] - Default attribute type to String when creating sample type [OPSK-753] - Weblink types should be shown as a link [OPSK-754] - Sample type link from sample page can now go to sample type show page [OPSK-755] - Table view for samples under a data file/sample type 
[OPSK-780] - Update ISA graph to show inputs and outputs rather than just links [OPSK-929] - Define Sample Type attribute type [OPSK-930] - Allow selection of Sample Type attribute type, along with sample type in the form [OPSK-931] - Allow selection of sample when creating a sample linked to a sample type [OPSK-932] - Prevent a sample type being deleted when used as an interlinked sample type [OPSK-933] - Handle samples defined in a spreadsheet [OPSK-934] - Display sample as a link when showing parent sample [OPSK-935] - Include interlinked sample in related items list [OPSK-939] - Show the sample type in attributes for SEEK Sample [OPSK-941] - Small java apache poi library to generate spreadsheet [OPSK-942] - Generate spreadsheet include CV data validations [OPSK-943] - Include sample and strain list as a dropdown [OPSK-944] - Determine when the template needs regenerating [OPSK-945] - Bundle up as a ruby gem and integrate with SEEK [OPSK-946] - Concider whether the template could be generated as a back ground process [OPSK-961] - More efficient auth lookup refresh [OPSK-962] - Re-structure "is_authorized?" method [OPSK-968] - Cluster sample types by tag Bug [OPSK-747] - Can extract samples multiple times [OPSK-758] - "Remove" sample attribute broken [OPSK-792] - Replace tooltip plugin with more modern one [OPSK-809] - Sample extraction fails if no matching samples [OPSK-819] - Title of sample not linked for source datafile [OPSK-822] - Floats ending in 0 are not accepted in samples extraction [OPSK-825] - References to Specimens still appear in assets_creators [OPSK-826] - Samples and SampleType are not correctly using the grouped pagination [OPSK-829] - SampleTypes must be paginated [OPSK-833] - Cytoscape visualisation [OPSK-851] - Split up contributors and creators [OPSK-853] - XSS on sample type show page [OPSK-862] - First user is no longer added to the default project and institution [OPSK-882] - Check and Fix Gemfile env seperation [OPSK-886] - there is an extra 'add' icon at admin avatar, when it is the contributor of an asset [OPSK-887] - New study but on edit page [OPSK-888] - db:setup shows the entries seeded in the log, but no entries in database [OPSK-894] - file_path points to the filestore/tmp/image_assets/ [OPSK-895] - when 'view content', it is counted as download, but not when 'explore' [OPSK-912] - On the sandboxed SEEK I can't create a strain (also problem on test SEEK) [OPSK-914] - Error when clicking link to a scale [OPSK-926] - Samples creators always 'None' in resource list item [OPSK-927] - Samples are not included in person related items [OPSK-938] - Asset Creator not behaving correctly for sample [OPSK-949] - Problem with creators box [OPSK-957] - XSS in JStree [OPSK-964] - inconsistent writing of SOP/sop [OPSK-969] - Associate sample type with a project [OPSK-970] - Document samples [OPSK-992] - Match all tags option for sample type selection not working as expected [OPSK-998] - auth lookup task seems to be not triggered when removing a permission [OPSK-999] - search turned off after an upgrade 1.1 -> 1.2 [OPSK-1002] - Update message for why you can't delete a sample type [OPSK-1005] - Sample type edit - some drop downs become incorrectly visible [OPSK-1009] - Error generating sample template with a CV for apples [OPSK-1010] - Error when trying to change type from CV [OPSK-1011] - Sample Text attribute didn't like new line [OPSK-1015] - should only be able to delete sample type if a member of the project [OPSK-1016] - Limit sample type creation and editing to 
project administrators [OPSK-1042] - error when trying to create a sample type with project missing [OPSK-1043] - When Strain is not required, there is not a blank default option [OPSK-1044] - Strain linked to samples doesn't show samples as related items [OPSK-1050] - content_blob_id on SampleType is unused [OPSK-1071] - The Sample Type administer menu button always appears [OPSK-1072] - Error when creating sample with the title set to a SEEK entity [OPSK-1079] - Not clear how to edit controlled vocabulary | http://docs.seek4science.org/tech/releases/release-notes-1.2.0.html | 2017-07-20T22:35:53 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.seek4science.org |
Table of Contents
In Xubuntu, you don't need to download and install packages separately. Instead, repositories contain sets of packages. These repositories are then accessed with package managers in order to add, remove or update the packages.
Xubuntu comes with two package managers installed:
Gnome Software, a simple graphical user interface to install new software.
apt-get, a command-line tool that can be used for advanced package management. For more information on apt-get, see the Debian apt manual.
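For reference, a typical command-line installation first refreshes the package lists and then installs the package (the package name is just an example):
sudo apt-get update
sudo apt-get install gimp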
You can launch Gnome Software from → .
Search for an application or select a category to find an application you want to install
From the application page, click Install
You will be asked to enter your password; once you do that, installation will begin
A shortcut to your application will be added to the Applications menu
It is possible to add extra repositories, such as those provided by third parties. To enable more software repositories:
Open → → or → → and go to the Other Software tab
Press the Add button to add a new repository.
Enter the APT line for the extra repository. This is available from the website of the repository in the majority of cases and it should look something like the following:
deb etch main
Confirm the new source and then close the dialog to save your changes.
You will be notified that the information about available software is out-of-date and needs to be reloaded; confirm the reload. Some repositories are signed with a GPG key, which you can usually download from the repository's website. Once you have downloaded the GPG key, import the key by selecting the Authentication tab, clicking the button for importing a key file, and then selecting the GPG key to be imported.
Most of the software available for Xubuntu is free, open-source software. This software is free for anyone to install and use, and people can modify the software and redistribute it if they like. Xubuntu is built from this type of software.
Non-free software is software that is not freely redistributable or modifiable. This makes it difficult for the Xubuntu developers to improve the software and correct problems, so it is normally recommended that you use free software instead.
Restricted software is software that has restrictions on its use, preventing it from being classed as free software. Non-free software is a type of restricted software, where the restrictions are due to the software having a non-free license. Other reasons for software being classed as restricted include legal issues (use of some types of software is illegal in some countries) and patent issues (some software requires a patent license to be used legally).
In some cases, restricted software is the only option. Such cases include software for the playback of certain audio and video formats, some fonts and certain video card drivers.
You should be warned by the package manager when you try to install restricted software. If the restricted software cannot be used legally in your country then there is little you can do; you should not install the software. If the software is restricted simply because it is non-free, you may choose to use it (for example, in the case of graphics card drivers). Be aware that most restricted software is not supported in Xubuntu and problems with such software often cannot be corrected by Xubuntu developers.
To add a disc as a software source for your system:
Insert a disc which contains packages; e.g., the Xubuntu installation disc which comes with a limited selection of packages
Open → → and go to the Other Software tab
- Press the Add Volume button; you will be prompted for your password
After adding the disc to the software sources, you will be able to install packages from the disc.
If you have less than optimal Internet access, apt-offline allows you to use another computer with better access to download packages and check for package updates like security fixes. All you need is time, patience, and a portable USB storage device. A usage example to learn more about this can be found in Chapter 10, Offline Package Management.
You can change the frequency of the check and the way in which updates are handled. When Software Updater runs and presents you with its dialog, there is a Settings button at the bottom. Pressing this will open the Software Sources dialog at the Updates tab. Alternatively, you can access the settings dialog by going to → → and opening the Updates tab.
The following settings can be changed from this dialog:
Important security updates - Updates that fix critical security flaws are made available through this source. It is recommended that all users leave this source enabled (it should be enabled by default).
Recommended updates - Updates that fix serious software problems (which are not security flaws) are made available through this source. Most users will want to leave this source enabled as common and annoying problems are often fixed with these updates.
Pre-released updates - Updates that are currently being tested before being released to everyone are provided through this source. If you would like to help test new updates and get fixes for problems more quickly, enable this source; be aware that these updates may not yet be well tested, so occasional problems are possible.
Unsupported updates - When new versions of popular software are released, they are sometimes "backported" to an older version of Xubuntu so that users can benefit from new features and fixes for problems. These backports are unsupported, may cause problems when installed and should only be used by people who are in a real need of a new version of a software package that they know has been backported.
This section of the Software Updater deals with the way you wish future versions to be given to you. You have three options:
For any new version - You will get notifications of all new releases, once in 6 months
For long-term support versions - You will get notifications of new Long-term Support (LTS) releases, once in 2 years
Never - You will not get notifications of new releases
Package updates can be scheduled from the desktop and you can change how and when the system updates itself.
Frequency of check - Allows you to schedule when to check for updates
Checking and installing updates automatically - Allows you to define whether the system downloads and installs updates without confirmation, or downloads all updates in the background but waits for you to manually install them
Displaying notifications about security updates - Allows you to define when will the system notify you about available security updates | https://docs.xubuntu.org/1610/user/C/managing-applications.html | 2017-07-20T22:44:56 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.xubuntu.org |
For ESXi hosts, updates are all-inclusive. The most recent update contains the patches from all previous releases.
The ESXi image on the host maintains two copies. The first copy is in the active boot and the second one is in the standby boot. When you patch an ESXi host, Update Manager creates a new image based on the content of the active boot and the content of the patch. The new ESXi image is then located in the standby boot and Update Manager designates the active boot as the standby boot and reboots the host. When the ESXi host reboots, the active boot contains the patched image and the standby boot contains the previous version of the ESXi host image.
When you upgrade an ESXi host, Update Manager replaces the backup image of the host with the new image and replaces the active boot and the standby boot. During the upgrade, the layout of the disk hosting the boots changes. The total disk space for an ESXi host remains 1GB, but the disk partition layout within that 1GB disk space changes to accommodate the new size of the boots where the ESXi 5.5 images will be stored.
For purposes of rollback, the term update refers to all ESXi patches, updates, and upgrades. Each time you update an ESXi host, a copy of the previous ESXi build is saved on your host.
If an update fails and the ESXi 5.5 host cannot boot from the new build, the host reverts to booting from the original boot build. ESXi permits only one level of rollback. Only one previous build can be saved at a time. In effect, each ESXi 5.5 host stores up to two builds, one boot build and one standby build.
Remediation of ESXi 4.0, 4.1, 5.0, and 5.1 hosts to their respective ESXi update releases is a patching process, while the remediation from version 4.x, 5.0 and 5.1 to 5.5 is considered an upgrade. | https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.update_manager.doc/GUID-1C87F994-F9D0-4DD1-AAA7-EBD9188EBFA1.html | 2017-07-20T22:43:08 | CC-MAIN-2017-30 | 1500549423512.93 | [] | docs.vmware.com |
If you have Git repositories at Codeplane you can migrate your projects to Codenvy. Here’s how you can setup SSH connection to your Codeplane workspace and clone, push, pull and fetch repositories.
SSH Key Configuration
First off, you should configure SSH keys:
- go to Window > Preferences > SSH Keys and press Generate Key button. Enter hostname as:
codeplane.com
(no www or https!) and click OK
- the key will appear in the list of SSH keys. Click View and copy the key to clipboard. Make sure you’ve selected and copied the entire key text
- go to your Codeplane account, SSH Public Keys section
- add a new key
Clone From Codeplane
The easiest way to migrate your projects to Codenvy is to clone Codeplane repository with the project’s source code:
- go to File > Import Project > Import From Location
- copy and paste the repository Git URL
- modify project’s name, if appropriate, and click Import
Push to Codeplane
To push to Codeplane, you first need to add a remote repository in Codenvy at Git> Remotes > Add (unless it is already there after cloning). Enter the repository name and its SSH Git URL which should look like:
[email protected]:test/new.git
Having added a remote repository, you can push to Codeplane at Git > Remote > Push. The repository you have just added should appear in the list of remote repositories. For more info see Remote Repositories. | http://docs.codenvy.com/user/integration-with-codeplane/ | 2014-12-18T04:16:34 | CC-MAIN-2014-52 | 1418802765610.7 | [] | docs.codenvy.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation..
Why are we retiring API BaaS?
When Apigee launched Apigee API BaaS (based on the open source Apache Usergrid) in 2012, it provided solutions to many challenges for mobile app developers. Now that Apigee is part of Google Cloud Platform (GCP), you can take advantage of a variety of best-in-class, widely adopted technologies for each of the use cases that Apigee API BaaS addressed.
What are the GCP alternatives to API BaaS?
GCP has a number of database solutions for customers to consider, which provide a range of solutions that go well beyond the capabilities of Apigee API BaaS.
Google Cloud Platform Cloud SQL Postgres is an alternative database solution that offers the most similar functionality to API BaaS for JSON blobs, and it also has a relational database capability. If you are considering retaining.
You should determine if one or more such solutions meet your needs.
What other options are available for API BaaS users?
Our partners, Intelliswift Software and EPAM, are available for both Private Cloud and Public Cloud customers. Their services are independent of Google, and would need to work with them directly. Intelliswift plans to provide Intelliswift Usergrid, their hosted version of Apache Usergrid (similar to API BaaS) for customers who wish to stay with a similar solution to API BaaS. For more information or to contact these companies, please email Intelliswift at [email protected] and EPAM at [email protected].
Is the source code of API BaaS 4.18.01 shared with Private Cloud customers?
No, Apigee will not open source 4.18.01 source code or Apigee BaaS portal other than what has already been done as a part of Apache Usergrid. | https://docs.apigee.com/release/notes/api-baas-eol?hl=es | 2022-06-25T05:16:07 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.apigee.com |
Creating a strong password
A strong password helps prevent someone else from accessing your information. Weak passwords, such as 1234, might be easy to remember, but they're easier to guess as well.
To create a strong password, avoid using a password with the following characteristics (in order of importance):
The key is to create a strong password that you can remember easily. Consider the following tips: | https://docs.blackberry.com/en/apps-for-android/password-keeper/latest/help/hjo1443028280934 | 2022-06-25T05:08:56 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.blackberry.com |
SwiftUI App Lifecycle
Initializing the Purchases SDK in SwiftUI
With the next iteration of SwiftUI announced at WWDC 2020, entire apps can be created with just a simple struct conforming to the new
App protocol, like so:
@main struct SampleApp: App { var body: some Scene { WindowGroup { ContentView() } } }
Without traditional application delegate methods commonly used to initialize the SDK, it can seem a little confusing as to where the SDK should be initialized.
Option 1: App Init
For basic initialization without delegate methods, you can implement the App
init method:
import Purchases @main struct SampleApp: App { init() { Purchases.configure(withAPIKey: "api_key") } var body: some Scene { WindowGroup { ContentView() } } }
Option 2: App Delegate
Another method of initialization is to use the new
@UIApplicationDelegateAdaptor property wrapper to configure the Purchases SDK. The
@UIApplicationDelegateAdaptor gives the option of using UIApplicationDelegate methods that are traditionally used in UIKit applications.
Creating a Delegate
Begin by creating a delegate class and initializing the Purchases SDK like the following:
import Purchases class AppDelegate: UIResponder, UIApplicationDelegate { func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey : Any]? = nil) -> Bool { Purchases.debugLogsEnabled = true Purchases.configure(withAPIKey: "api_key") return true } }
Attaching the Delegate
As previously mentioned, the new
@UIApplicationDelegateAdaptor property attaches the delegate to the new SwiftUI App struct. Add the property wrapper like the following:
@main struct SampleApp: App { @UIApplicationDelegateAdaptor(AppDelegate.self) private var appDelegate var body: some Scene { WindowGroup { ContentView() } } }
Build and run the app, and Purchases will be initialized on app launch.
For more information on configuring the SDK, check out the Configuring SDK guide.
Updated 5 months ago | https://docs.revenuecat.com/docs/swiftui-app-lifecycle | 2022-06-25T04:28:51 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.revenuecat.com |
Export Citations
Export the contents of a dataset to your favorite reference manager.
To start this job, select the question: “Can I export a set of articles to my favorite format or reference manager?”
Currently, this job supports exporting to the following formats:
- MARC 21 transmission format
- MARCXML
- MARC-JSON (draft)
- MODS
- RDF/XML (using Dublin Core Grammar)
- RDF/N3 (using Dublin Core Grammar)
- BibTeX
- EndNote (ENW format)
- Reference Manager (RIS format)
Options
The only option for this analysis is the export format desired. | https://docs.sciveyor.com/manual/analyses/export_citations/ | 2022-06-25T04:45:43 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.sciveyor.com |
For creating your own form, you need the extension "CMS extensions". This is part of the Professional Edition of Shopware 6.
To add your individual form to your Shopping Experience world, click on Contents> Shopping Experience worlds and select the experience world to which the form should be added. Alternatively, you can also create a completely new adventure world / layout.
Click on the + symbol (1) on the right-hand side to add a new block. Now select the block category form (2) in the dropdown. In addition to the default form, e.g. for contact, you can now also drag your own form into your world of experience. You can drag and drop the form (3) into the Shopping Experience layout.
A pop-up window opens in which you can choose whether you want to use a template you have previously created or create a new form.
When you create a new form or edit an existing form, the form Settings window opens with the options and field tabs. In the options tab you can configure the basic settings for the form. out by the user. For this you can create groups, which then.
For the field type Text or E-mail, you can specify a placeholder text to be displayed if the field has not yet been filled in.....
The selection field is a checkbox that the user can activate or deactivate. In the default value you can specify whether the field should already be activated by default or not..
The text area is used to enter a longer text. In addition to the placeholder text, you can also specify how many lines the user may use and whether the user may change the size of the text area. | https://docs.shopware.com/en/shopware-6-en/tutorials-and-faq/create-individual-forms?category=shopware-6-en/tutorials-and-faq | 2022-06-25T05:04:21 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.shopware.com |
The Platforms tab is where you can authorize, revoke, or edit the authorization of platforms within the Cisco Cloudlock environment. The available platforms with an active Cloudlock license are listed as well as their statuses and what actions are available. Platforms with a Not authorized status have been authorized in the past, but the credentials have been changed and the platform needs reauthorization.
Table of Contents
There is a checkbox that allows you to treat all subdomains of the authorized platforms as internal.
Authorize a Platform
- Choose the platform to authorize and select Authorize.
- Select Authorize in the pop-up window. When you are redirected to the platform's login page, sign in with the appropriate authorizing credentials.
Edit a Platform
Platforms with the option to Edit have additional settings for their Monitoring Scope. The Monitoring Scope enables you to decide if the entire domain or only a selection of users, groups, or OUs should be monitored. You can manage which domains and subdomains are monitored as well. For example, with Office 365 you can choose to monitor all users, a select number of users, or the majority of users with some exceptions. Domains from O365 are imported daily, but you also have the option to import a list of domains to monitor as internal.
Revoke a Platform
- To discontinue the monitoring of your environment through Cloudlock, select Revoke Authorization in the Actions column.
- An option appears to delete all incidents in Cloudlock related to that platform when revocation is performed. To complete the process, select Revoke Authorization.
Updated 2 months ago | https://docs.umbrella.com/cloudlock-documentation/docs/platforms | 2022-06-25T04:49:35 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.umbrella.com |
puA1600-60uc#
Specifications#
General Specifications#
Spectral Response#
The spectral response curve excludes lens characteristics, light source characteristics, and IR cut filter characteristics.
IR Cut Filter#
The camera is equipped with an IR cut filter. The filter is mounted in a filter holder inside the lens mount.
The IR cut filter is manufactured by Fujita Electric Works Ltd. and has the following spectral characteristics:
Mechanical Specifications#
Camera Dimensions and Mounting Points#
→ Download the CAD/technical drawing for your Basler Camera.
Maximum Allowed Lens Intrusion#
→ See Maximum Allowed Lens Intrusion.
Stress Test Results#
→ See Stress Test Results.
Requirements#
Environmental Requirements#
Temperature and Humidity#.
Camera Power#
You must supply camera power that complies with the Universal Serial Bus 3.0 specification.
The camera's nominal operating voltage is 5 VDC, effective on the camera's connector.
Cable Requirements#
- Use a high-quality USB 3.0 cable with a USB 3.0 Micro-B plug.
Do not use cables with a USB 1.x/2.0 Micro-B cable plug, even if you are connecting the camera to a USB 2.0 port. Otherwise, the camera may not work.
USB 2.0 Compatibility#
All Basler pulse USB 3.0 cameras puA#
USB 3.0 Connector#
The camera's USB 3.0 connector is a standard Micro-B USB 3.0 connector. It provides a USB 3.0 connection to supply power to the camera and to transmit video data and control signals.
Connection assignments and numbering adhere to the Universal Serial Bus 3.0 standard. The recommended mating connector is a Micro-B USB 3.0 plug.
Tripod Socket#
All pulse cameras are equipped with a tripod socket to attach the camera to a tripod. The socket has a standard 1/4-20 UNC thread.
Precautions#
→ See Safety Instructions (dart Cameras).
Installation#
→ See Camera Installation. | https://docs.baslerweb.com/pua1600-60uc | 2022-06-25T04:09:52 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['images/image-pulse-usb-mono-color.jpg',
'Basler pulse USB 3.0 Camera'], dtype=object)] | docs.baslerweb.com |
The UI of DriveWorks Live can be used to customize the following:
Additionally the Integration Theme hosts the DriveWorks Live Web API enabling you to embed your DriveWorks Implementation in an existing web site or application.
There are three main themes to choose from:
Please refer to: To Select a Theme for information on doing this.
Please refer to: To Change the Skin used in the Web Theme for information on doing this.
Please refer to: Modules for information on doing this.
Project image and description used for your projects can be changed by following the instructions in the topic Project Image, Name and Description.
Please see Integration Theme for information on using this theme.
With some knowledge of html and xml, more customization can be done. See Further Customization for more information. | https://docs.driveworkspro.com/Topic/LiveCustomize1 | 2022-06-25T05:39:44 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.driveworkspro.com |
Warning
This document is for an old release of Galaxy. You can alternatively view this page in the latest release if it exists or view the top of the latest release's documentation.
Source code for galaxy.datatypes.util.generic_util
from galaxy.util import commands[docs]def count_special_lines(word, filename, invert=False): """ searching for special 'words' using the grep tool grep is used to speed up the searching and counting The number of hits is returned. """ cmd = ["grep", "-c", "-E"] if invert: cmd.append('-v') cmd.extend([word, filename]) try: out = commands.execute(cmd) except commands.CommandLineException: return 0 return int(out) | https://docs.galaxyproject.org/en/release_21.01/_modules/galaxy/datatypes/util/generic_util.html | 2022-06-25T04:35:40 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.galaxyproject.org |
influx task run retry
This page documents an earlier version of InfluxDB. InfluxDB v2.3 is the latest stable version. View this page in the v2.3 documentation.
The
influx task run retry command retries to run a task in InfluxDB.
Usage
influx task run retry [flags]
Flags
Examples
Authentication credentials
The examples below assume your InfluxDB host, organization, and token are
provided by the active
influx CLI configuration.
If you do not have a CLI configuration set up, use the appropriate flags to provide these required credentials.
Required permissions
Use an Operator or All-Access token to retry tasks.
Retry a task run
influx task run retry \ --task-id 0Xx0oox00XXoxxoo1 \ --run-id ox0Xx0ooxx00XX. | https://test2.docs.influxdata.com/influxdb/v2.0/reference/cli/influx/task/run/retry/ | 2022-06-25T04:43:02 | CC-MAIN-2022-27 | 1656103034170.1 | [] | test2.docs.influxdata.com |
Get Resources and Statuses from FileMaker
Overview
By default, DayBack stores resources and statuses with its own settings. You can make new resources and statuses, make new folders, and change their sort order as shown here:
But you may already have FileMaker tables for your resources and statuses, so DayBack offers a way to pull these values in from FileMaker. You can pull all your values in from FileMaker or continue to have some resources and statuses stored in DayBack's settings. These scripts will let you base your filters and filter folders on your own FileMaker scripts, so you can do things like...
- Load only the territories (resource folders) and direct reports (resources) for each sales manager as they log in;
- Keep a complex list of equipment (resources) up to date in FileMaker instead of managing it by hand in DayBack;
- User the same value list of project types (statuses) between DayBack and FileMaker.
Using FileMaker Tables for Resouces and/or Statuses
DayBack ships with two examples to show you how to do this.
Status Example: Send JSON to DayBack
Take a look at the FileMaker script "Sample Statuses - DayBack" that comes with DayBack. This example builds a list of statuses, status colors, and folders in JSON. Use the JSON in this script as a guide for how you should format your own JSON; either hard-code it as we're doing or loop through your own records (or your own value list) to build a similar JSON object.
If you need help writing a script that builds JSON like this, we can write that for you as part of an implementation package.
To run this example, you'll head to admin settings and create an app action with an "On Statuses Fetched" trigger like this:
Here's the code for that action:
// Trigger - On Statuses Fetched // Prevent Default Action - Yes // Do not enable this action for "Shares", only for App. var filterItems = []; var item; dbk.performFileMakerScript('Sample Statuses - DayBack', null, function(result) { if (result && result.payload) { for (var i = 0; i < result.payload.length; i++) { item = result.payload[i]; dbk.mutateFilterField(item) filterItems.push(item); } } seedcodeCalendar.init('statuses', filterItems); action.callbacks.confirm(); });
(This code uses the dbk.performFileMakerScript() function with lets you call any FileMaker script from within DayBack. It takes the script name as its first parameter, then any script parameters, then the name of any callback function you'd like to run.)
Note that this action is set to replace any statuses you have with the contents of the FileMaker script. That's done in the first line that initializes filterItems and in line 11, the seedcodeCalendar.init line. The Resources example below appends the contents of the FileMaker script to any resources already there and you can see those two lines are different.
To use the code above for resources, change two things:
- the name of the FileMaker script in line 3. Remember that this script needs to produce JSON in the form used in "Sample Statuses - DayBack".
- replace "statuses" with "resources" in line 11.
Resources Example: Send a Simple List to DayBack
This example is very similar except that it just takes a list of resources as its starting point. This means you don't have to make any JSON on your end, but it also means that you can't create folders for your resources. You'd need to pass in a JSON object like that in the statuses example (and use the status-example style script) if you want to create folders automatically.
Take a look at the FileMaker script "Sample Resources - DayBack" that comes with DayBack. You'll see that in line 12 it just assembles a list of resource names. You could do this in a loop, using ListOf(), or using ValueListItems().
You'll run this script by creating an app action with an "On Resources Fetched" trigger like this:
Here's the code for that action:
// Trigger - On Resources Fetched // Prevent Default Action - Yes // Do not enable this action for "Shares", only for App var filterItems = seedcodeCalendar.get('resources'); var item; dbk.performFileMakerScript('Sample Resources - DayBack', null, function(result) { if (result && result.payload) { for (var i = 0; i < result.payload.length; i++) { item = dbk.nameToFilterItem(result.payload[i]) filterItems.push(item); } } action.callbacks.confirm(); });
Note since this action loads filterItems with the current content of 'resources', it appends the contents of the FileMaker script to any resources already in DayBack.
To use the code above for statuses, change two things:
- the name of the FileMaker script in line 2. Remember that this script needs to produce a simple list of names in the form used in "Sample Resources - DayBack".
- replace "resources" with "statuses" in line 1.
Resources Example: Build resources from a table as JSON including folders, sort, selected status, and tags.
This example builds your resource filter objects in JSON format from a table in your file. It includes the folder structure, sort order, whether or not the status is selected, and tags.
You'll need to download the sample file here for this one: DayBack Calendar JSON Filters
There is a Resources table in this file that contains some sample filter values. You can import this table into your file to use them for your own filters.
Then, copy over the FileMaker script "Sample Resources JSON - DayBack" into your file. This is the script that loops through the Resources table and builds the JSON data to send to FileMaker
You'll run this script by creating an app action with an "On Resources Fetched" trigger. You can download the code for this action here and paste it into your app action in DayBack: filtersJSON.js
There's also a sample Statuses table and script called "Sample Statuses JSON- DayBack" that can be used for Statuses. Just change the following in the app action to use it for Statuses:
- Point inputs.scriptName to your script name on line 30.
- Replace 'resources' with 'statuses' in line 33.
Updating the resource list after the calendar has already loaded
If you'd like to update the list of resources in the sidebar after the calendar has already loaded, for example, in an "After View Changed" event action, you'll just need to add one line right after actionCallbacks.confirm(); in the example action:
dbk.resetResources();
This function will reset and refresh the resource list based on the values you've just provided, without needing to refresh the calendar.
There are some cases where you might want to update your resource or status list based on the date
// Trigger - On Events Rendered // Prevent Default Action - No //Update Resources on Date Change var filterItems = []; var item; var currentView = seedcodeCalendar.get('view'); var currentDate = seedcodeCalendar.get('date'); var resourceRefresh = seedcodeCalendar.get('resourceRefresh'); if (resourceRefresh){ seedcodeCalendar.init('resourceRefresh'); } else{ seedcodeCalendar.init('resourceRefresh', true); dbk.performFileMakerScript('Sample Resources - DayBack', {view: currentView, date: currentDate}, function(result) { if (result && result.payload) { for (var i = 0; i < result.payload.length; i++) { item = result.payload[i]; dbk.mutateFilterField(item) filterItems.push(item); } seedcodeCalendar.init('resources', filterItems); dbk.resetResources(); } }); }
Considerations When Making Public Shares
If you make publicly shared bookmarks that include resources or statues pulled from FileMaker, remember that your users won't have access to FileMaker and won't be able to pull statues and resources from your tables. Your bookmark will work, but recipients won't be able to filter it further because they'll see DayBack's stock resources instead of your own. We suggest locking the sidebar for shares in this case or removing the resources section with CSS.
Editing Resources and Statuses Created From Actions
Any filter items you create from actions like this are not stored in DayBack's settings and thus aren't editable in DayBack. You'll need to edit them, changing their color and sort order, in FileMaker in order to see changes in DayBack.
Sorting Resources or Statuses before they are Displayed
Since you can't edit dynamically-created items in DayBack, you can't sort them either without add a separate app action, or additional JavaScript code to sort the list after it is loaded. You can, however, provide a sort order in the JSON you send to DayBack. Add a new attribute "sort" in the form "sort: 2" (without the quotes). Note that if you pass in a sort for any one item you'll want to include a sort attribute for every item, including any folders.
Sorting Resources after they have been Loaded
If you do not specify a "sort" JSON attribute, DayBack will simply sort the resources in the order in which they come back in the JSON. If you prefer to sort them yourself in JavaScript, you can add the following After Events Rendered action, to add a resources.sort() call that re-sorts the list. In the following example, the list will be sorted alphabetically in ascending order:
// Load existing resources var resources = seedcodeCalendar.get("resources"); // Set a variable that tracks that we only run this action only the // first time the Calendar's events are displayed var resourceRefresh = seedcodeCalendar.get('resourceRefresh'); if (resourceRefresh){ seedcodeCalendar.init('resourceRefresh'); } else{ seedcodeCalendar.init('resourceRefresh', true); // Sort resources in ascending alphabetical order resources.sort((a,b) => a.name.localeCompare(b.name)); // Set the resources seedcodeCalendar.init('resources', resources); // Tell DayBack that the resources have been reset dbk.resetResources(); }
Locking Yourself Out with Custom Actions
If you write some really bad JavaScript, you can write an action that prevents DayBack from starting. You'll just see the blue loading bars run across the screen and you'll never get to the calendar. If that happens, close your FileMaker file (or navigate to another layout) and then return to DayBack while holding down the shift key and moving the mouse back and forth. That will cause DayBack to bypass any app-actions you've written and take you right to the settings screen where you can correct your action or turn it off.
(Turn your action "off" by unchecking the box next to "app" below the action.)
Sorting Resources in Salesforce
If you are looking for documentation on how to retrieve and sort resources in Salesforce, check out our documentation on Dynamic Resources and Statuses for Salesforce. | https://docs.dayback.com/article/170-get-resources-and-statuses-from-filemaker | 2022-06-25T04:49:23 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.dayback.com |
This document describes the different kinds of rules that apply to parameters that can be captured from drawings.
The file name rule is used to determine the name of the driven drawing.
The relative path rule controls the folder that the drawing is created in. The folder is specified relative to the "Results" folder which is in the same folder as the project file.
Layer state rules are used to control the visibility of a layer.
Sheet state rules are used to delete or rename sheets.
Sheet numerator and denominator rules are used to control the scale of the sheet.
View state rules are used to delete or change the configuration of a view.
View numerator and denominator rules are used to control the scale of the view.
View left and top rules are used to control the position of the view relative to the bottom, left of the sheet.
Break.
If the breakline is not required forcing the rule to result in "Delete" will remove the breakline from the drawing.
For more accurate control of breakline positions we recommend creating planes in the model from which the breaklines can be dimensioned from. Capture the plane distance and control the breakline position from the plane. Set the distance from the plane to the breakline to a minimum nominal value.
Custom property rules control the text of a custom property. For more information about special custom properties used for driving colors, materials, and textures, see Custom Properties.
Various types of annotations can be controlled by DriveWorks. These include:
For more information on these please refer to the See Also links below. | https://docs.driveworkspro.com/Topic/DrawingRulesGeneral | 2022-06-25T04:41:14 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.driveworkspro.com |
The Amazon Chime SDK identity, meetings, and messaging APIs are now published on the new Amazon Chime SDK API Reference. For more information, see the Amazon Chime SDK API Reference.
TagMeeting
Applies the specified tags to the specified Amazon Chime SDK meeting.
Request Syntax
POST /meetings/
meetingId/tags?operation=add HTTP/1.1 Content-type: application/json { "Tags": [ { "Key": "
string", "Value": "
string" } ] }
URI Request Parameters
The request uses the following URI parameters.
Request Body
The request accepts the following data in JSON format.: | https://docs.aws.amazon.com/chime/latest/APIReference/API_TagMeeting.html | 2022-06-25T06:23:58 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.aws.amazon.com |
Modify environment variables to set logging levels.
BlueCat Health Monitoring receiver
Add a LOG_LEVEL parameter to the environment variables in receiver-docker-compose.yml.
Log level options
- ERROR
- DEBUG
- INFO
- WARNING
- CRITICAL
Example receiver-docker-compose.yml:
version: "3" volumes: portal_logs: services: receiver: image: quay.io/bluecat/bhm:22.1-GA-receiver ports: - "10045:80" environment: - DB_CLUSTER=${DB_CLUSTER} - DB_USERNAME=${DB_USERNAME} - DB_PASSWORD=${DB_PASSWORD} - PORTAL_IP=${PORTAL_IP} - LOG_LEVEL=DEBUG restart: always
BlueCat Health Monitoring portal
Configure BlueCat Health Monitoring portal logging through the Gateway user interface. For more information, refer to Configuring log settings in the Gateway Administration Guide. | https://docs.bluecatnetworks.com/r/BlueCat-Health-Monitoring-Administration-Guide/Changing-default-logging-level/22.1 | 2022-06-25T04:36:13 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.bluecatnetworks.com |
One of the most methods to get to know the lady’s character is to not only on speak to her, but in addition do the things your lover loves. Which means, your my will develop more robust and you discover some extra distributed pursuits. Arequipa includes a lot to supply to the guests, through the intimidating volcanoes to the wonderful colonial-era structure. Once you might have established which you like the girl and long for her, consult her away straight away. Usually do not beat all over the bush, when women like decisive, immediate men. In the event you dilly-dally an excessive amount of, she could be put off, assuming you aren’t interested. And we’re not simply talking about the reality that there is no need to worry about infidelity.
- Consequently it’s hardly surprising that Ceviche, additionally spelled Cebiche, which is taken into account Peru’s national dish, has it is very personal national day on 06 28.
- However , there are way fewer fixed marriages in Peru as compared to India, just where mother and father likewise significantly effect their children’s lives.
- For EliteMailOrderBrides, we conduct exhaustive analysis and examine prices, options, and ensures to write down detailed opinions.
- That is why Peruvian ladies prefer to discover a husband from in another country and focus with him for a better life.
Peru may be a rustic of colourful holidays and sunny celebrations. The day simply by day existence of the people next door is stuffed with dances, music, and songs. The most famous fests in Peru are The Celebration of the Sunshine and La Candelaria, which is considered one of the most colorful festivals on the the planet. Also, one of many world-famous things about Peru is the nationwide special treats. It was created underneath the influence of many nationalities who have added incredible beautiful women from peru seasonings and flavours to their typical meals.
All of us don’t showcase services, therefore the order of critiques shouldn’t be looked at as a promotion. Likewise, observe that the specialists for EliteMailOrderBrides. com can’t check and examine each relationship service within the sector. Thus, you’d possibly be liberated to use any kind of matchmaking program you really want, even when it’s not examined by simply our team however. Men who wish devoted and constant wives can find one in Peruvian chicks. Peruvians believe unfaithfulness to be a huge signal, and women stay away from dishonest prove husbands. The fee of divorce in the country is low because individuals not often betray their friends.
It might be simple to organize a great attention-grabbing weekend or vacation. They don’t ever fake to become cooler or richer than they are surely. In Peruvian dating traditions, the woman do not pays for food, taxi, or other expenses on a date. Your lady might take out her billfold as a pleasant gesture, however you should never seriously accept her proposal to pay or to separate the verify.
Peruvian ladies are loving, caring, and really family-oriented. Peruvian girls are dedicated wives, superb housewives, they love to put together dinner and adore children. Her partner is a the lord to whom the woman treats with love and respect each of the years of her existence collectively. Peruvian girl likes you what she wears and how your woman appears normally.
All You Must Learn About Courting A Peruvian Girl
A girl uses the PictureThis application on an iPhone to get a flower by taking a photo. The flower inside the image is usually Peruvian lily. A girl uses the PictureThis app by using an iPhone to identify a flower by using a photograph. Like those stated earlier, even though Peruvian young girls are pumped up about overseas males, you still have to point out good manners if you wish their appeal.
Beautiful Peruvian ladies are show stoppers, they normally flirt with guys. Nevertheless , once they begin a relationship with somebody they will like, they make themselves unavailable to others. Peruvian women additionally do not reduce betrayal and anticipate one to be loyal. Therefore , you should use the frequent feeling the moment assembly any new partner and take a look at if appreciate or additional pursuits sign up for you.
For what reason Peruvian Women Are So Well-known?
Many of them also grow up dreaming about marrying overseas men. Women out of Peru who have stay in Cusco put on their particular hearts prove sleeves. All of their emotions are poured out plainly, and there’s simply no hiding with them. However , that is not mean you won’t land on your greatest conduct. Display as much as your new chance not to be alone at your greatest, and there’ll be a number of women who would probably discover you engaging. Ladies in Cusco will start friendship and also will be delighted to spend a little while around you.
The commonplace going out with situation in Peru is a little completely different coming from what you might find at residence. Here, pretty for a female from Peru to compel a boy leaving for dinner or maybe a film. The boy does not have obligation to pay off, but this individual could offer to adopt her residence in the event that he interests the woman. In Lima and also other populated areas, on-line relationship is soon becoming more popular amongst younger adults. The idea of getting married to someone you don’t know scares a large number of males through the large urban centers.
Marriages between Peruvian women and Western men happen to be gaining ever more recognition yearly. It is because with their allure, attention, and flexibility. Peruvian women prefer to meet fresh individuals with their lives, and a lot of young Peruvian beauties happen to be dreaming of getting all their ideal spouse abroad.
Amazing Czech Women of all ages Most Imagined
That’s so why when discussing with a Peruvian lady or her close relatives, it is going to be rude to scream or speak too loud. They simply adore their homes and whatever the scenario, that they like to laugh about anything. Nevertheless, for many who got in this article to pay attention to the fairly girls, among the many finest place to always be is at Lima, Arequipa, and Cusco. The partnership customs in Peru idolizes commitment and leaves no bed with respect to unfaithfulness.
So , at the time you get together with those who are circular her, she’ll belief you extra. Peruvian girls will not likely waste their very own time on blind-alley connection with men. So , do your best to make the best impression onto her. Peruvian women don’t cease working as soon as they turn into wives. Friendships happen to be as critical to a Peruvian woman when romance and she’s not prepared to quit her representatives for like. Moreover, you want to acquire her nearest associates to like you, so that they may put in an excellent word for everyone. | https://docs.jagoanhosting.com/fabulous-peruvian-sweetheart-at-the-current-market-in-chiclayo-peru/ | 2022-06-25T04:04:30 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.jagoanhosting.com |
R/findStations.R
cf_find_station.Rd
Search for clifro stations based on name, region, location or network
number, and return a
cfStation object.
Usage
Arguments
- ...
arguments to pass into the search, these differ depending on
search.
- search
one of
name,
network,
regionor
latlongindicating the type of search to be conducted.
- datatype
cfDatatypeobject for when the search is based on datatypes.
- combine
character string
"all"or
"any"indicating if the stations contain all or any of the selected datatypes for when the search is based on datatypes.
- status
character string indicating
"open",
"closed"or
"all"stations be returned by the search.
Details
The
cf_find_station function is a convenience function for finding
CliFlo stations in R. It uses the CliFlo
Find Stations
page to do the searching, and therefore means that the stations are not
stored within clifro.
If
datatype is missing then the search is conducted
without any reference to datatypes. If it is supplied then the
search will only return stations that have any or all of the supplied
datatypes, depending on
combine. The default behaviour is to search
for stations based on pattern matching the station name and return only the
open stations.
If the
latlong search type is used the function expects named
arguments with names (partially) matching latitude,
longitude and radius. If the arguments are passed in without names they must
be in order of latitude, longitude and radius (see examples).
Note
Since the searching is done by CliFlo there are obvious restrictions. Unfortunately the pattern matching for station name does not provide functionality for regular expressions, nor does it allow simultaneous searches although clifro does provide some extra functionality, see the 'OR query Search' example below.
See also
cf_save_kml for saving the resulting stations as a KML
file,
cf_station for creating
cfStation objects
when the agent numbers are known,
vignette("choose-station") for a
tutorial on finding clifro stations and
vignette("cfStation")
for working with
cfStation objects.
Examples
if (FALSE) { # Station Name Search ------------------------------------------------------ # Return all open stations with 'island' in the name (pattern match search) # Note this example uses all the defaults island_st = cf_find_station("island") island_st # Region Search ------------------------------------------------------------ # Return all the closed stations from Queenstown (using partial matching) queenstown.st = cf_find_station("queen", search = "region", status = "closed") queenstown.st # Long/Lat Search ---------------------------------------------------------- # Return all open stations within a 10km radius of the Beehive in Wellington # From Wikipedia: latitude 41.2784 S, longitude 174.7767 E beehive.st = cf_find_station(lat = -41.2784, long = 174.7767, rad = 10, search = "latlong") beehive.st # Network ID Search -------------------------------------------------------- # Return all stations that share A42 in their network ID A42.st = cf_find_station("A42", search = "network", status = "all") A42.st # Using Datatypes in the Search -------------------------------------------- # Is the Reefton EWS station open and does it collect daily rain and/or wind # data? # First, create the daily rain and wind datatypes daily.dt = cf_datatype(c(2, 3), c(1, 1), list(4, 1), c(1, NA)) daily.dt # Then combine into the search. This will only return stations where at least # one datatype is available. cf_find_station("reefton EWS", datatype = daily.dt) # Yes # OR Query Search ---------------------------------------------------------- # Return all stations sharing A42 in their network ID *or* all the open # stations within 10km of the Beehive in Wellington (note this is not # currently available as a single query in CliFlo). cf_find_station("A42", search = "network", status = "all") + cf_find_station(lat = -41.2784, long = 174.7767, rad = 10, search = "latlong") # Note these are all ordered by open stations, then again by their end dates } | https://docs.ropensci.org/clifro/reference/cf_find_station.html | 2022-06-25T04:32:13 | CC-MAIN-2022-27 | 1656103034170.1 | [array(['../logo.png', None], dtype=object)] | docs.ropensci.org |
..
Organizations
The Organizations chart displays the active and inactive networks, roaming computers and VAs across the orgs in your MSSP. The organization column can be adjusted to display the orgs in alphabetical order and each column can be arranged in ascending or descending order. The chart also displays the PSA (Personal years ago | https://docs.umbrella.com/mssp-deployment/docs/deployment-status-report | 2022-06-25T05:27:20 | CC-MAIN-2022-27 | 1656103034170.1 | [] | docs.umbrella.com |
Progenetix News
Implementation.
CNVs in Prenatal Tests & Maternal Malignancies
Publication indicating rare CNV signatures from a nationwide Dutch screening program
In a new publication in the Journal of Clinical Oncology CJ Heesterbeek, SM Aukema and the co-authors from the Dutch NIPT Consortium report about the incidence and diagnostic significance of incidental detection of maternal copy number variations in a large screening program aimed at detecting chromosomal imbalances in embryos, for a prediction of developmental abnormalities.
Paginated Downloads
Chunk-wise downloads of search results
Through its Search Samples page Progenetix has
always offered options to download search results (biosamples, variants) in different formats (JSON,
tab-delimited tables, pgxseg files ...). However, especially for large results with
thousands of samples and potentially millions of variants this led to inconsistent behaviour
e.g. time-outs or dropped connections.
Now, API responses are capped through the
limit parameter to default "sensible" values
which, however, can be adjusted for systematic data access & retrieval. This functionality
is also implemented in the sample search form, allowing e.g. the limited retrieval of
a subset of samples from large or general cancer types, or the "paging" through consecutive
sample groups for partitioned data retrieval.
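As an illustration, a chunked retrieval could look like the following two requests (the `skip` paging parameter shown alongside `limit` is assumed here to follow the usual Beacon-style convention of skipping whole result pages; please check the current API documentation for the exact parameter semantics):

```
# first chunk of up to 1000 biosamples for a broad cancer code
https://progenetix.org/beacon/biosamples/?filters=NCIT:C3262&limit=1000&skip=0

# next chunk of 1000
https://progenetix.org/beacon/biosamples/?filters=NCIT:C3262&limit=1000&skip=1
```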
VRSified Variants
Variant Response in GA4GH Variant Representation Standard (VRS) Format
The variant format served through the API has now changed to a format compatible with the GA4GH Variant Representation Standard (VRS; bleeding edge version...).
```json
{
  "caseLevelData": [
    {
      "analysisId": "pgxcs-kftwfurn",
      "biosampleId": "pgxbs-kftvj7rz",
      "id": "pgxvar-5c86664409d374f2dc4eeb93"
    }
  ],
  "variation": {
    "location": {
      "interval": {
        "end": {
          "type": "Number",
          "value": 62947165
        },
        "start": {
          "type": "Number",
          "value": 23029501
        }
      },
      "sequenceId": "refseq:NC_000018.10",
      "type": "SequenceLocation"
    },
    "relativeCopyClass": "partial loss",
    "updated": "2022-03-29T15:06:46.526020",
    "variantInternalId": "18:23029501-62947165:DEL"
  }
}
```

```json
{
  "caseLevelData": [
    {
      "analysisId": "pgxcs-kftwfurn",
      "biosampleId": "pgxbs-kftvj7rz",
      "id": "pgxvar-5c86664409d374f2dc4eeb93"
    }
  ],
  "variantInternalId": "18:23029501-62947165:DEL",
  "referenceName": "18",
  "start": 23029501,
  "end": 62947165,
  "variantType": "DEL",
  "updated": "2022-03-29T15:06:46.526020"
}
```

```json
{
  "caseLevelData": [
    {
      "analysisId": "pgxcs-kl8hg4ky",
      "biosampleId": "pgxbs-kl8hg4ku",
      "id": "pgxvar-5be1840772798347f0eda0d8"
    }
  ],
  "variation": {
    "location": {
      "interval": {
        "end": {
          "type": "Number",
          "value": 7577121
        },
        "start": {
          "type": "Number",
          "value": 7577120
        },
        "type": "SequenceInterval"
      },
      "sequenceId": "refseq:NC_000017.11",
      "type": "SequenceLocation"
    },
    "state": {
      "sequence": "G",
      "type": "LiteralSequenceExpression"
    },
    "updated": "2022-03-29T15:35:35.700954",
    "variantInternalId": "17:7577121:C>G"
  }
}
```

```json
{
  "caseLevelData": [
    {
      "analysisId": "pgxcs-kl8hg4ky",
      "biosampleId": "pgxbs-kl8hg4ku",
      "id": "pgxvar-5be1840772798347f0eda0d8"
    }
  ],
  "variantInternalId": "17:7577121:C>G",
  "start": 7577120,
  "end": 7577121,
  "referenceName": "17",
  "referenceBases": "C",
  "alternateBases": "G",
  "updated": "2022-03-29T15:35:35.700954"
}
```
Histogram Improvements
Excluding reference samples from default plots
So far all samples matching a grouping code ("collation"; disease, publication etc.)
have been included when generating the pre-computed CNV frequencies. However, the potential inclusion of normal/reference samples sometimes led to "dampened" CNV profiles. Now, samples labeled as "reference sample" (EFO:0009654) - a term we had introduced into the Experimental Factor Ontology - are excluded from pre-computed histograms. However, when e.g. calling up samples from publications using the search panel, reference samples will be included unless specifically excluded.
Query-based histograms
Direct generation of histogram plots from Beacon queries
So far, the plot API only provided (documented) access to generate CNV histogram
plots from "collations" with pre-computed frequencies.
The `bycon` API now offers direct access to the histograms, by adding `&output=histoplot` to a Beacon (biosamples) query URL. The server will first query the samples and then perform
a handover to the plotting API. Please be aware that this procedure is best suited for limited
queries and may lead to a time-out.
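As an illustrative example, a filter-based biosamples query can be turned into a histogram response like this (a reasonably specific filter keeps the sample count - and therefore the response time - manageable):

```
https://progenetix.org/beacon/biosamples/?filters=NCIT:C3052&output=histoplot
```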
Genomic Interval Changes
New positions for the 1Mb interval maps
So far, CNV histograms and .pgxseg segment and matrix files used a 1Mb genome binning, based on the consecutive assignment of 1Mb intervals from 1pter -> Yqter. This resulted in 3102 intervals, with the last interval of each chromosome being smaller.
On 2022-02-11 we have changed the procedure. Now, the last interval of the short arm of any chromosome is terminated at the centromere, leading to
- a (potentially) shortened "last p" interval
- a shift of most interval positions
- a changed interval number from 3102 to 3106
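For illustration, the centromere-terminated binning can be sketched as below. This is purely illustrative code - the chromosome and centromere coordinates are placeholders and the actual implementation lives in the `bycon` code base:

```python
BIN_SIZE = 1_000_000

def chromosome_bins(chro, centromere, chro_end):
    """Generate (chromosome, start, end) bins of at most 1Mb.

    The last bin of the short arm is terminated at the centromere and the
    last bin of the chromosome at the chromosome end, so both can be
    shorter than 1Mb.
    """
    bins = []
    start = 0
    for boundary in (centromere, chro_end):
        while start < boundary:
            end = min(start + BIN_SIZE, boundary)
            bins.append((chro, start, end))
            start = end
    return bins

# toy example with made-up coordinates
print(len(chromosome_bins("1", 123_400_000, 248_956_422)))
```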
CNV Ontology Proposal - Now Live at EFO
EFO Ontology now contains terms for (relative) CNV levels
As part of the hCNV-X work - related to "Workflows and Tools for hCNV Data Exchange Procedures" and to the intersection with Beacon and GA4GH VRS - we now have a new proposal for the creation of an ontology for the annotation of (relative) CNV events. The CNV representation ontology is targeted for adoption by Sequence Ontology (SO) and then to be used by an updated version of the VRS standard. Please see the discussions linked from the proposal page.
However, we have also contributed the CNV proposal to EFO, where it went live on January 21.
Introducing variant_state classes for CNVs
More granular annotation of CNV types
More information can be found in the description of ontology use for CNVs.
Term-specific queries
Allowing the de-selection of descendant terms in ontology filters
So far (and still as standard), any
selected filter will also include matches on its child terms; i.e. "NCIT:C3052 -
Digestive System Neoplasm" will include results from gastric, esophagus, colon
... cancer. Here we introduce a selector for the search panel to make use of the Beacon v2 filters `includeDescendantTerms` pragma, which can be set to `false` if one only wants to query for the term itself and exclude any child terms from the matching.
Please be aware that this can only be applied globally and will affect all filtering terms used in a query. More information is available in the Filtering Terms documentation.
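In a Beacon v2 JSON request body this corresponds to the standard filter object form, sketched here for the example code used above:

```json
{
  "filters": [
    { "id": "NCIT:C3052", "includeDescendantTerms": false }
  ]
}
```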
BUG FIX Frequency Maps
Fix "only direct code matches" frequencies
Pre-computed Progenetix CNV frequency histograms (e.g. for NCIT codes) are based on samples from all child terms; e.g. NCIT:C3262 will display an overview of all neoplasias, although no single case has this specific code.
However, there had been a bug where, under specific circumstances (a code has some directly mapped samples as well as additional samples in child terms), only the direct matches were used to compute the frequencies, although the full number of samples was indicated in the plot legend. FIXED.
Publications - Updated publication listings
Progenetix citations page and better map
Progenetix Use Page
We have introduced a new publications listing page which contains links to articles that cite or use Progenetix and resources from this "ecosystem." Please let us know if you are aware of other such cases - frequently the publications do not use a proper citation format but just refer to "according to the Progenetix resource" or similar in the text.
API: Biosample Schema Update
Conversion of `biocharacteristics` array to separate parameters
The `Biosample` schema used for exporting Progenetix data has been adjusted with respect to representation of "bio-"classifications. The previous `biocharacteristics` list parameter has been deprecated and its previous content is now expressed in:
- `histologicalDiagnosis` (PXF)
- `sampledTissue` (PXF)
- `icdoMorphology` (pgx)
- `icdoTopography` (pgx)
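In an exported biosample document these then appear as individual ontology class objects, roughly as sketched below (all values are hypothetical examples, not taken from an actual record):

```json
{
  "id": "pgxbs-example",
  "histologicalDiagnosis": { "id": "NCIT:C3058", "label": "Glioblastoma" },
  "sampledTissue": { "id": "UBERON:0000955", "label": "brain" },
  "icdoMorphology": { "id": "pgx:icdom-94403", "label": "Glioblastoma, NOS" },
  "icdoTopography": { "id": "pgx:icdot-C71.9", "label": "Brain, NOS" }
}
```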
Progenetix - An open reference resource for copy number variation data in cancer
Qingyao Huang
Cancer Genomics Consortium Annual Meeting 2021 Aug 1-4
Additional Links
API: Beacon Paths Updates
For testing the rapidly evolving Beacon v2 API, we have now implemented more paths/endpoints which mostly conform to the brand new & still "flexible" v2.0.0-draft.4 version. Please check the documentation and examples.
API: JSON Exports now camelCased
In "forward-looking" conformity with the Beacon v2 API, the JSON attributes of the API responses have been changed from `snake_cased` to `camelCased`. Please adjust your code, where necessary.
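A couple of typical renamings, as a sketch (the complete mapping is defined by the response schemas):

```
biosample_id         ->  biosampleId
external_references  ->  externalReferences
```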
The Progenetix oncogenomic resource in 2021
The Progenetix oncogenomic resource in 2021
Qingyao Huang, Paula Carrio Cordo, Bo Gao, Rahel Paloots, Michael Baudis
Database (Oxford). 2021 Jul 17;2021:baab043.
- doi: 10.1093/database/baab043.
- PMID: 34272855
- PMCID: PMC8285936.
- bioRxiv. doi: doi.org/10.1101/2021.02.15.428237
This article provides an overview of recent changes and additions to the Progenetix database and the services provided through the resource.
New feature - LOH data
Loss of heterozygosity (LOH) is a phenomenon frequently observed in cancer genomes, where the selective pressure to keep only the susceptible gene product from one allele removes the other, healthy allele from the pool; in this context, copy-neutral loss of heterozygosity (CN-LOH) is commonly observed in haematological malignancies (O'Keefe et al., 2010 and Mulligan et al., 2007). To the Progenetix oncogenomic resource, comprising nearly 800 cancer types (by NCIt classification) as of 2021, we have now added LOH calls as a new feature of our data collection, in addition to the total copy number data, to open the door for analyses of the frequency and impact of this phenomenon.
Update 2021-01-28:
LOH variants can now be queried through the Search and Beacon+ interfaces, either as specific variants or together with deletions.
Please be aware that in contrast to the "complete for chromosomes 1-22" DUP and DEL calls, LOH is only determined for a subset of samples and therefore will be underreported in the statistics section.
Improved Data Access through Histograms
Histograms for static datasets (e.g. NCIT codes, publications ...) now provide links to the dataset details page as well as a download option for the binned CNV frequency data.
Signatures of Discriminative CNA in 31 Cancer Subtypes
Bo Gao and Michael Baudis (2021)
Accepted at Frontiers in Genetics, 2021-04-15
Abstract
Copy number aberrations (CNA) are one of the most important classes of genomic mutations related to oncogenetic effects. In the past three decades, a vast amount of CNA data has been generated by molecular-cytogenetic and genome sequencing based methods. While this data has been instrumental in the identification of cancer-related genes and promoted research into the relation between CNA and histo-pathologically defined cancer types, the heterogeneity of source data and derived CNV profiles pose great challenges for data integration and comparative analysis. Furthermore, a majority of existing studies have been focused on the association of CNA to pre-selected "driver" genes with limited application to rare drivers and other genomic elements.
Beacon+ and Progenetix Queries by Gene Symbol
We have introduced a simple option to search directly by Gene Symbol, which will match to any genomic variant with partial overlap to the specified gene. This works by expanding the Gene Symbol (e.g. TP53, CDKN2A ...) into a range query for its genomic coordinates (maximum CDR).
Such queries - which would e.g. return all whole-chromosome CNV events covering the gene of interest, too - should be narrowed by providing e.g.
Variant Type and
Maximum Size (e.g. 2000000) values.
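An illustrative URL for such a narrowed query is sketched below; the parameter names (`geneId`, `variantType`, `variantMaxLength`) follow the general Beacon v2 style and are given here as assumptions to be checked against the current API documentation:

```
https://progenetix.org/beacon/biosamples/?geneId=CDKN2A&variantType=DEL&variantMaxLength=2000000
```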
Diffuse Intrinsic Pontine Glioma (DIPG) cohort
Diffuse Intrinsic Pontine Glioma (DIPG) is a highly aggressive tumor type that originates from glial cells in the pons area of the brainstem, which controls vital functions including breathing, blood pressure and heart rate. DIPG occurs frequently in early childhood and has a 5-year survival rate below 1 percent. Progenetix has now incorporated the DIPG cohort, consisting of 1067 individuals from 18 publications. The measured data include copy number variation as well as (in part) point mutations in relevant genes, e.g. TP53, NF1, ATRX, TERT promoter.
TCGA CNV Data
Having provided the TCGA copy number data for quite some time, we have now launched a dedicated search page to facilitate data access and visualization using the standard Progenetix tools.
Additionally, the TCGA page section provides pre-computed CNV frequency data for the individual TCGA studies.
arrayMap is Back
After some months of dormancy, the arrayMap resource has been relaunched through integration with the new Progenetix site. All of the original arrayMap data has now been integrated into Progenetix, and as of today the arraymap.org domain maps to a standard Progenetix search page, where only data samples with existing source data (e.g. probe specific array files) will be presented.
Website updates
The new year brings some refinements to biosamples search and display:
- added example for a pure filter search (HeLa)
- made UCSC link depending on variants
- added info pop-ups to biosamples table header
- removed DEL and DUP fractions from biosamples table
- added label display for `external_references` items in biosamples table
Enjoy ...
Genomic Copy Number Signatures...
Genomic Copy Number Signatures Based Classifiers for Subtype Identification in Cancer
Bo Gao and Michael Baudis (2020)
bioRxiv, 2020-12-18
API and Services Documentation
Following the launch of the updated Progenetix website (new interface, now much
more data with >130'000 samples...) and the recent introduction of the new
Python-based `bycon` API for BeaconPlus and Progenetix Services, we now also have some structured information for the different API options.
pgx namespace and persistent identifiers
While the
pgx prefix had been registered in 2017 with identifiers.org
we recently changed the resolver and target mappings on the Progenetix server.
This went hand-in-hand with the generation of unique & persistent identifiers
for the main data items.
bycon powered BeaconPlus
Moving to a new, Python-based API
We've changed the Beacon backend to the
bycon code base. The new project's
codebase is accessible through the
bycon
project. Contributions welcome!
Example¶
- progenetix.org/beacon/variants/?referenceBases=G&alternateBases=A&assemblyId=GRCh38&referenceName=17&start=7577120
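The same query can be issued directly from the command line; a minimal sketch (assuming the service is reached over HTTPS at progenetix.org; the JSON response is not reproduced here):
curl 'https://progenetix.org/beacon/variants/?referenceBases=G&alternateBases=A&assemblyId=GRCh38&referenceName=17&start=7577120'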
GA4GH Beacon v2 at GA4GH Plenary
GA4GH Beacon v2 - Evolving Reference Standard for Genomic Data Exchange¶
GA4GH 8th Plenary¶
Gary Saunders, Jordi Rambla de Argila, Anthony Brookes, Juha Törnroos and Michael Baudis¶
For the ELIXIR Beacon project, GA4GH Discovery work stream and the international network of Beacon API developers¶
The Beacon driver project was one of the earliest initiatives of the Global Alliance for Genomics and Health, with the Beacon v1.0 API as the first approved GA4GH standard. Version 2 of the protocol is slated to provide fundamental changes, towards an Internet of Genomics foundational standard:
Progenetix at GA4GH 2020 Plenary
GA4GH 8th Plenary¶
Michael Baudis¶
The Progenetix oncogenomics resource provides sample-specific cancer genome profiling data and biomedical annotations as well as provenance data from cancer studies. Especially through currently 113322 curated genomic copy number (CNV) profiles from 1600 individual studies representing over 500 cancer types (NCIt), Progenetix empowers aggregate and comparative analyses which vastly exceed individual studies or single diagnostic concepts.
New Progenetix Website
The Progenetix website has been completely rebuilt using a JavaScript / React based framework and API based content delivery. At its core, the site is built around the Beacon standard, with some extensions for data collections and advanced query options.
Progenetix now licensed under CC-BY 4.0
After many years of using a Creative Commons CC-BY-SA license ("attribution + share alike"), the Progenetix resource has dropped the "SA - share alike" attribute and is now "attribution" only. This may facilitate the use of the data in more complex and/or commercial scenarios - enjoy!
Error Calibration ... for CNA Analysis
Minimum Error Calibration and Normalization for Genomic Copy Number Analysis.¶
Bo Gao and Michael Baudis (2020)¶
bioRxiv, 2019-07-31. DOI 10.1101/720854¶
Genomics, Volume 112, Issue 5, September 2020, Pages 3331-3341, accepted 2020-05-06 doi.org/10.1016/j.ygeno.2020.05.008.¶
Geographic assessment of cancer genome profiling studies
Geographic assessment of cancer genome profiling studies.¶
Paula Carrio Cordo, Elise Acheson, Qingyao Huang and Michael Baudis (2020)¶
DATABASE, Volume 2020, 2020, baaa009, doi.org/10.1093/database/baaa009¶
CURIE Prefix Remapping - NCIT & PMID
Wherever possible, data annotation in Progenetix uses OntologyClass
objects for categorical values, with CURIEs as id values. So far, the
Progenetix databases had used
pubmed: for PubMed identifiers and
ncit:
for NCI Metathesaurus (Neoplasm) ids.
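In practice such an annotation is a small id/label pair following the GA4GH OntologyClass definition; an illustrative example (the specific code and label are chosen for illustration only, using the ncit: prefix mentioned above):
{ "id": "ncit:C3058", "label": "Glioblastoma" }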
Population assignment from cancer genomes
Enabling population assignment from cancer genomes with SNP2pop.¶
Huang Q and Baudis M. (2020)¶
Sci Rep 10, 4846 (2020). doi.org/10.1038/s41598-020-61854-x¶
BeaconPlus in ELIXIR Beacon Network
The Beacon+ implementation of the GA4GH Beacon protocol has become a part of the ELIXIR Beacon Network, an expanding Beacon service to query multiple Beacon resources and aggregate their query results.
Beacon Variants in UCSC Browser
The response element of the Beacon+ interface now contains a link for displaying the matched variants of e.g. a CNV query in the UCSC genome browser.
Minimum Error Calibration and Normalization for Genomic Copy Number Analysis
Minimum Error Calibration and Normalization for Genomic Copy Number Analysis.¶
Bo Gao and Michael Baudis (2019)¶
bioRxiv, 2019-07-31. DOI 10.1101/720854¶
New info.progenetix.org site
Launch of new info.progenetix.org resource site¶
Today, we started to provide a new documentation structure for our group's work and software projects.
The site is intended, over time, to replace the previous Progenetix guide.
arrayMap 2014: an...
arrayMap 2014: an updated cancer genome resource.¶
Cai H, Gupta S, Rath P, Ai N, Baudis M.¶
Abstract Somatic copy number aberrations (CNA) represent a mutation type encountered in the majority of cancer genomes. Here, we present the 2014 edition of arrayMap (arraymap.org), a publicly accessible collection of pre-processed oncogenomic array data sets and CNA profiles, representing a vast range of human malignancies. Since the initial release, we have enhanced this resource both in content and especially with regard to data mining support.
Chromothripsis-like...
Chromothripsis-like patterns are recurring but heterogeneously distributed features in a survey of 22,347 cancer genome screens.¶
Cai H, Kumar N, Bagheri HC, von Mering C, Robinson MD, Baudis M.¶
Abstract.
Progenetix: 12 years...
Progenetix: 12 years of oncogenomic data curation.¶
Cai H, Kumar N, Ai N, Gupta S, Rath P, Baudis M.¶
Abstract.
Progenetix & arrayMap Changes (2012-06-01 - 2013-05-22)
2013-05-22¶
- bug fix: fixing lack of clustering for CNA frequency profiles in the analysis section
- removed "Series Search" from the arrayMap side bar; kind of confusing - just search for the samples & select the series
2013-05-12¶
- introduced a method to combine sample annotations and segmentation files for user data processing (see "FAQ & GUIDE")
- fixed some array plot presentation and replotting problems
2013-05-05¶
- consolidation of script names - again, don't use deep links (besides for "api.cgi?...")
- moving of remaining sample selection options (random sample number, segments number, age range) to the sample selection page, leaving the pre-analysis page (now "prepare.cgi") for plotting/grouping options
- fixed the KM-style survival plots
2013-04-10¶
- re-factoring of the cytobands plotting for histograms and heatmaps; this also fixes missing histogram tiles
- analysis output page: the circular histogram/connections plot and group specific histograms are now all available as both SVG and PNG image files
2013-04-06¶
Some changes to the plotting options:
- the circular plot is now added as a default; and connections are drawn in for <= 30 samples (subject to change)
- one can now mark up multiple genes (or other loci of interest), for all plot types
2013-03-25¶
- added option to create custom analysis groups based on text match values
- rewritten circular plot code
2013-02-27¶
- copied data for PMIDs 17327916, 17311676, 18506749 and 18246049 from arrayMap to Progenetix
2013-02-24¶
- bug fix: gene selector was broken for about a week; fixed
2013-02-17¶
- In many places, images are now converted server side to PNG data streams and embedded into the web pages. This will substantially decrease web data traffic and page download times. Fully linked SVG images (including region links etc.) are still available through the analysis pipeline.
2013-02-13¶
- data fix: PMID 18160781 had missing loss values (due to irregular character encoding); fixed, thanks to Emanuela Felley-Bosco for the note!
2012-12-14¶
- moved the region filter from the analysis to the sample selection page
- added a "mark region" option to the analysis page: one now can highlight a genome region in histograms and matrix plots
2012-11-29¶
- added "select all" option to entity lists
- implemented first version of sample-to-entity match score
- added single sample annotation input field to "User File Processing"; i.e. one can now type in CNA data for a single case, and have this visualised and similar cases listed
- added per sample CNA visualisation to the samples details listings (currently if up to 100 samples)
- added direct access to sample details listing to the subsets pages
2012-11-09¶
- adding of abstract search to the publication search page
2012-10-25¶
- introduction of a matching function for similar cases by CNA profile, accessible through the sample details pages of both Progenetix and arraymap
2012-10-22¶
- Introduction of SEER groups
2012-09-26¶
The database now contains the copy number status for different interval sizes (e.g. 1MB). With this, users can now create their own data plots (histograms etc.) using more than 10000 cancer copy number profiles with a high resolution. The options here are still being tested and improved - comments welcome!
2012-09-18¶
- added a new export file format "ANNOTATED SEGMENTS FILE", which uses the first columns for standard segment annotation, followed by some diagnostic and clinical data; i.e., the information for a case is repeated for each segment:
GSM255090 22 25063244 25193559 1 NA C50 8500/3 breast Infiltrating duct carcinoma, NOS Carcinomas: breast ca. NA 1 51 0.58
GSM255090 22 25368299 48899534 -1 NA C50 8500/3 breast Infiltrating duct carcinoma, NOS Carcinomas: breast ca. NA 1 51 0.58
GSM255091 1 2224111 30146401 -1 NA C50 8500/3 breast Infiltrating duct carcinoma, NOS Carcinomas: breast ca. NA 0 72 0.54
GSM255091 1 35418712 37555461 1 NA C50 8500/3 breast Infiltrating duct carcinoma, NOS Carcinomas: breast ca. NA 0 72 0.54
2012-09-13¶
- added gene selection for region specific replotting of array data
2012-08-22¶
- the gene database has been changed to the last version of the complete (HUGO names only) Ensembl gene list for HG18; previously, only a subset of "cancer related genes" was offered in the gene selection search fields
2012-07-04¶
- some interface and form elements have been streamlined (e.g. less commonly used selector fields, sample selection options)
- some common options are now displayed only if activated (e.g. "mouse over" to see all files available for download)
- icon quality has been enhanced for all but the details pages
2012-06-13¶
- New: All pre-generated histogram and ideogram plots are now produced based on a 1Mb matrix, with a 500Kb minimum size filter to remove CNV/platform dependent background from some high resolution array platforms. The unfiltered data can still be visualized through the standard analysis procedures.
- Bug fix: Interactive segment size filtering so far only worked for region specific queries, but not as a general filter (see above). This has been fixed; a minimum segment size in the visualization options now will remove all smaller segments.
2012-06-01¶
- NEW: change log; that is what is shown here
- FEATURE: The interval selector now has options to include the p-arms of acrocentric chromosomes (though the data itself there may be incompletely annotated!). Feature requested by Melody Lam.
arrayMap feature update(s)
arrayMap feature update(s)¶
Over the last weeks, we have introduced a number of new search/ordering features to arrayMap. Some of those mimic functions previously implemented in Progenetix. Overall, the highlights are:
ICD entity aggregation
- all ICD-O entities with their associated samples
ICD locus aggregation
- all tumor loci with their associated samples
Clinical group aggregation
- clinical super-entities (e.g. "breast ca.": all carcinoma types with locus breast) with their samples
Publication aggregation
- all publications with samples in arrayMap
In contrast to Progenetix, we do not offer precomputed SCNA histograms. However, users can generate them on the fly, but should consider the specific challenges in doing so (e.g. noise background in frequency calculations).
Genomic imbalances in 5918 malignant tumors
Genomic imbalances in 5918 malignant epithelial tumors: an explorative meta-analysis of chromosomal CGH data.¶
Baudis M.¶.
Online database and...
Online database and bioinformatics toolbox to support data mining in cancer cytogenetics.¶
Baudis M.¶
Progenetix.net: an...
Progenetix.net: an online repository for molecular cytogenetic aberration data.¶
Baudis M, Cleary ML.¶
Source: https://docs.progenetix.org/news/
Getting Started with SMS in Recapture
Table of Contents
- Before you Start
- Activating SMS
- SMS Configuration
- Change or Cancel a Phone Number
- Supported SMS Campaigns
- Responding to customers via SMS
Before you start with SMS
SMS marketing is a powerful feature within Recapture to allow you to reach your audience in a new and personal way. But before you start, you need to be aware of the TCPA guidelines and the penalties for breaking the law. It's important that you use SMS within the boundaries of what your customers want, and within the guidelines of the law.
Activating SMS in Recapture
SMS can be activated in Recapture in one of two ways:
1) Add an SMS campaign in your Abandoned Carts campaigns area:
When you pick "Add Campaign" under Abandoned Carts campaigns in Recapture, you'll be given a chance to select SMS vs. Email:
If you pick SMS Campaign, you'll be asked to setup the basics of SMS integration below (See SMS Configuration and Picking a Phone Number)
NOTE: Keep in mind that using SMS will activate additional charges to your Recapture account.
For more details about what to do after activation, see Creating SMS Abandoned Cart Campaigns
2) If you enable any of the Order Updates:
If you visit the "Order Updates" area:
And then select "Campaigns" and activate a campaign; this will also enable the SMS configuration.
NOTE: Keep in mind that using SMS will activate additional charges to your Recapture account.
For more details about what to do after activation, see Creating SMS Order Notification Campaigns
SMS Configuration and Picking a Phone Number
First you'll need to select a phone number for your texts to originate from. We recommend picking a number that is in the same state or province that your store operates from. Currently, Recapture only supports US and Canadian numbers. We are looking into international support, but this is currently not available outside the US and Canada.
When you enable SMS from the section above, you'll see this screen:
You can filter numbers by country (currently we support US and Canada), or enter an area code OR a state or province. This will filter the list of numbers for you to pick something that you like. Pick a number from the list:
And then click Reserve This Number to complete your setup. If you don't like the choices, you can click to load more numbers and scroll through available choices.
NOTE: Not all area codes will have numbers available and this is outside of our control.
Changing Your Phone Number or Cancelling SMS in Recapture
If you wish to change your phone number or stop your SMS service in Recapture, you can do this under the Account Settings area:
Scroll to the bottom and click Release Number:
And then you will see this:
Releasing your number will mean several things:
- Your SMS campaigns will immediately stop sending (if you have any active ones)
- You will lose your existing phone number and will not likely get it back (we cannot guarantee keeping the phone number if you release it)
- Your SMS additional charges on your account will stop as of that date
Please make sure that this is what you want to do. If you have any questions or concerns, reach out to Customer Support.
What kind of campaigns can I send via SMS?
Currently Recapture supports the following campaigns via SMS:
- Abandoned cart reminders
- Order update notifications
- Order shipped notifications
- Order on hold notifications
For more info about Order Notifications, see Creating SMS Order Notification Campaigns
You can enable the Order notifications from the Order Updates area described above.
For more info about Abandoned Cart Notifications, see Creating SMS Abandoned Cart Campaigns
You can enable the Abandoned Cart reminders from the Abandoned Carts campaigns area when you create a new campaign (upper right).
General broadcast campaigns (similar to Email Broadcasts) are planned but not currently available. Most stores need to build up their mobile number list first in order to take advantage of that when it's available. We strongly recommend setting up these flows now so you can take full advantage of the power of SMS once SMS broadcasts are supported.
Responding to customers via SMS Inbox
Recapture allows you to have two-way conversations with your customers via SMS. We recommend responding within a day or less to ensure your customers are happiest with your level of service. The faster, the better. Remember, SMS is considered a fast, personal medium so be prepared to communicate with your customers in that way.
Your customers' replies can be found in one of the two SMS Inboxes:
These are the same inbox, just linked from two places. Here, the messages will appear on the left side. The most recent messages will be at the top, and the oldest messages at the bottom.
Clicking on a message from the left pane will show you the message in the middle pane and the full conversation back and forth to date (including any automated messages from Recapture).
On the right hand pane, Recapture will show you the order information for this customer (if any) so you can be aware of the complete situation with the customer when you reply.
If the customer replies with a known "opt out" phrase, like "STOP" or "No more texting" or similar, Recapture will automatically process the opt-out reply and mark it accordingly in the SMS Inbox.
For more details on your SMS inbox, see our article on Managing your SMS Inbox.
Source: https://docs.recapture.io/article/87-getting-started-with-sms-in-recapture
The Trend Micro Toolbar works with Microsoft Internet Explorer 7.0*, 8.0, 9.0, 10.0 or 11.0, Mozilla Firefox 22.0 or previous versions still supported by Mozilla, and Google Chrome 28 or previous versions still supported by Google.
The Trend Micro Page Rating feature works with the Google™, Bing™, Yahoo!®, Baidu™, Biglobe™, OCN™, Infoseek®, and Goo search engines.
The Trend Micro Webmail Rating feature works with Outlook.com® , Yahoo! Mail, and Gmail™.
*Trend Micro Privacy Scanner does not work with Microsoft Internet Explorer 7.0.
The Wi-Fi Advisor works with the Microsoft Wireless Manager.
The Trend Micro IM Rating feature works with these services:
AOL® Instant Messenger™ (AIM®) 6.8 and 6.9
Yahoo! Messenger 9.5, 10.0 and 11.0
Titanium 2014 does not support the "Click to Run" version of Outlook 2013.
The Data Theft Prevention feature works with the following services:
Source: https://docs.trendmicro.com/en-us/consumer/titanium2014/system_requirements.aspx
Velruse Documentation¶
Velruse is a set of authentication routines that provide a unified way to have a website user authenticate to a variety of different identity providers and/or a variety of different authentication schemes.
It is similar in some ways to Janrain Engage.
Velruse aims to simplify authenticating a user. It provides auth providers that handle authenticating to a variety of identity providers with multiple authentication schemes (LDAP, SAML, etc.). Eventually, Velruse will include widgets similar to RPXNow that allow one to customize a login/registration widget so that a website user can select a preferred identity provider to use to sign in. In the meantime, effort is focused on increasing the available auth providers for the commonly used authentication schemes and identity providers (Facebook, Google, OpenID, etc.). Unlike other authentication libraries for use with web applications, a website using Velruse for authentication does not have to be written in any particular language.
Source: https://velruse.readthedocs.io/en/latest/
Retrieves a list of all the imports performed.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
list-import-file-task:
taskInfos
list-import-file-task [-.
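A minimal invocation is sketched below; global options (profile, region, output format) and any pagination flags are omitted, and the response is an illustration assembled from the taskInfos fields documented below, with made-up values rather than real output:
aws migrationhubstrategy list-import-file-task
{
    "taskInfos": [
        {
            "id": "example-task-id",
            "importName": "example-import",
            "inputS3Bucket": "example-bucket",
            "inputS3Key": "imports/example.csv",
            "numberOfRecordsSuccess": 10,
            "numberOfRecordsFailed": 0,
            "status": "..."
        }
    ]
}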
taskInfos -> (list)
Lists information about the files you import.
(structure)
Information about the import file tasks you request.
completionTime -> (timestamp)The time that the import task completes.
id -> (string)The ID of the import file task.
importName -> (string)The name of the import task given in
StartImportFileTask.
inputS3Bucket -> (string)The S3 bucket where the import file is located.
inputS3Key -> (string)The Amazon S3 key name of the import file.
numberOfRecordsFailed -> (integer)The number of records that failed to be imported.
numberOfRecordsSuccess -> (integer)The number of records successfully imported.
startTime -> (timestamp)Start time of the import task.
status -> (string)Status of import file task.
statusReportS3Bucket -> (string)The S3 bucket name for status report of import task.
statusReportS3Key -> (string)The Amazon S3 key name for status report of import task. The report contains details about whether each record imported successfully or why it did not.
Source: https://docs.aws.amazon.com/de_de/cli/latest/reference/migrationhubstrategy/list-import-file-task.html
Specification Test Mode combines the individual diagnostic capabilities from the Rules Builder, Form Designer and SOLIDWORKS into a single interface.
Using specification test mode, complete testing scenarios can be carried out on:
Unlike form design test mode, specification test mode will allow testing of the following:
During Specification Test any element that can have a rule applied can be analyzed using the Analyze Rule tool.
The Analyze Rule tool gives feedback on the item being analyzed in the header. The tool consists of three sections:
The rule window displays the rule applied to the selected item.
If the rule is not calculating as expected it can be manually changed directly in the window to try out new scenarios. The modified rule will be reflected in the Values and Steps and the Rule Drill Down tabs.
The Values and Steps tab displays the current values of the named ranges used in the rule and the steps taken to calculate the result.
The Rule Drill Down tab allows the rule and all of the named ranges used in the calculation of the rule to be drilled into to see exactly where the rule obtains it's values.
To Drill Down into a rule, expand the item by clicking the + box next to the expandable item.
Specification test mode is started on a new specification by:
To change the current user form or make further form control selections/entries:
Specification test mode is started on an existing specification by:
The properties, values and rules of the various stages from the task explorer in DriveWorks Administrator can now be selected from the tab strip that runs across the top of the specification window:
Once the Test Mode button has been clicked the specification window changes to display various tasks that can be diagnosed as tabs along the top of the user form.
Any control can now be selected from the current user form. Once a control has been selected the properties for that control will be displayed to the right of the user form.
Any property that has a rule applied can be diagnosed using the Analyze Rule tool.
Selecting the Navigation tab in specification test mode will display the current navigation path to progress through the user forms.
If a user form does not appear in the navigation it will not be shown to the user. Dialog forms or Child Specification forms will not appear in the navigation.
Follow the steps in the links below to start specification test mode:
To view the current Form Navigation:
When the Form Navigation contains a decision the inclusion/exclusion of the decision form can be shown by:
The values of constants used in the project can be modified here to test results of rules where they are used. Any changes made to the constants here will not update in the project. Once a satisfactory value for the constant has been achieved, return to the Define Constants stage in the task explorer and apply the new value to the affected constant.
Any variable can be analyzed and modified in the rule window of the analyze tool. Any changes will not be applied into the project.
To apply changes into the project return to the Define Variables stage in the task explorer and apply the new value to the affected variable.
Any document created by the project can be viewed in specification test mode. To view a document:
All parameters being driven into a document can be analyzed:
Follow the steps in the links below to start specification test mode:
To view the rule and current result of a parameter associated to a document:
The rule and current result are displayed in corresponding columns in the properties window. To further analyze the rule:
The model rules tab of specification test mode displays a tree view of the assembly structure and models that will be generated. The tree view is structured in the order the models are listed in the model rules section. Driven alternative models should be placed higher up in the tree than the assembly they will be replaced into.
The tree view shows the master file name of each model and then the new file name of the generated file after the - symbol.
When a Driven Alternative model requires generating it will be listed as (New) against the actual model name in the tree view. The master model it is to replace will be shown with that master name and the new driven alternative name. At this stage it will be classed as (Existing) because the driven alternative is generated prior to the assembly.
For instance the image below shows a Driven Alternative named Forked Clevis Assy 1.63 IN (New) being generated ahead of the Hydraulic Cylinder. In the structure of the Hydraulic Cylinder you will see the sub assembly named Dummy Clevis Assy followed by the name of the driven alternative it will be replaced with - Forked Clevis Assy 1.63 IN (Existing).
All results being driven into the driven parameters are listed in the properties section. To view the results of the parameters being driven into a model:
Follow the steps in the links below to start specification test mode:
To view the rule and current result of a parameter associated to a captured model:
The rule and current result are displayed in corresponding columns in the properties window. To further analyze the rule:
Source: https://docs.driveworkspro.com/Topic/HowToDiagnoseProjectIssuesUsingSpecificationTestMode
There are many ways to make use of sex cam sites. Whether you are looking for action or for a more intimate experience, a live webcam performer can make you feel like you are right there in person. Whether you are shy or outgoing, there is a performer waiting for you. Here are several of the most well-known sites, and how to get the most out of your time with a live webcam performer.
Live cam and video chat sites do not require visitors to turn on their own cameras, which means you can watch free cam material without worrying about being recorded. Good-quality sites also use HTTPS so that the connection is secure. To get the best experience, be sure to use up-to-date software. Here are several of the most popular live webcam chat rooms.
You cannot go wrong with a sex cam girl: these performers are skilled entertainers and are prepared to put on a show. While you are at it, make sure you treat them with respect - they will appreciate it, and that is part of why these sites remain a popular choice.
The performers you see on cam sites are professionals. The models on StripChat know how to put on a show, and the site also has a couples section for partners. Be aware that a great number of its cams are heterosexual.
Despite being an older site, Omegle is still popular and a good place to meet people. Its live cam section features attractive cam girls, it supports several languages, and it offers a free college chat for college students. The free trial is worth checking out: you can find a live cam performer for just $0.99, which makes it an inexpensive way to try out the webcam experience.
You can also stream a pornstar's show on a premium camming website such as Jerkmate. Although a one-on-one session with a pornstar is unlikely, you can watch their shows and choose based on your own interests. Naturally, performers expect you to spend tokens to request specific acts, but that is common on camming websites.
For those who want an even closer look at cam girls, Instagram is a good way to do so. Many models post live clips and make themselves available to chat, and their accounts can give you a sneak peek at what their private sessions are like. This is not only a good way to connect with cam models, but also a quick dose of live content.
Source: https://docs.jagoanhosting.com/top-five-sex-camera-girls-sites/
CustomPaint
How to do custom drawings in a platform-independent way and how to assign renderer other than default and access its low-level API (like DC, Graphics).
DK allows using different renderers. For example, the .NET version is capable of drawing using Graphics (GDI+) or SharpDX. This sample shows how to use the methods called Canvas* (e.g. CanvasDrawText) to draw using the same API independent of platform, operating system, etc. However, it is still possible to call the low-level underlying graphics object, and this sample shows how to do that as well.
This sample illustrates use of:
This sample is available on the following platforms:
Source: https://docs.tatukgis.com/DK11/samples:samples:custompaint
AssociateTargetsWithJob
Requires permission to access the AssociateTargetsWithJob action.
Request Syntax
POST /jobs/jobId/targets?namespaceId=namespaceId HTTP/1.1
Content-type: application/json

{
   "comment": "string",
   "targets": [ "string" ]
}
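For illustration only, a populated request could look like the following; the job ID, namespace ID, comment, and target ARN are placeholder values rather than values defined in this reference:
POST /jobs/example-job-01/targets?namespaceId=example-namespace HTTP/1.1
Content-type: application/json

{
   "comment": "Associate the production device group with this job",
   "targets": [ "arn:aws:iot:us-east-1:123456789012:thinggroup/ProductionDevices" ]
}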
URI Request Parameters
The request uses the following URI parameters.
- jobId
The unique identifier you assigned to this job when it was created.
Length Constraints: Minimum length of 1. Maximum length of 64.
Pattern:
[a-zA-Z0-9_-]+
Required: Yes
- namespaceId
The namespace used to indicate that a job is a customer-managed job.
When you specify a value for this parameter, Amazon IoT Core sends jobs notifications to MQTT topics that contain the value in the following format.
$aws/things/THING_NAME/jobs/JOB_ID/notify-namespace-NAMESPACE_ID/
Note
The namespaceId feature is in public preview.
Pattern:
[a-zA-Z0-9_-]+
Length Constraints: Maximum length of 2048.
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following:
Source: https://docs.amazonaws.cn/en_us/iot/latest/apireference/API_AssociateTargetsWithJob.html
Gain Auto (BCON for MIPI)#
If you want to use Gain Auto and Exposure Auto at the same time, use the Auto Function Profile feature to specify how the effects of both are balanced.
To adjust the gain manually, use the Gain feature.
Using the Feature#
Enabling or Disabling Gain Auto#
To enable or disable the Gain Auto auto function, set the GainAuto parameter to one of the following operating modes:
- Continuous: The camera adjusts the gain continuously while images are being acquired.
- Off: Disables the Gain Auto auto function. The gain remains at the value resulting from the last automatic or manual adjustment.
Info
- On daA2500-60mc cameras, enabling or disabling Gain Auto also enables or disables Exposure Auto.
- When the camera is capturing images continuously, the auto function takes effect with a short delay. The first few images may not be affected by the auto function.
Sample Code#
// Enable Gain Auto by setting the operating mode to Continuous
camera.GainAuto.SetValue(GainAuto_Continuous);
// Enable Gain Auto by setting the operating mode to Continuous
CEnumParameter(nodemap, "GainAuto").SetValue("Continuous");
/* Macro to check for errors */
#define CHECK(errc) if (GENAPI_E_OK != errc) printErrorAndExit(errc)
GENAPIC_RESULT errRes = GENAPI_E_OK; /* Return value of pylon methods */
/* Enable Gain Auto by setting the operating mode to Continuous */
errRes = PylonDeviceFeatureFromString(hdev, "GainAuto", "Continuous");
CHECK(errRes);
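The Off mode described above is set the same way; a minimal sketch based on the samples above (the enum and string values are assumed to follow the same naming as in those samples):
// Disable Gain Auto by setting the operating mode to Off
camera.GainAuto.SetValue(GainAuto_Off);
// Equivalent call via the generic parameter interface
CEnumParameter(nodemap, "GainAuto").SetValue("Off");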
You can also use the pylon Viewer to easily set the parameters.
Source: https://docs.baslerweb.com/embedded-vision/gain-auto