AZ-220: Microsoft Azure IoT Developer
Languages: en ja ko fr es pt-br ru ar-sa zh-cn it de zh-tw id-id
Retirement date:
This exam measures your ability to accomplish the following technical tasks: set up the Azure IoT Hub solution infrastructure; provision and manage devices; implement IoT Edge; process and manage data; monitor, troubleshoot, and optimize IoT solutions; and implement security.
Price based on the country or region in which the exam is proctored.
4 Writing your aggregate method
The aggregate method is the only method that you need to implement. This is the method that gets called to aggregate the resulting DataDDS object. The DataDDS object is stored in the BESDataHandlerInterface instance passed to your method. Here is an example of getting the DataDDS out of the BESDataHandlerInterface.
void NCESGAggregationServer::aggregate( BESDataHandlerInterface &dhi )
{
    if( dhi.action == "das" )
    {
        string err = "DAS is not a valid request type in aggregated datasets" ;
        throw BESInternalError( err, __FILE__, __LINE__ ) ;
    }

    BESResponseObject *resp = dhi.response_handler->get_response_object() ;
    BESDataDDSResponse *bdds = dynamic_cast<BESDataDDSResponse *>(resp) ;
    if( !bdds )
    {
        string err = "response object is not a DataDDS" ;
        throw BESInternalError( err, __FILE__, __LINE__ ) ;
    }

    DataDDS *dds = bdds->get_dds() ;
    if( !dds )
    {
        string err = "dap response object is not a DataDDS" ;
        throw BESInternalError( err, __FILE__, __LINE__ ) ;
    }

    .... your code here .....
Once you have the DataDDS you have all of the data that has been read in and you can perform your aggregation. The DataDDS will be organized in the following manner. For each of the containers defined in the BES request (in our example, c1, c2, c3, and c4) there will be a structure containing the data for that container. So, in our example, you would have:
The result of your aggregation will be a new DataDDS object that will take the place of the one you got out.
View the peer dashboard
The indexer cluster peer dashboard provides detailed information on the status of a single peer node.
For a single view with information on all the peers in a cluster, use the manager node dashboard instead, as described in "View the manager node dashboard".
- Manager location.
Python Yamcs Client
alarms.py¶
from time import sleep

from yamcs.client import YamcsClient


def receive_callbacks():
    """Registers an alarm callback."""

    def callback(alarm_update):
        print("Alarm Update:", alarm_update)

    processor.create_alarm_subscription(callback)


def acknowledge_all():
    """Acknowledges all active alarms."""
    for alarm in processor.list_alarms():
        if not alarm.is_acknowledged:
            processor.acknowledge_alarm(alarm, comment="false alarm")


if __name__ == "__main__":
    client = YamcsClient("localhost:8090")
    processor = client.get_processor(instance="simulator", processor="realtime")

    receive_callbacks()

    sleep(10)
    print("Acknowledging all...")
    acknowledge_all()

    # If a parameter remains out of limits, a new alarm instance is created
    # on the next value update. So you would keep receiving callbacks on
    # the subscription.

    # The subscription is non-blocking. Prevent the main
    # thread from exiting
    while True:
        sleep(10)
Data Integration¶
What you'll build¶
A data service provides a web service interface to access data that is stored in various datasources. The following sections describe how you can use WSO2 Integration Studio to work with data services' artifacts.
Tip
Note that this feature is currently supported in WSO2 Integration Studio for relational datasources and CSV files.
Let's get started!¶
Step 1: Set up the workspace¶
Download the relevant WSO2 Integration Studio based on your operating system. The path to the extracted/installed folder is referred to as
MI_TOOLING_HOME throughout this tutorial.
To demonstrate how data services work, we will use a MySQL database as the datasource. Follow the steps given below to set up a MySQL database:
1. Install the MySQL server.
2. Download the JDBC driver for MySQL from here. You will need this when you configure the MySQL server with the Micro Integrator.
3. Create a database named Employees.
4. Create a user and grant the user access to the database.
CREATE DATABASE Employees;
CREATE USER 'user'@'localhost' IDENTIFIED BY 'password';
GRANT ALL PRIVILEGES ON Employees.* TO 'user'@'localhost';
Create the Employees table inside the Employees database and insert a sample row:

USE Employees;

CREATE TABLE Employees (
    EmployeeNumber int(11) NOT NULL,
    FirstName varchar(255) NOT NULL,
    LastName varchar(255) DEFAULT NULL,
    Email varchar(255) DEFAULT NULL,
    Salary varchar(255)
);

INSERT INTO Employees (EmployeeNumber, FirstName, LastName, Email, Salary)
    values (3, "Edgar", "Code", "[email protected]", 100000);
Step 2: Creating a data service¶
Follow the steps given below to create a new data service.
Creating a Maven Multi Module project¶
All the data service artifacts you create are stored in modules grouped under a Maven Multi Module project. Follow the steps given below to create the project:
Open WSO2 Integration Studio and click New Maven Multi Module Project in the Getting Started tab as shown below.
In the Maven Modules Creation dialog box that opens, give a name (artifactId) for the project.
- If required, change the Maven information about the project.
- Click Finish. The new project will be listed in the project explorer.
Creating a data service module¶
All the data services' artifacts that you create should be stored in a Data Service Module. Follow the steps given below to create a module:
- Right click on the created Maven Multi Module Project and go to New -> Data Service Configs.
- In the New Data Service Configs dialog box that opens, give a name for the config module and click Next.
- If required, change the Maven information about the config module.
- Click Finish. The new module will be listed in the project explorer.
Creating the data service¶
Follow the steps given below to create the data service file:
- Select the already-created Data Service Config module in the project explorer, right-click and go to New -> Data Service.
The New Data Service window will open as shown below.
- To start creating a data service from scratch, select Create New Data Service and click Next to go to the next page.
Enter a name for the data service and click Finish:
A data service file (DBS file) will now be created in your data service module as shown below.
Creating the datasource connection¶
- Click Data Sources to expand the section.
- Click Add New to open the Create Datasource page.
Enter the datasource connection details given below.
Click Test Connection to expand the section.
Click the Test Connection button to verify the connectivity between the MySQL datasource and the data service.
- Save the data service.
Creating a query¶
Let's write an SQL query to GET data from the MySQL datasource that you configured in the previous step:
- Click Queries to expand the section.
- Click Add New to open the Add Query page.
Enter the following query details:
Click Input Mappings to expand the section.
Click Generate to generate input mappings automatically.
Tip
Alternatively, you can manually add the mappings:
1. Click Add New to open the Add Input Mapping page.
2. Enter the following input element details.
Save the input mapping.
- Click Result (Output Mappings) to expand the section.
Enter the following value to group the output mapping:
Click Generate to generate output mappings automatically.
Tip
Alternatively, you can manually add the mappings:
1. Click Add New to open the Add Output Mapping page.
2. Enter the following output element details.
3. Save the element.
4. Follow the same steps to create the following output elements:

| Datasource Type | Output Field Name | Datasource Column Name | Schema Type |
|-----------------|-------------------|------------------------|-------------|
| column          | FirstName         | FirstName              | string      |
| column          | LastName          | LastName               | string      |
| column          | Email             | Email                  | string      |
Click Save to save the query.
Creating a resource to invoke the query¶
Now, let's create a REST resource that can be used to invoke the query.
Click Resources to expand the section.
Click Add New to open the Create Resource page.
Enter the following resource details.
Save the resource.
Tip
Alternatively, you can generate a data service from a datasource. For more information, refer Generate Data Services.
Step 3: Package the artifacts¶
Create a new composite exporter module
- Right-click the Maven Multi Module Project and go to New -> Composite Exporter.
- In the dialog box that opens, select the data service file, and click Finish.
Package the artifacts in your composite exporter to be able to deploy the artifacts in the server.
- Open the
pom.xml file in the composite application.
- Ensure that your data service file is selected in the POM file.
- Save the file.
Step 4: Configure the Micro Integrator server¶
We will use the embedded Micro Integrator of WSO2 Integration Studio to run this solution.
To add the MySQL database driver to the server:
- Click the Embedded Micro Integrator Configuration icon on the upper menu to open the dialog box.
- Click the add icon to add the MySQL driver JAR (see Setting up the Workspace) to the /lib directory of the embedded Micro Integrator.
If the driver class does not exist in the relevant directory, you will get an exception such as
Cannot load JDBC driver class com.mysql.jdbc.Driver when the Micro Integrator starts.
Step 6: Testing the data service¶
Let's test the use case by sending a simple client request that invokes the service.
Send the client request¶
Let's send an HTTP GET request to the resource to retrieve the employee data.
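For example, if the data service is named RESTDataService, the resource path is employee/{EmployeeNumber}, and the embedded Micro Integrator is listening on port 8290, the request could look like the following. All of these values are assumptions; substitute the data service name, port, and resource path you configured.

# Hypothetical service name and resource path; adjust to your configuration.
curl -X GET http://localhost:8290/services/RESTDataService/employee/3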
Analyze the response¶
You will see the following response received to your HTTP Client:
<Employees xmlns="">
    <EmployeeNumber>3</EmployeeNumber>
    <FirstName>Edgar</FirstName>
    <LastName>Code</LastName>
    <Email>[email protected]</Email>
</Employees>
To add tyres onto an existing purchase invoice, simply click the New button just above the invoice item field. This will create a new line for you to add another item. First, specify the quantity in the box to the left, then add the tyre onto the invoice by referencing the brand name followed by the stock code, and finally input the price you would like to charge. Once that is all done, just click the save icon to the right and it will be added onto the invoice.
Setting Up a Plugin or Theme from Bitbucket
Installing a plugin or theme from Bitbucket is super easy with WP Pusher. In this guide you will learn how to set up Bitbucket and how to install your first plugin or theme.
Setup
If you want to install and manage a plugin or theme that is in a private repository on Bitbucket, you will need to obtain an access token for WP Pusher. You also need a token to enable Push-to-Deploy, no matter if your repository is public or private. In order to obtain a token, all you need to do is navigate to the Bitbucket section of the WP Pusher settings screen, click "Obtain a Bitbucket token" and copy & paste the token into the text field. After clicking the "Save Bitbucket token" button, you can install a plugin or theme from a private repository on Bitbucket.
Install a plugin or theme
You find the plugin or theme installation screen in WP admin under "WP Pusher" -> "Install Plugin/Theme". While installing the plugin or theme, you have the following options:
- Repository host: Where the plugin/theme is hosted.
- Plugin/Theme repository: The repository handle, which is "username/repository-name".
- Repository branch: (Optional) Specify which branch you want to install your plugin/theme from. Defaults to master if left blank.
- Repository subdirectory: (Optional) If your plugin/theme lives in a subdirectory of the repository, you can specify the path here.
- Repository is private: Check this option if your plugin/theme is in a private repository. Requires a license and a Bitbucket token (see above).
- Push-to-Deploy: Check this option if you want WP Pusher to automatically update the plugin/theme every time you push new code. For Bitbucket, this is configured automatically by WP Pusher.
- Link installed plugin/theme: If the plugin/theme is already installed and you just want to connect it to WP Pusher, check this option. It is very important that the folder name of the plugin/theme on your web server is the same as the repository name.
Need any help?
If you have any questions about WP Pusher, Git or WordPress, our email is [email protected]. Don't hesitate to shoot us a message! You can also click the little ❤️ in the corner of this page.
GeoIP API
The API is *very* much in alpha, so let me know if you have any problems or requests.
JSON
Data will return in JSON by default.
If you pass a “callback” param, the data will be passed to a function using the callback value for the function name.
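For example, a request with a callback parameter might look like the following. This is illustrative only: the callback name showLocation is arbitrary and the response body is abbreviated.

curl "geoip.prototypeapp.com/api/locate?callback=showLocation"

showLocation({ "ip": "69.163.181.149", "location": { ... } })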
Using cURL
You can easily use cURL in the command line to test out responses.
Example:
curl geoip.prototypeapp.com/api/locate
API Methods
Many of these methods will return an HTTP 204 No Content on failure.
Locate Method
geoip.prototypeapp.com/api/locate
This method returns JSON (by default) or XML using the
format parameter.
If you don't pass it an IP address, it will try to use the referrer's address.
The response should look like:
{
  "ip": "69.163.181.149",
  "timestamp": 1263028293112,
  "location": {
    "coords": {
      "latitude": "33.9269",
      "longitude": "-117.861"
    },
    "address": {
      "city": "Brea",
      "country": "United States",
      "country_code": "US"
    },
    "gmtOffset": "-8",
    "dstOffset": "-7"
  }
}
New Relic Infrastructure's Kubernetes cluster explorer provides a multi-dimensional representation of a Kubernetes cluster that lets you zoom into your namespaces, deployments, nodes, pods, containers, and applications. With the cluster explorer you will be able to easily retrieve the data and metadata of these elements, and understand how they are related.
Requirements
The Kubernetes cluster explorer is available if you have the Kubernetes monitoring integration; there’s nothing additional to deploy or to configure.
Access the cluster explorer
You can access the cluster explorer:
- From the Kubernetes tab in your Infrastructure account.
From the cluster explorer launcher in New Relic One.
To view the cluster explorer in New Relic One, you need version 1.8 or higher of the Kubernetes monitoring integration.
Use your cluster
At the top left corner of the Kubernetes cluster explorer, you can:
- Select the cluster you want to explore.
- Refine the elements to be displayed by filtering by namespace or deployment.
- Select specific nodes.
The cluster explorer has two main parts:
- On top, a panel visually displays the status of up to 24 nodes of the cluster.
- A table provides a complete list of all the selected nodes.
Please note that onscreen data does not refresh automatically. A message on the top right corner informs of the last time it was updated.
Cluster explorer
The cluster explorer shows the nodes that have the most issues in a series of four concentric rings:
- The outer ring shows the nodes of the cluster, with each node displaying CPU, memory, and storage performance metrics.
- The next inner-most ring displays the distribution and status of the non-alerting pods associated with that node.
- The third inner-most ring displays the pods on alert and that may have health issues even if they are still running.
- Finally, the inner-most ring displays pods that are pending or that Kubernetes is unable to run.
You can select any pod to see its details, such as namespace, deployment, its containers, alert status, CPU usage, memory usage, and more.
Cluster explorer node table
The cluster explorer node table displays all the nodes of the selected cluster/namespace/deployments, and can be sorted according to node name, node status, pod, pod status, container, CPU% vs. Limit and MEM% vs. Limit.
Released on:
Friday, March 10, 2017 - 12:54
Improvements
- Adds new API
+[NewRelic recordCustomEvent:(NSString*)eventType withAttributes:(NSDictionary*)attributes]
This method replaces
+[NewRelic recordEvent:(NSString*)name withAttributes:(NSDictionary*)attributes] which is now deprecated, with the intention of removal in the future. The new API creates a new event with an event type specified by the
eventType parameter, whereas the deprecated method creates an event with the Mobile event type and an attribute with the name 'name' and the value of the
name parameter. This change satisfies customer requests for:
- Improved Insights query performance
- Defining custom event types
- Finding custom events in Insights more easily
- Flexibility to define data retention per custom event type
- Adds helper method
+setUserId: to NewRelic.h, which sets a session attribute,
userId, with the passed value. This method is effectively the same as
[NewRelic setAttribute:@"userId" value:<username>];
Configuration¶
The configuration settings can be made through the settings of a plugin. Because the extension uses Extbase, every setting can also be made using TypoScript, but remember that non-empty plugin settings always override the ones from TypoScript. For boolean settings, you have an option in the extension manager to choose select boxes instead of checkboxes, which provides a "from TypoScript" option for boolean settings. Further information can be found in the chapter Administration → Extension configuration.
The basic configuration needed to use this extension is already done within the default template. If you want to change certain settings, e.g. on more than one page, you can change them in your template.
With New Relic Infrastructure integrations, you can monitor the performance of popular services, including AWS, Azure, MySQL, and Cassandra. You can also use open-source integrations, such as Collectd, Memcached, and more; or make your own custom integrations using the Integrations SDK.
This document provides an overview of integration data types and explains how to find and use that data.
Explore your integration data
The best way to understand your integration data and see what you can do with it is to activate an integration and then start exploring the data in New Relic Infrastructure and Insights. Some recommendations for exploring:
- View dashboards: Go to infrastructure.newrelic.com > Third-party services, then select an integration's Dashboard link. See the dashboard documentation for details on functionality.
- Explore data in Insights: Go to infrastructure.newrelic.com > Third-party services, then select an integration's Explore data link. (For more information, see Use integration data in Insights.)
- Create alert conditions: Go to infrastructure.newrelic.com > Third-party services, then select an integration's Create alert link. (Some integrations do not monitor numeric data or allow alert conditions to be created within Infrastructure. For these integrations, you can create NRQL alert conditions with New Relic Alerts.)
- Learn more about a specific integration's data: Browse the documentation for cloud integrations and on-host integrations.
Custom integrations built with the SDK do not appear on the Third-party services page or have pre-built metric dashboards. For more information, see Find and use custom integration data.
Types of integration data
New Relic makes two categories of integrations: cloud integrations like AWS and Azure, and on-host integrations like MySQL and NGINX. An integration can generate four types of data:
- Metrics: Numeric measurement data. Metric data is most of the data that appears in pre-built integration dashboard charts. Metric data can also be found and used in Insights.
- Inventory: Information about the state and configuration of a service or host. Inventory data appears in the Infrastructure Inventory page. For New Relic-built integrations, inventory data also appears in integration dashboards. Changes in inventory generate event data.
- Events: Events represent important activity on a system. Most events represent changes in inventory.
- Attributes: Some integrations will generate other non-metric attributes (key-value pairs) that are available in New Relic Insights.
Metric data
Metric data is numeric measurement data; for example:
- A count of exceptions
- The rate of bytes processed by a service
- The percentage CPU being used.
You can view integration metric data in New Relic Infrastructure and Insights:
All integration data, including metric data, is found using Insights' Event data explorer, and never the Metric data explorer. This is due to the way Insights categorizes data. For more information, see Data collection.
Event data
In Infrastructure, events represent important activity on a host/system. Examples of event data: an admin logging in; a package install or uninstall; a service starting; a table being created. Most events will represent changes to inventory data.
You can view integration event data in New Relic Infrastructure and Insights:
Inventory data
Inventory data is information about the status or configuration of a service or host. Examples of inventory data: configuration settings; the name of the host the service is on; the AWS region; the port being used.
You can view integration inventory data in New Relic Infrastructure and Insights:
For more on the structure of inventory data and how it appears in the Infrastructure UI, see Inventory UI page.
Other attributes
Attributes (key-value pairs) refer to data attached to an event available in Insights. Metrics, for example, are a type of attribute with a numeric value that can be charted. Also, some inventory data is reported as attributes. Some integrations will report attributes that are not considered metrics or inventory.
On-host integration data will also have standard Infrastructure agent attributes attached to it. This is because the data is reported by the Infrastructure agent.
Create alert conditions
To create an alert condition for integration data in Infrastructure, Go to infrastructure.newrelic.com > Third-party services > (select an integration), then select any available alert option. For more information, see Infrastructure and Alerts.
For on-host integrations, you can also create alert conditions using NRQL queries. (NRQL alerts are not supported or recommended for cloud integrations, due to issues related to data latency.)
Handlers
Reference documentation
- What is a Sensu event handler?
- Pipe handlers
- TCP/UDP handlers
- Transport handlers
- Handler sets
- Handler configuration
What is a Sensu event handler?
Sensu event handlers are actions executed by the Sensu server on events, such as sending an email alert, creating or resolving an incident (e.g. in PagerDuty, ServiceNow, etc), or storing metrics in a time-series database (e.g. Graphite).
Handler types
There are several types of handlers. The most common handler type is the
pipe
handler, which works very similarly to how checks work, enabling Sensu to
interact with almost any computer program via standard streams.
- Pipe handlers. Pipe handlers pipe event data into arbitrary commands via
STDIN.
- TCP/UDP handlers. TCP and UDP handlers send event data to a remote socket (e.g. external API).
- Transport handlers. Transport handlers publish event data to the Sensu transport.
- Handler sets. Handler sets (also called “set handlers”) are used to group event handlers, making it easy to manage groups of actions that should be executed for certain types of events.
The default handler
Sensu expects all events to have a corresponding handler. Event handler(s)
may be configured in check definitions, however if no
handler or
handlers have been configured, Sensu will attempt to handle the event using a
handler named
default. The
default handler is only a reference
(i.e. Sensu does not provide a built-in
default handler), so if no handler
definition exists for a handler named
default, Sensu will log an error
indicating that the event was not handled because a
default handler definition
does not exist. To use one or more existing handlers as the
default, you can
create a Set handler called
default and include the existing handler(s)
in the set.
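For example, assuming you already have a handler named email defined (the same name used in the handler set example later on this page), a minimal default set handler would look like this:

{
  "handlers": {
    "default": {
      "type": "set",
      "handlers": [
        "email"
      ]
    }
  }
}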
Keepalive
Sensu Client
keepalives are the heartbeat mechanism used by Sensu to ensure
that all registered Sensu clients are still operational and able to reach the
Sensu transport. By default
keepalive events are handled by the default
handler.
The
keepalive scope can be configured to use specific handlers on the client,
as well as overriding the default threshold values.
For example:
{
  "client": {
    "name": "i-424242",
    "...": "...",
    "keepalive": {
      "handlers": [
        "pagerduty",
        "email"
      ],
      "thresholds": {
        "warning": 40,
        "critical": 60
      }
    }
  }
}
Pipe handlers
Pipe handlers are external commands that can consume event data via STDIN.
Example pipe handler definition
{
  "handlers": {
    "example_pipe_handler": {
      "type": "pipe",
      "command": "do_something_awesome.rb -o options"
    }
  }
}
Pipe handler commands
What is a pipe handler command?
Pipe handler definitions include a
command attribute which are literally
executable commands which will be executed on a Sensu server as the
sensu
user.
Pipe handler command arguments
Pipe handler
command attributes may include command line arguments for
controlling the behavior of the
command executable. Most Sensu handler
plugins provide support for command line arguments for reusability.
How and where are pipe handler commands executed?
As mentioned above, all pipe handler commands are executed on a Sensu server as the sensu user.
TCP/UDP handlers
TCP and UDP handlers enable Sensu to forward event data to arbitrary TCP or UDP sockets for external services to consume (e.g. third-party APIs).
Example TCP handler definition
The following example TCP handler definition will forward event data to a
TCP socket (i.e.
10.0.1.99:4444) and will
timeout if an acknowledgement
(
ACK) is not received within 30 seconds.
{
  "handlers": {
    "example_tcp_handler": {
      "type": "tcp",
      "timeout": 30,
      "socket": {
        "host": "10.0.1.99",
        "port": 4444
      }
    }
  }
}
The following example UDP handler definition will forward event data to a
UDP socket (i.e.
10.0.1.99:4444).
{
  "handlers": {
    "example_udp_handler": {
      "type": "udp",
      "socket": {
        "host": "10.0.1.99",
        "port": 4444
      }
    }
  }
}
Transport handlers
Transport handlers enable Sensu to publish event data to named queues on the Sensu transport for external services to consume.
Example transport handler definition
The following example transport handler definition will publish event data
to the Sensu transport on a pipe (e.g. a “queue” or “channel”, etc) named
example_handler_queue. One or more instances of an external process or
third-party application would need to subscribe to the named pipe to process the
events.
{
  "handlers": {
    "example_transport_handler": {
      "type": "transport",
      "pipe": {
        "type": "direct",
        "name": "example_handler_queue"
      }
    }
  }
}
Handler sets
Handler set definitions allow groups of handlers (i.e. individual collections of actions to take on event data) to be referenced via a single named handler set.
NOTE: Attributes defined on handler sets do not apply to the handlers they
include. For example,
filter,
filters, and
mutator attributes defined
in a handler set will have no effect.
Example handler set definition
The following example handler set definition will execute three handlers (i.e. email, slack, and pagerduty) for every event.
{
  "handlers": {
    "notify_all_the_things": {
      "type": "set",
      "handlers": [
        "email",
        "slack",
        "pagerduty"
      ]
    }
  }
}
Handler configuration
Example handler definition
The following is an example Sensu handler definition, a JSON configuration file
located at
/etc/sensu/conf.d/mail_handler.json. This handler definition uses
the
mailx unix command, to email the event data to
[email protected], with
the email subject
sensu event. The handler is named mail.
{
  "handlers": {
    "mail": {
      "type": "pipe",
      "command": "mailx -s 'sensu event' [email protected]"
    }
  }
}
Handler definition specification
Handler name(s)
Each handler definition has a unique handler name, used for the definition key.
Every handler definition is within the
"handlers": {} configuration
scope.
- A unique string used to name/identify the handler
- Cannot contain special characters or spaces
- Validated with Ruby regex
/^[\w\.-]+$/.match("handler-name")
HANDLER attributes
The following attributes are configured within the
{"handlers": { "HANDLER": {}
} } configuration scope (where
HANDLER is a valid handler name).
socket attributes
The following attributes are configured within the
{"handlers": { "HANDLER": {
"socket": {} } } } configuration scope (where
HANDLER is a valid
handler name).
NOTE:
socket attributes are only supported for TCP/UDP handlers (i.e.
handlers configured with
"type": "tcp" or
"type": "udp").
EXAMPLE
{
  "handlers": {
    "example_handler": {
      "type": "tcp",
      "socket": {
        "host": "10.0.5.100",
        "port": 8000
      }
    }
  }
}
ATTRIBUTES
pipe attributes
The following attributes are configured within the
{"handlers": { "HANDLER": {
"pipe": {} } } } configuration scope (where
HANDLER is a valid handler
name).
NOTE:
pipe attributes are only supported for Transport handlers (i.e.
handlers configured with
"type": "transport").
EXAMPLE
{
  "handlers": {
    "example_handler": {
      "type": "transport",
      "pipe": {
        "type": "topic",
        "name": "example_transport_handler"
      }
    }
  }
}
ATTRIBUTES
NOTE: types
direct,
fanout and
topic are supported by the default
RabbitMQ transport. Redis and other transports may only implement a subset of
these.
Job product batch size¶
A product catalog is unique to one's needs. It is configured with a custom number of attributes, families, locales, channels, etc. Those different combinations may affect the performance of batch jobs (CSV imports/exports, mass edits, indexation, etc.).
Indeed, the size of the products defines how much RAM they take to be processed. During product batch jobs, we regularly flush the processed products into the database to avoid too much memory consumption.
But if we flush them too often, the transaction time can slow the process: for each iteration, we clear the Doctrine cache (to free some memory), index the products (so more communication between PHP and Elasticsearch) and update the MySQL database. On the other hand, if we flush too sporadically, RAM consumption can increase to a point where PHP's garbage collector is called too many times and slows down the batch jobs.
That's why there is no universal answer for the pim_job_product_batch_size parameter, and if you encounter performance issues with your batch jobs, you may need to tweak it.
To do so, try to increase it if your batch job seems unexpectedly slow and your products have a reasonable size. Or try to reduce it if the batch processes are too slow and your products have a lot of product values.
Crate partial
Partial is similar to Option where
Value replaces
Some and
Nothing replaces
None.
Similarly to
Value,
Fake contains a value of the right type that can be further used (e.g. in
map or
and_then) but this value is a dummy value used to be able to compute the rest and detect more errors. For any method, we have the ordering:
Value >
Fake >
Nothing; a Fake value will never be a Value again.
Use case: When compiling, an error in one function must be reported but should not prevent the compilation of a second function to detect more errors in one run. This intermediate state is represented by
Fake.
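As a rough illustration of these semantics, here is a small self-contained sketch. It re-implements the idea with a hand-rolled enum rather than using this crate's actual API, so the names and method signatures are illustrative only:

// Conceptual re-implementation for illustration; not the partial crate's API.
#[derive(Debug)]
enum Partial<T> {
    Value(T),
    Fake(T),
    Nothing,
}

impl<T> Partial<T> {
    // Mapping preserves the "level": Value stays Value, Fake stays Fake.
    fn map<U, F: FnOnce(T) -> U>(self, f: F) -> Partial<U> {
        match self {
            Partial::Value(v) => Partial::Value(f(v)),
            Partial::Fake(v) => Partial::Fake(f(v)),
            Partial::Nothing => Partial::Nothing,
        }
    }

    // Chaining caps the result at the current level: a Fake input can never
    // produce a Value output (Value > Fake > Nothing).
    fn and_then<U, F: FnOnce(T) -> Partial<U>>(self, f: F) -> Partial<U> {
        match self {
            Partial::Value(v) => f(v),
            Partial::Fake(v) => match f(v) {
                Partial::Value(u) | Partial::Fake(u) => Partial::Fake(u),
                Partial::Nothing => Partial::Nothing,
            },
            Partial::Nothing => Partial::Nothing,
        }
    }
}

fn main() {
    // An error was already reported for this type, but we continue with a
    // dummy value so later checks can still run and report more errors.
    let ty: Partial<&str> = Partial::Fake("i32");
    let checked = ty.and_then(|t| Partial::Value(format!("checked {}", t)));
    // Still Fake: the dummy value flowed through, but it never becomes Value.
    println!("{:?}", checked);

    // Nothing stays Nothing through any operation.
    let missing: Partial<i32> = Partial::Nothing;
    println!("{:?}", missing.map(|x| x * 2));
}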
A Local Rack runs apps on your computer. This makes it easy to build and run apps for development purposes.
Installing the Rack also starts a Convox Router on your computer. The router provides local DNS for the
*.convox hostname, and offers an HTTPS endpoint for every app. The result is development-friendly *.convox hostnames for apps running on your laptop.
To eliminate a browser SSL security warning, you need to install the Convox Certificate Authority (CA) that the router uses. This certificate is generated at install time, so the private key is unique to your computer. You can load this into your keychain with:
$ sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain /etc/convox/ca.crt
You can verify or revoke this certificate by searching for "convox" in the OS X Keychain.
Media Room Section Landing Page
Description: This section describes the components in the Media Room section landing page.
Objective: At the end of this section, you should be able to:
- Identify the content type(s) used to create the component
- Identify the method (modules or techniques) used to create the component
The media room is a Panel page, created using the Panels module. It is made up of a small bit of introduction text and two view blocks: a list of the five most recent press release nodes, and a display of up to six media galleries (created with the Media gallery content type) made up of nodes with photos (created using the Photo content type).
Database Replication and Clustering
In an advanced application deployment, individual servers may have been configured to connect to masters and slaves, with your application talking directly to the master and/or slaves within the database infrastructure to handle the scalability offered by replication. If your application writes to the master and reads from one or more slaves, changes to this architecture and structure are not handled automatically: if the master fails, application servers must be manually updated to direct their queries to an alternative host.
When deploying Continuent Tungsten, the Tungsten Connector routes queries to the master and slave hosts within a dataservice and reacts to changes in the dataservice, including HA events.
As an HA solution, the Connector redirects queries to the master and slaves within a dataservice, but with application-driven master/slave selection.
When a datasource fails while your application is deployed with Continuent Tungsten (Figure 6.5, "Tungsten Connector during a failed datasource"), the failed slave is bypassed and connections continue over an existing channel.
The Connector will re-establish a channel to an available Manager if the Manager it is connected to is stopped or lost.
avg
avg — aggregate function to determine the mean average of a field or column.
DESCRIPTION
This function computes a value for each record returned by the query predicate and returns the mean average of the specified values.
The argument
expr must include the name of a column in a LiveView table, where that column's data type is int, long, double, or timestamp.
The returned value is of the same data type.
EXAMPLE
This example shows how to use dynamic aggregation to find the average for values in a column. In the Hello LiveView sample, create a query that finds the average of the lastSoldPrice field from the ItemsSales table, grouping by Item. Follow these steps:
In LiveView Desktop connected to a server running the Hello LiveView sample, select the ItemsSales table from the Tables pane of the LiveView Tables view.
In the Select field, enter:
Item, avg(lastSoldPrice) AS avgPrice
In the Query field, enter the following:
group by Item
Run the query.
The query results open in a grid view. LiveView Server recalculates the average whenever the rows returned by the query predicate change.
You can deploy AX 2012 R3 on Azure. When you do, you may realize the following benefits:
- Reduce costs Since you don’t have to build out or manage infrastructure with Azure, IT costs may be greatly reduced.
- Save time An on-premises AX 2012 R3 environment may take weeks to plan, acquire necessary hardware, and deploy. By using the Cloud-hosted environments tool in Lifecycle Services, you can deploy an AX 2012 R3 environment on Azure in hours.
- Gain flexibility The cloud enables you to easily scale up (or scale down) to meet the changing needs of your business.
Note
Dynamics AX 2012 R3 can also be deployed on-premises. For details, see the topic Install Microsoft Dynamics AX 2012.
The Azure services model
Azure offers three types of services: Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS).
When you deploy AX 2012 R3 on Azure, you will be using the IaaS offering. This means that Azure provides the virtual machines, storage, and networking capabilities. You must manage and secure the operating systems, applications, and data installed on the virtual machines.
Architecture of AX 2012 R3 on Azure
When you use the Cloud-hosted environments tool in Lifecycle Services, you select the type of environment to deploy, such as a demo or development/test environment. Based on your selection, the Cloud-hosted environments tool provisions the appropriate number of virtual machines on Azure. These virtual machines have AX 2012 R3 components—and all of their prerequisites—already installed on them. For example, if you deploy an AX 2012 R3 test environment, the architecture looks like this:
You can deploy the following types of AX 2012 R3 environments on Azure with the Cloud-hosted environments tool:
Demo environment
Dev/test environment
High availability environment
For more information about the virtual machines, and the software installed on each virtual machine in these environments, see Plan your Microsoft Dynamics AX 2012 R3 deployment on Azure.
The process for deploying AX 2012 R3 on Azure
The process for deploying AX 2012 R3 on Azure is complex and should be completed by system implementers who have experience with:
- Licensing requirements You’ll need to provide license information for the software included in the AX 2012 R3 environment. This documentation will point you to resources to help you complete this task, but will not provide a step-by-step procedure for completing this task.
- Networks and domains To enable users in your corporate network to easily access the virtual machines on Azure, you’ll need to create a site-to-site VPN connection. This documentation will point you to resources to help you create the VPN connection, but will not provide a step-by-step procedure for completing this task.
To deploy AX 2012 R3 on Azure, see the articles that are listed in the following table.
The TIBCO StreamBase® Adapter for 360T Supersonic TEX allows a StreamBase application to connect to the 360T SuperSonic TEX trading infrastructure and to exchange FIX messages with it.
Because this adapter uses the FIX protocol to communicate with the 360T SuperSonic TEX infrastructure, its user-visible functionality is identical to that of the StreamBase FIX adapter. See the FIX Adapter page for primary instructions on configuring and using the 360T SuperSonic TEX.
This sample,
firstapp.sbapp, demonstrates a StreamBase application with one notable characteristic: it is easy to create. You may have already created
it yourself, if you performed the tutorial in the Getting Started Guide.
The application is modest in functionality, but building it teaches you some important fundamentals including:
The general design process of defining input streams, schemas, operators, and output streams
Adding and connecting components in an EventFlow
Typechecking operators to ensure that streaming data can flow through the application without errors
Features of the SB Authoring and SB Test/Debug perspectives
Running the application in StreamBase Studio
Creating a feed simulator to enqueue test data automatically to the running application
In StreamBase Studio, import this sample with the following steps:
From the top menu, select the option to import samples.
Type fir to narrow the list of options.
Select firstapp from the Applications category.
Click OK.
StreamBase Studio creates a project for the sample.
To start the application, click the Run button in StreamBase Studio.
When done, press F9 or click the
Stop Running Application button.
The server is ready to receive input. See the following sections to manually enqueue data into this sample. For additional enqueuing instructions, see the Getting Started Guide.
To enqueue data manually in Studio:
Open the Manual Input view.
In the Application Output view, make sure that the Output stream option is set to
All streams.
Enter
IBM in the symbol field, and
10000 in the quantity field.
Click Send Data to send this tuple to the application.
Observe the output in the Application Output view:
Because the quantity satisfies the Filter predicate, your tuple is passed through on the BigTrades stream.
Enter
100 in the quantity field and click Send Data again.
Observe that this time, your output appears in the AllTheRest output stream.
Why? Because the quantity is below the predicate's threshold of 10000.
Enter different values above and below the threshold, and see how your application responds. Try entering a nonvalid quantity like
alpha.
To enqueue test data using any of the three feed simulations:
Open the Feed Simulations view.
In the Feed Simulations view, select a feed simulation.
Click Run.
The Application Input view displays generated tuples enqueued from your Feed Simulation. At the same time, the Application Output view begins displaying tuples on the two output streams.
If you chose
firstapp-enum.sbfs, let it run for five or ten seconds, then click Stop (otherwise it runs continuously). The other two feed simulators run to completion, but feel free to stop them when you have enough data.
Note that stopping the feed simulation does not stop the application.
Observe the results in the Application Input and Application Output views. (If necessary, resize the views so that you can see their contents clearly.)
Tip
Click a row to display its field summary below the list, as shown in the next figure.
You should see trade values that are both above and below the threshold of 10000 that was set in your Filter operator.
When done, press F9 or click the
Stop Running Application button.
In addition to the application, the installed First Sample includes two additional feed simulations. Thus, there are three simulations in total, each demonstrating a different way of modeling test data:
- firstapp-enum.sbfs
Also featured in the tutorial, generates values from a list of symbols and quantities, in random order. The enumeration also specifies the weights of the quantities.
- firstapp-trace1.sbfs
Uses an external comma-separated value data file,
firstapp-trace1.csv, to explicitly list the order and values of tuples.
- firstapp-trace2.sbfs
Uses values specified in another external comma-separated value data file,
firstapp-trace2.csv, and also uses relative timestamps to control the rate of enqueuing.
See Default Installation Directories for the default location of
studio-workspace on your system.
Currently, Maven supports additive mapping of mojos to lifecycle phases using four basic sources:
i. This lifecycle phase mapping is defined in the component descriptor (components.xml) for the artifact that defines a particular packaging. In some cases, this artifact is in the core of Maven (in the case of 'jar' packaging, for instance). In other cases, the lifecycle phase mapping may be specified by a build extension, or even a plugin that has <extensions/> enabled (set to true). In any case, this lifecycle mapping specifies the skeletal set of mojo-to-phase mappings that defines the basic build process for that type of project.
ii. These are simply the additional mojo bindings specified in ancestor POMs that have <inherit/> set to true (which is the default setting if nothing is specified in most cases). For each level of inheritance, the mojo bindings that are added are appended to the list of bindings already in that particular phase.
iii. When the current POM is built, all mojos bound in the POM's ancestry are appended to the base lifecycle mapping given by the project's packaging. After this happens, any mojo bindings specified in the current POM are appended to this cumulative list of phase bindings. This means that the current POM's mojo bindings will always be inserted into the build process after the bindings given by its packaging and ancestry.
iv. After a particular POM's inheritance effects are calculated, active profiles are injected. When a profile is activated, its modifications are added to the POM in which it was declared. Only after all active profiles at that level are injected is that POM eligible for use in inheritance; the profile's mojo bindings are therefore appended after those already present in the base POM (not to mention ancestry and packaging, as described in (iii.) above) in which it was declared.
Since the lifecycle map is additive via inheritance and profile injection, with a skeletal base that's defined in a binary artifact, gaining visibility to the complete build plan is nearly impossible. This means that in complex inheritance hierarchies with profiles involved, it can be nearly impossible to understand what steps happen in which phase, much less know how best to modify this plan with a new mojo in the current POM.
Additionally, with a large inheritance hierarchy, it's highly likely that the lifecycle mapping will wind up with multiple mojos bound into some phase or other. This is not a problem in itself, particularly if the mojos don't depend on one another's actions. However, when the project POM (the leaf of the inheritance hierarchy) tries to replace a mojo that is listed ahead of some other binding in a single phase which depends on the mojo, it quickly becomes obvious that this is an impossible task. Suppressing that first mojo can be done via a custom parameter, and the project POM can add a new mojo binding to the phase to substitute for the skipped mojo which provides compatible functionality within that build. However, this new mojo binding cannot REPLACE the original (first) mojo binding; the second original binding (that depends on the actions of the first mojo) will ALWAYS run ahead of the new binding, and fail because there is now a step out of order in the build process.
Traditionally, Maven has solved this problem by binding the new mojo to an earlier lifecycle phase. If a suitable phase didn't exist, new phases were introduced in later releases to address the issue. This led to a proliferation of 'pre-' and 'post-' phases, with each addressing a problem that had no good solution until the next release of Maven.
This approach to solving the ordering problem assumes that there are a finite and calculable number of steps (mojo bindings) in any given build process, such that each step could occupy its own lifecycle phase in the lifecycle for that build. However, as the number of attached artifacts increases to accommodate new types of metadata, reporting, or advanced build capabilities (such as providing buildable project-source or patch artifacts), so does the number of new mojo phase bindings to resolve, produce, package, install, and deploy these attachments. If a project packaging has a relatively high utilization of the lifecycle phases by default, and is used in a large development environment with a deep inheritance hierarchy (even 2-3 levels can be enough to display it), the lifecycle phase utilization can easily become saturated, resulting in multiple mojo bindings in some phases. Inheritance, profile injection, and the ability to specify arbitrary lifecycle mappings for custom packaging types means that it's simply not possible to calculate how many phases will be enough.
Additionally, for each new phase created, the existing phases lose a little of their meaning as verbs in the build process. When you just have 'resources', 'compile', 'package', etc. it's easy to understand the types of steps that might execute in each. However, what steps would you expect to take place in 'pre-package' or 'post-integration-test'? What about 'pre-resources'? The meaning of these new phases would be a little more watered down than the original list. As we add more and more phases, the meaning of each phase becomes a little less clear. If the aim is to create one phase for each possible mojo binding in any given build, phase names become somewhat irrelevant (since the distribution of steps approaches one per phase), and we water them down with subtle variations and modifiers. Aggregation of build steps within some verbs is inevitable if we are to retain a simple set of build verbs; we should embrace this aggregation or else abandon the vocabulary approach altogether.
Maven does not provide any means of suppressing, replacing, or reordering the list of mojo bindings for a particular lifecycle phase. As a consequence, developers are forced into extensive research on the applicable lifecycle mapping given by their packaging, plus the POM inheritance hierarchy and any applicable profiles along the way, in order to determine what steps constitute the build process for their project.
Given the finite set of lifecycle phases available for a given build using Maven, along with the likelihood of multiple mojos binding to a single phase in complex build environments, developers must have an intuitive, flexible mechanism for expressing the steps required for their builds.
This solution involves adding two new sections to the <execution/> element of the plugin declaration in a POM, and one new element to the main plugin declaration section. In the execution section, we add a new section called something like <phaseOrdering>, and a new element called <disableExecution>. In the main plugin section, we add a new element called <disablePlugin>. The meaning of these new sections and elements will be described below.
Inside the execution section, the new <phaseOrdering> section handles the ordering of the new execution in relation to some existing mojo or execution. It should contain the groupId, artifactId, and optionally, version of the plugin that specifies the execution or mojo, along with the executionId, and optionally, mojo name, along with an ordering type. Using these elements, the lifecycle executor would locate the plugin, execution, and (optionally) the mojo to which this section refers, and apply the specified ordering operation for the new execution in relation to the target execution/mojo. For instance, if the type (operation) was "before", it would inject the new execution just in front of the target execution/mojo in the build process. If the type operation was "replace" it would remove the target execution/mojo, and insert the new execution in its place. Valid values would be insert-before, insert-after, and replace. If this section is not specified, existing phase-binding rules will be applied to preserve backward compatibility.
In addition to the new <phaseOrdering> section, the new <disableExecution> element makes it possible to respecify a plugin execution using the same executionId, and then disable it. Since plugin executions can be merged through inheritance and profile injection using the executionId as a merge key, simply specifying an empty execution with a particular executionId and <disableExecution>true</disableExecution> should be enough to disable an existing execution with that executionId brought in from a parent POM, a profile, or even the base lifecycle mapping brought in by the project's packaging. To disable a mojo supplied in a lifecycle mapping, use the default executionId of 'default'.
Inside the plugin declaration itself, the new <disablePlugin> element makes it possible to turn off all executions (including the default one) of that plugin in the lifecycle. This will include disabling a binding of that plugin given in the lifecycle mapping itself. To do this, simply specify the plugin using groupId, artifactId, and optionally, version, then set <disablePlugin>true</disablePlugin>. All instances of all mojos in the plugin will be removed from the build process by the lifecycle executor.
Finally, to aid in debugging lifecycle issues, a new mojo will be added = to the maven-help-plugin, called build-steps or similar. T= his mojo will output a complete list of mojo bindings (with executionId), t= heir phase attachments, and their ordering within each phase, in order to e= nable users to research insertion points for new mojo bindings and debug pr= oblems that may arise from the replacement of inherited mojo bindings.= =20 | http://docs.codehaus.org/exportword?pageId=71884 | 2014-12-18T09:33:34 | CC-MAIN-2014-52 | 1418802765722.114 | [] | docs.codehaus.org |
4.4. Do You Have Enough Disk Space?
Nearly every modern-day operating system (OS) uses disk partitions, and Fedora is no exception. When you install Fedora, you may have to work with disk partitions. If you have not worked with disk partitions before (or need a quick review of the basic concepts), refer to Appendix A, An Introduction to Disk Partitions, before proceeding.
The disk space used by Fedora must be separate from the disk space used by other operating systems you may have installed on your system.
Before you start the installation process, you must:
- have enough unpartitioned [1] disk space for the installation of Fedora, or
- have one or more partitions that may be deleted, thereby freeing up enough disk space to install Fedora.
To gain a better sense of how much space you really need, refer to the recommended partitioning sizes discussed in Section 9.14.5, “Recommended Partitioning Scheme”.
If you are not sure that you meet these conditions, or if you want to know how to create free disk space for your Fedora installation, refer to Appendix A, An Introduction to Disk Partitions.
[1] Unpartitioned disk space means that available disk space on the hard drives you are installing to has not been divided into sections for data. When you partition a disk, each partition behaves like a separate disk drive.
4.5. Selecting an Installation Method | http://docs.fedoraproject.org/en-US/Fedora/20/html/Installation_Guide/Disk_Space-x86.html | 2014-12-18T09:36:52 | CC-MAIN-2014-52 | 1418802765722.114 | [] | docs.fedoraproject.org |
The System.Data.CommandBehavior values are used by the IDbCommand.ExecuteReader method of System.Data.IDbCommand and any classes derived from it.
A bitwise combination of these values may be used.
System.Data.CommandBehavior is ignored when used to define a System.Data.Sql.SqlNotificationRequest or System.Data.SqlClient.SqlDependency and should therefore not be used. Use the constructor that does not require a CommandBehavior parameter in these two cases.
Use CommandBehavior.SequentialAccess to retrieve large values and binary data. Otherwise, an OutOfMemoryException might occur and the connection will be closed. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Data.CommandBehavior | 2017-12-11T07:35:27 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.go-mono.com |
Specifies the System.Diagnostics.DebuggerHiddenAttribute. This class cannot be inherited.
See Also: DebuggerHiddenAttribute Members
The common language runtime attaches no semantics to this attribute. It is provided for use by source code debuggers. For example, the Visual Studio debugger does not stop in a method marked with this attribute and does not allow a breakpoint to be set in the method. Other debugger attributes recognized by the Visual Studio debugger are the System.Diagnostics.DebuggerNonUserCodeAttribute and the System.Diagnostics.DebuggerStepThroughAttribute.
For more information about using attributes, see [<topic://cpconExtendingMetadataUsingAttributes>]. | http://docs.go-mono.com/monodoc.ashx?link=T%3ASystem.Diagnostics.DebuggerHiddenAttribute | 2017-12-11T07:34:49 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.go-mono.com |
This chapter walks through additional features that Arquillian provides to address more advanced use cases. These features may even allow you to write tests for scenarios you previously classified as too difficult to test.
Page: Test run modes Page: Descriptor deployment Page: ArquillianResource injection Page: Multiple Deployments Page: Multiple Containers Page: Protocol selection Page: Enabling assertions | https://docs.jboss.org/author/display/ARQ/Additional+features | 2017-12-11T07:19:18 | CC-MAIN-2017-51 | 1512948512584.10 | [] | docs.jboss.org |
Operating modes
WildFly 8 can be run either as a standalone server or as a managed domain; see the Domain Setup documentation for details.
The following is an example managed domain topology:
Each Host Controller by default reads its configuration from the host.xml file located in the WildFly 8 installation on its host's filesystem. The host.xml file contains configuration information that is specific to the particular host. Primarily:
- the listing of the names of the actual WildFly 8 instances that are meant to run off of this installation.
- configuration of how the Host Controller is to contact the Domain Controller to register itself and access the domain configuration. This may either be the configuration of how to find and contact a remote Domain Controller, or a configuration telling the Host Controller to itself act as the Domain Controller.
Address
All WildFly 8 management resources are organized in a tree; the path to a node in that tree is the resource's address.
WildFly management resources are conceptually quite similar to Open MBeans. They have the following primary differences:
- WildFly. WildFly | https://docs.jboss.org/author/display/WFLY8/Core+management+concepts | 2017-12-11T07:27:00 | CC-MAIN-2017-51 | 1512948512584.10 | [array(['/author/download/attachments/66322691/DC-HC-Server.png?version=1&modificationDate=1310046194000',
None], dtype=object) ] | docs.jboss.org |
:
// read external identity from the temporary cookie
var result = await HttpContext.AuthenticateAsync(IdentityServerConstants.ExternalCookieAuthenticationScheme);
if (result?.Succeeded != true)
{
    throw new Exception("External authentication error");
}

// retrieve claims of the external user
var externalUser = result.Principal;
if (externalUser == null)
{
    throw new Exception("External authentication error");
}
:
When writing Mule projects in XML, the HTTP connector can work in one of two ways, depending on how you create it:
As an HTTP Listener
As an HTTP Requester
To instantiate the connector as an HTTP Listener Connector, add the following XML tag at the start of a flow:
<http:listener
This element must reference a global configuration element of the following type:
<http:listener-config
To instantiate the connector as an HTTP Request Connector, add the following XML tag in any part of a flow:
<http:request
This element must reference a global configuration element of the following type:
<http:request-config
Debugging
Gaining visibility into HTTP inbound and outbound behavior can be achieved by enabling underlying library loggers with log4j2. This section assumes you’re comfortable adjusting log levels with log4j2. If you have not adjusted logging levels in the past, read configuring custom logging settings before continuing.
Logging Listener and Request Activity
By enabling the DEBUG level on org.mule.module.http.internal.HttpMessageLogger, activity coming from all HTTP Listener and Request components will be logged. This includes the HTTP Listener Connector’s inbound request, the HTTP Request Connector’s outbound request, and each connector’s response body. For example, the requester’s outbound request and its response are logged as follows.

DEBUG 2016-02-10 11:29:18,647 [[hello].http.requester.HTTP_Request_Configuration(1) SelectorRunner] org.mule.module.http.internal.HttpMessageLogger: REQUESTER
GET /v3/hello HTTP/1.1
Host: mocker-server.cloudhub.io:80
User-Agent: AHC/1.0
Connection: keep-alive
Accept: */*

Date: Wed, 10 Feb 2016 19:29:18 GMT
Server: nginx
Content-Length: 10940
Connection: keep-alive

{ "message" : "Hello, world" }
Logging Packet Metadata
At a lower level, it can be desirable to log the actual request and response packets transmitted over HTTP. This is achieved by enabling the
DEBUG level on
com.ning.http.client.providers.grizzly. This will log the metadata of the request packets from
AsyncHTTPClientFilter and the response packets from
AhcEventFilter. Unlike the
HttpMessageLogger, this will not log request or response bodies.
The log output of the request packet’s metadata is as follows.
DEBUG 2016-02-10 11:16:29,421 [[hello].http.requester.HTTP_Request_Configuration(1) SelectorRunner] com.ning.http.client.providers.grizzly.AsyncHttpClientFilter: REQUEST: HttpRequestPacket ( method=GET url=/v3/hello query=null protocol=HTTP/1.1 content-length=-1 headers=[ Host=mocker-server.cloudhub.io:80 User-Agent=AHC/1.0 Connection=keep-alive Accept=*/*] )
The log output of the response packet’s metadata is as follows.
DEBUG 2016-02-10 11:16:29,508 [[hello].http.requester.HTTP_Request_Configuration.worker(1)] com.ning.http.client.providers.grizzly.AhcEventFilter: RESPONSE: HttpResponsePacket ( status=200 reason= protocol=HTTP/1.1 content-length=10940 committed=false headers=[ content-type=application/json date=Wed, 10 Feb 2016 19:16:29 GMT server=nginx content-length=10940 connection=keep-alive] ). | https://docs.mulesoft.com/mule-user-guide/v/3.9/http-connector | 2018-02-17T21:46:41 | CC-MAIN-2018-09 | 1518891807825.38 | [array(['./_images/source-flow-new-blank.png',
'show source and process section of flow'], dtype=object)
array(['./_images/http-connector-drag-to-source.png', 'drag to source'],
dtype=object)
array(['./_images/http-connector-67263.png', 'http listener in source'],
dtype=object) ] | docs.mulesoft.com |
The classic Windows desktop client and the in-browser client store your files and app information in the cloud and you can access them only through the classic Windows desktop client and the in-browser client. If you want to access them through the extension for Visual Studio or the , you need to get the code from the cloud. For more information, see Collaborate Across the Offline and Cloud Tools.
When you work with the extension for Visual Studio, files and app information is stored on your local file system and you can access them locally at any time. Your project is also connected with the cloud and you can access your work from the in-browser client.
When you work with the command-line interface, files and app information is stored on your local file system and you can access them locally at any time. However, from the command-line interface you can quickly get any app created with the classic Windows desktop client or the in-browser client by downloading it locally from the cloud.
To back up your work locally, you can export your app from the classic Windows desktop client or the in-browser client. You can also export your app when your trial or subscription plan expires. You can later import the archive to recreate your app or continue development with another integrated development environment. You can also add the exported files to another app in AppBuilder.
Version control history, plugins and app settings such as Apache Cordova version, active build configuration and application identifier are not exported.
Prerequisites
- Verify that your preferred AppBuilder client is running and you are logged in.
- Verify that the app that you want to export contains code for a mobile app.
Procedure
In-Browser
- In the Telerik Platform home page, navigate to the the Apps tab.
- Click the cogwheel icon for the app that you want to export.
- Click Download Code.
- Save your app to disk.
Depending on the download settings of your browser, a
ZIParchive of your app is downloaded to your default download location or you are prompted to store the
ZIPpackage. By default, the
ZIParchive is named after your app.
- If your trial or subscription plan has expired, use the classic Windows desktop client to export your app.
Windows
- From the Dashboard, click My Apps.
- Click the export button for your app.
- Save your app to disk.
The classic Windows desktop client prompts you to save the ZIP archive of your app. You can rename the file and select a storage location. By default, the ZIP archive is named after your app. Click Save to download the archive.
Visual Studio
App export is applicable to the classic Windows desktop client and the in-browser client and is not applicable to the extension for Visual Studio.
CLI
- In the command prompt, navigate to the directory which will host your exported app.
The directory must not contain an .abproject file.
To list your apps in the cloud, run the following command.
appbuilder cloud
To export a selected app, run the following command.
appbuilder cloud export <App Name or Index>
<App Name or Index> is the name of the app as listed in the previous step.
This operation creates a new directory named after the app which contains all your code and assets.
Next Steps
If you have exported your app with the in-browser client or the classic Windows desktop client, recreate your app from archive. | https://docs.telerik.com/platform/appbuilder/cordova/export-app | 2018-02-17T21:12:30 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.telerik.com |
(PECL imagick 2.0.0)
Imagick::combineImages — Combines one or more images into a single image
Combines one or more images into a single image. The grayscale value of the pixels of each image in the sequence is assigned in order to the specified channels of the combined image. The typical ordering would be image 1 => Red, 2 => Green, 3 => Blue, etc.
channelType. | http://docs.php.net/manual/de/imagick.combineimages.php | 2018-02-17T21:29:26 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.php.net |
Embed video content in help topics You can embed a link to video content in a custom embedded help topic. YouTube and Vimeo video content is supported. Before you beginRole required: embedded_help_admin or admin About this task You cannot use the insert/edit video icon in the HTML editor to embed the video. You must enter the source code. Figure 1. HTML editor icons The administrator can disable the ability for users to see embedded video in the Embedded help system properties. Procedure Complete the following steps to obtain the embed code for the video. Navigate to the location of the video on YouTube or Vimeo. Open the Share option for the video. In YouTube, the text appears below the video. In Vimeo, click the share icon beside the video. Under Embed, select the code and paste it into a text editor. Modify the code to enclose it in a <div> and remove elements that do not get rendered in the embedded help content page. For example, here is the embed code for a YouTube video. The modified code also contains a title for the embedded video enclosed in a <p> tag. <iframe width="560" height="315" src="" frameborder="0" allowfullscreen></iframe> After modification, the code looks like this. <div class="video"><iframe src="" allowfullscreen=""></iframe> <p class="p">Video: Work at Now</p> </div> Vimeo embed code has more attributes. Strip them out so it looks like the following example. Notice the difference between YouTube and Vimeo for the allowfullscreeen attribute. <div class="video"><iframe src="" frameborder="0" allowfullscreen></iframe> <p class="p">Video: Work at Now</p> </div> Note: As shown in the examples, enclose the <iframe> and the <p> tags within a <div class="video"> tag. If the administrator disables the property to display video content, all content within the <div class="video"> tag are hidden. Navigate to Embedded Help > Help Content, and then open the custom help topic to embed a video. Click the source code icon (<>). Position your cursor at the end of the line above the location of the video. Press the Enter key to move to a blank line enter the embed code you modified. Click OK, and then click Update. To test that the video appears, open the page that displays the content you just updated, and then open the help panel. Related TasksAdd custom embedded help from a copyAdd custom embedded help contentRelated ConceptsEmbedded help planningRelated ReferenceEmbedded help system properties | https://docs.servicenow.com/bundle/kingston-platform-user-interface/page/build/help-guided-tours/task/embed-video-help-content.html | 2018-02-17T21:50:28 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.servicenow.com |
MID Server version selection You can pin all the MID Servers in your environment to a specific version by setting a system property, or you can configure specific versions for individual MID Servers. Note: ServiceNow does not recommend pinning the MID Server to a specific version for a significant amount of time, especially if you upgrade the instance. Under most circumstances, you should let the instance determine which MID Server version to use. Version control properties These properties control the version for all MID Servers: mid.buildstamp:: property using sys_properties.list.. This setting overrides the property setting for the pinned MID Server version. For instructions, see Add a MID Server parameter.Note: The value set in this parameter is not affected by an upgrade. | https://docs.servicenow.com/bundle/kingston-servicenow-platform/page/product/mid-server/reference/mid-server-version-selection.html | 2018-02-17T21:50:18 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.servicenow.com |
Caching¶
Ganeti Web Manager caches objects for performance reasons.
Why are things cached?¶
Ganeti is a bottleneck when accessing data. In tests, over 97% of time taken to render a normal page in Ganeti Web Manager is spent waiting for Ganeti to respond to queries. Thus, Ganeti Web Manager caches some of Ganeti’s data.
Manual Updates¶
Sometimes it is necessary to refresh objects manually. To do this, navigate to the detail page for the cluster of the object that needs to be refreshed, and click the “Refresh” button. This will refresh the cluster and all of its objects.
Cached Cluster Objects¶
Some database-bound objects cache Ganeti data automatically. The functionality
for this caching is encapsulated in the
CachedClusterObject class. Any
models which inherit from this class will gain this functionality.
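For illustration only (the import path and model below are assumptions, not taken from these docs), a model opts into this behaviour simply by inheriting from the class:

# Hypothetical sketch: import path and model name are illustrative.
from ganeti_web.caching import CachedClusterObject

class ExampleClusterModel(CachedClusterObject):
    """Gains automatic caching of its Ganeti data by inheritance."""
    pass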
Bypassing the Cache¶
The cache cannot currently be bypassed reasonably.
CachedClusterObject uses __init__() to do part of its work. An unreasonable, albeit working, technique is to abuse the ORM:

# Fetch the stored field values as a plain dict (avoids triggering
# CachedClusterObject.__init__ refresh logic on a real instance), then
# copy them onto a bare model object.
values = VirtualMachine.objects.values().get(id=id)
vm = VirtualMachine()
for k, v in values.items():
    setattr(vm, k, v)
Printing the Help Documentation
Concepts
Creating a Data Collection Package | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-7/cc749514(v=ws.10) | 2018-02-17T21:53:43 | CC-MAIN-2018-09 | 1518891807825.38 | [] | docs.microsoft.com |
Date: Tue, 5 Apr 2011 11:04:24 -0500
From: Dan Nelson <[email protected]>
To: Michael Grünewald <[email protected]>
Cc: FreeBSD questions <[email protected]>
Subject: Re: Place to install library of shell functions
Message-ID: <[email protected]>
In-Reply-To: <[email protected]>
References: <[email protected]>
In the last episode (Apr 05), Michael Grünewald said:
> today I come to you with what seems to be somehow pedantic question: where
> is the best place to install libraries of shell functions.
>
> I read hier(4) carefully and it seems the correct place for this would
> be somewhere under `/usr/local/share':
>
>     share/    architecture-independent files
>
> Several of the ports install shell scripts under `/usr/local/lib' that
> hier(4) devotes to ``shared and archive ar(1)-type libraries''. These
> shell scripts are:

The zsh port installs all its helper scripts and functions into
/usr/local/share/zsh, and the portupgrade and portmaster ports insert their
zsh autocompletion functions into /usr/local/share/zsh/site-functions. Seems
to work great.

Ports that install scripts into /usr/local/lib probably either don't have
separate script and lib install paths in their makefile (tcl probably), or
their scripts aren't meant to be called directly (firefox).

--
Dan Nelson
[email protected]
Want to link to this message? Use this URL: <> | https://docs.freebsd.org/cgi/getmsg.cgi?fetch=420150+0+archive/2011/freebsd-questions/20110410.freebsd-questions | 2021-09-16T20:01:03 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.freebsd.org |
TFVC Source Code Control in Visual Studio Code
In this post I will be walking you through the Team Foundation Version Control (TFVC) support within Visual Studio Code. I will assume that you are running on a windows machine and thus I will be using the TF executable that comes with Visual Studio 2017 (there is also a free, standalone "Visual Studio Team Explorer 2017" version that contains TF.exe). If you are running on a Mac OS or Linux you can use the Team Explorer Everywhere Command Line Client (TECLC). Regardless of which OS you are running on you should expect the same experience.
First, ensure that you have the Visual Studio Team Services (VSTS) extension installed:
In order to setup TFVC support we need to add one setting in the VS Code settings called "tfvc.location" and set it to the full path of the tf executable which is installed in Visual Studio. Here is the path setup on my machine:
"tfvc.location": "C:\\Program Files (x86)\\Microsoft Visual Studio\\2017\\Enterprise\\Common7\\IDE\\CommonExtensions\\Microsoft\\TeamFoundation\\Team Explorer\\TF.exe"
With TFVC, the extension uses information about the current workspace to determine how to connect to Team Services. Workspaces can be created using the Visual Studio IDE, Eclipse or with the JetBrains IDEs. Note: At the time of writing this post, you will need to have a local TFVC workspace already available on your local machine. Support for Server Workspaces will be added in future updates. More information about the difference between the two types of workspaces can be found here.
I will start by creating an Angular application hosted within an Asp.Net Core 2 application using the newly introduced Angular template with Asp.Net Core 2 SDK under VS 2017.
Now that the application is created and checked into your VSTS/TFS using VS 2017 we are ready to start working with the application from VS Code. The first time you load the project with VS Code you will prompted to run the signin command as you are not connected to Team Services yet:
You can run the signin command by either pressing F1 and typing in the signin command or clicking on the Team button on the lower left hand side as shown below:
Here you will be prompted with two options to sign in. I highly recommend using the new authentication experience which will automatically generate an access token on your behalf under your account which covers all scopes and is valid for one year. More information about the new authentication mechanism can be found here.
If you are signed in successfully, VS Code should display the following icons in the lower left corner:
Now if you select the source control tab under VS Code you will notice that the source code control system is TFVC and the included/excluded changes are shown as well. At this point you are ready to check in your changes into your TFVC repository.
Congratulations as you have now setup VS Code to check in your code into a TFVC source code control system. If you are interested in some of the other features that are supported by the extension you can follow this link. | https://docs.microsoft.com/en-us/archive/blogs/wael-kdouh/tfvc-support-with-visual-studio-code | 2021-09-16T19:50:37 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['https://msdnshared.blob.core.windows.net/media/2017/09/113.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/114.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/115.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/117.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/118.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/119.png',
None], dtype=object)
array(['https://msdnshared.blob.core.windows.net/media/2017/09/120.png',
None], dtype=object) ] | docs.microsoft.com |
vRealize Automation Cloud Assembly supports integration with Bitbucket for use as a Git-based repository for ABX action scripts and VMware cloud templates.
In vRealize Automation Cloud Assembly, you can work with two types of repository items using Bitbucket integration: VMware cloud templates or ABX action scripts. You must synch projects that you want to work with before using a Bitbucket integration. ABX actions support write back to the Bitbucket repository, but you cannot write back cloud templates from the integration. If you want to create new versions of cloud template files, you must do so manually.
Prerequisites
- Set up an on premises Bitbucket Server deployment with one or more ABX or cloud template-based projects that you want to use with your deployments. Bitbucket Cloud is currently not supported.
- Create or designate vRealize Automation Cloud Assembly project to associate your Bitbucket integration.
- Cloud template files to be synched to a Bitbucket integration must be named blueprint.yaml.
Procedure
- Select Add Integration. and click
- Select Bitbucket.
- Enter the Summary information and Bitbucket credentials on the Bitbucket new integration Summary page.
- To check the integration, click Validate.
- If you use add tags to support a tagging strategy, enter capability tags. See How do I use tags to manage vRealize Automation Cloud Assembly resources and deployments and Creating a tagging strategy.
- Click Add.
- Select the Projects tab on the main page for the Bitbucket integration to associate a project with this Bitbucket integration.
- Select the Project to associate with this Bitbucket integration.
- Click Next to add a repository to the Bitbucket project, indicate the type of repository you are adding, and then specify the Repository name and Branch, as well as the Folder.
- Click Add.If you want to add one or more repositories to a project, click Add Repository.
Results
Bitbucket integration is configured with the specified repository configuration, and you can view and work with ABX actions and cloud templates contained in configured repositories. When you add a project to a Bitbucket integration, a synch operation runs to pull the latest versions of ABX action scripts and cloud template files from the designated repository. The History tab on the Bitbucket integration page shows records of all synch operations for the integration. By default, files are automatically synched every 15 minutes,but you can manually synch a file by selecting it and clicking SYNCH at any time.
What to do next
You can work with ABX actions on the vRealize Automation Cloud Assembly Extensibility page, and you can work with cloud templates on the Design page. If you save a changed version of an ABX action on the Extensibility area of vRealize Automation Cloud Assembly, the new version of the script is created and written back to the repository. | https://docs.vmware.com/en/vRealize-Automation/8.2/Using-and-Managing-Cloud-Assembly/GUID-F5949937-7844-4CAB-A3E2-FCD2849A8823.html | 2021-09-16T19:59:54 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.vmware.com |
To use Auto Complete, drag and drop it onto the canvas from the toolbox.
Auto Complete tool icon at Toolbox
How it looks like on the Canvas
Properties
Name: The name of the Auto Complete tool is written here. To edit, go to the Property Panel which is located on the right. This area is saved to the database.
Info text: Displays a hint on the client side about how to fill in the element. To add an info text to the Auto Complete tool, write your text in the Info Text area. This text is not saved to the database; it only appears visually.
Field Style: Select whether the information entered is a constant or SQL query. You can add your data as a Constant Value or SQL Query.
Default Value: The value entered here is loaded into the Auto Complete tool by default and pre-fills the field on the client screen.
Table Length: Enter the length of the information to be saved in SQL.
Text Size: Sets the size of the tool.
Minimum Characters: The minimum character value of the information to be written in the tool is entered.
Icon: To add an icon to the Auto Complete tool, you can choose from the Icon area. You can change the color and location of the icon.
Text Fonts: Type of text; bold, italic, underlined, replacing text; right-aligned, left-aligned, or centered.
Color: Edit the color of the text inside and/or inside the tool. To edit the text color and background color of the text, you can change from Color area.
Height/Width: It arranges the tool's height and width.
From Left/Top: It arranges the distance of the Auto Complete tool from left and top.
Format: The format of the information to be entered in the Auto Complete tool is selected.
Linked Object: A connection is made to an object element created by the action. When the object element runs, information is filled by the object field.
Linked Object Area: Select from fields within the linked object field.
Values (Text*Value): The values of the tool with the field style fixed value selected are entered.
SQL Query: Field style SQL query is written to the selected tool query.
Tab Order: Specifies the order of the Auto Complete tool within a form. When the user moves between fields with the <TAB> key, the cursor follows this order.
Fit horizontal: Fits the tool to the full screen on the user’s screen.
Pin Right: Pin the Auto Complete tool to the right.
Display: Makes the Auto Complete tool not appear on the screen.
Form Only: Does not save the information entered in the Auto Complete tool to SQL; the value is only used on the form.
When you start writing on the Auto Complete tool, it completes the text automatically.
Client view on screen
If the Field Style is set to SQL Query, you can write a database query in the SQL Query area.
Client view on screen | https://docs.xpoda.com/hc/en-us/articles/360011672399-Auto-Complete | 2021-09-16T18:26:01 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['/hc/article_attachments/360011614200/a1.png', 'a1.png'],
dtype=object)
array(['/hc/article_attachments/360011614180/a2.png', 'a2.png'],
dtype=object)
array(['/hc/article_attachments/360014707900/mceclip0.png',
'mceclip0.png'], dtype=object)
array(['/hc/article_attachments/360014688699/mceclip1.png',
'mceclip1.png'], dtype=object)
array(['/hc/article_attachments/360014707920/mceclip2.png',
'mceclip2.png'], dtype=object)
array(['/hc/article_attachments/360014688719/mceclip3.png',
'mceclip3.png'], dtype=object) ] | docs.xpoda.com |
Sculpting Tools¶
For Grease Pencil sculpt modes, each brush type is exposed as a tool; the brush can be changed in the Tool Settings. See Brush for more information.
- Smooth
- Annotate
  Draw free-hand annotations.
- Annotate Line
  Draw straight line annotations.
- Annotate Polygon
  Draw polygon annotations.
- Annotate Eraser
  Erase previously drawn annotations.
'../../../_images/grease-pencil_modes_sculpting_tools_brushes.png'],
dtype=object) ] | docs.blender.org |
Build Python apps
Azure Pipelines
You can use Azure Pipelines to build, test, and deploy Python apps and scripts as part of your CI/CD system. This article focuses on creating a simple pipeline.
If you want an end-to-end walkthrough, see Use CI/CD to deploy a Python web app to Azure App Service on Linux.
To create and activate an Anaconda environment and install Anaconda packages with
conda, see Run pipelines with Anaconda environments.
Create your first pipeline
Are you new to Azure Pipelines? If so, then we recommend you try this section before moving on to other sections.
Get the code
Import this repo into your Git repo in Azure DevOps Server 2019:
Import this repo into your Git repo: & install.
When the Configure tab appears, select Python package. This will create a Python package to test on multiple Python versions. Python package.
YAML
- Add an
azure-pipelines.ymlfile in your repository. Customize this snippet for your build.
trigger: - master pool: Default steps: - script: python -m pip install --upgrade pip displayName: 'Install dependencies' - script: pip install -r requirements.txt displayName: 'Install requirements'.
Build environment
You don't have to set up anything for Azure Pipelines to build Python projects. Python is preinstalled on Microsoft-hosted build agents for Linux, macOS, or Windows. To see which Python versions are preinstalled, see Use a Microsoft-hosted agent.
Use a specific Python version
To use a specific version of Python in your pipeline, add the Use Python Version task to azure-pipelines.yml. This snippet sets the pipeline to use Python 3.6:
steps: - task: UsePythonVersion@0 inputs: versionSpec: '3.6'
Use multiple Python versions
To run a pipeline with multiple Python versions, for example to test a package against those versions, define a
job with a
matrix of Python versions. Then set the
UsePythonVersion task to reference the
matrix variable.
jobs: - job: 'Test' pool: vmImage: 'ubuntu-latest' # other options: 'macOS-latest', 'windows-latest' strategy: matrix: Python27: python.version: '2.7' Python35: python.version: '3.5' Python36: python.version: '3.6' steps: - task: UsePythonVersion@0 inputs: versionSpec: '$(python.version)'
You can add tasks to run using each Python version in the matrix.
Run Python scripts
To run Python scripts in your repository, use a
script element and specify a filename. For example:
- script: python src/example.py
You can also run inline Python scripts with the Python Script task:
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      print('Hello world 1')
      print('Hello world 2')
To parameterize script execution, use the
PythonScript task with
arguments values to pass arguments into the executing process. You can use
sys.argv or the more sophisticated
argparse library to parse the arguments.
- task: PythonScript@0
  inputs:
    scriptSource: inline
    script: |
      import sys
      print ('Executing script file is:', str(sys.argv[0]))
      print ('The arguments are:', str(sys.argv))
      import argparse
      parser = argparse.ArgumentParser()
      parser.add_argument("--world", help="Provide the name of the world to greet.")
      args = parser.parse_args()
      print ('Hello ', args.world)
    arguments: --world Venus
Install dependencies
You can use scripts to install specific PyPI packages with
pip. For example, this YAML installs or upgrades
pip and the
setuptools and
wheel packages.
- script: python -m pip install --upgrade pip setuptools wheel
  displayName: 'Install tools'
Install requirements
After you update
pip and friends, a typical next step is to install dependencies from requirements.txt:
- script: pip install -r requirements.txt
  displayName: 'Install requirements'
Run tests
You can use scripts to install and run various tests in your pipeline.
Run lint tests with flake8
To install or upgrade
flake8 and use it to run lint tests, use this YAML:
- script: |
    python -m pip install flake8
    flake8 .
  displayName: 'Run lint tests'
Test with pytest and collect coverage metrics with pytest-cov
Use this YAML to install
pytest and
pytest-cov, run tests, output test results in JUnit format, and output code coverage results in Cobertura XML format:
- script: |
    pip install pytest pytest-azurepipelines
    pip install pytest-cov
    pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml
  displayName: 'pytest'
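If your repository does not yet contain any tests, a minimal module that this step would discover could look like the following. The file name, function, and assertion are illustrative only and are not part of any sample repository:

# test_sample.py -- a minimal, hypothetical test module discovered by pytest
def add(x, y):
    return x + y


def test_add():
    # pytest collects any function whose name starts with "test_"
    assert add(2, 3) == 5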
Run tests with Tox
Azure Pipelines can run parallel Tox test jobs to split up the work. On a development computer, you have to run your test environments in series. This sample uses
tox -e py to run whichever version of Python is active for the current job.
- job:
  pool:
    vmImage: 'ubuntu-latest'
  strategy:
    matrix:
      Python27:
        python.version: '2.7'
      Python35:
        python.version: '3.5'
      Python36:
        python.version: '3.6'
      Python37:
        python.version: '3.7'
  steps:
  - task: UsePythonVersion@0
    displayName: 'Use Python $(python.version)'
    inputs:
      versionSpec: '$(python.version)'
  - script: pip install tox
    displayName: 'Install Tox'
  - script: tox -e py
    displayName: 'Run Tox'
Publish test results
Add the Publish Test Results task to publish JUnit or xUnit test results to the server:
- task: PublishTestResults@2
  condition: succeededOrFailed()
  inputs:
    testResultsFiles: '**/test-*.xml'
    testRunTitle: 'Publish test results for Python $(python.version)'
Publish code coverage results
Add the Publish Code Coverage Results task to publish code coverage results to the server. You can see coverage metrics in the build summary, and download HTML reports for further analysis.
- task: PublishCodeCoverageResults@1
  inputs:
    codeCoverageTool: Cobertura
    summaryFileLocation: '$(System.DefaultWorkingDirectory)/**/coverage.xml'
Package and deliver code
To authenticate with
twine, use the Twine Authenticate task to store authentication credentials in the
PYPIRC_PATH environment variable.
- task: TwineAuthenticate@0
  inputs:
    artifactFeed: '<Azure Artifacts feed name>'
    pythonUploadServiceConnection: '<twine service connection from external organization>'
Then, add a custom script that uses
twine to publish your packages.
- script: |
    twine upload -r "<feed or service connection name>" --config-file $(PYPIRC_PATH) <package path/files>
You can also use Azure Pipelines to build an image for your Python app and push it to a container registry.
Related extensions
- PyLint Checker (Darren Fuller)
- Python Test (Darren Fuller)
- Azure DevOps plugin for PyCharm (IntelliJ) (Microsoft)
- Python in Visual Studio Code (Microsoft) | https://docs.microsoft.com/en-us/azure/devops/pipelines/ecosystems/python?view=azure-devops&viewFallbackFrom=vsts | 2021-09-16T20:17:13 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.microsoft.com |
Click Test
If this option is turned on, then the Rectangular Mode fence will be used for the visualization of screen area where the 'Click Test' will be performed. This provides a method for verifying and identifying those areas on the sceen which are valid for the selection of objects. For:
| http://docs.imsidesign.com/projects/Turbocad-2018-User-Guide-Publication/TurboCAD-2018-User-Guide-Publication/Selecting-and-Transforming-Objects/Selecting-Objects/2D-3D-Selector/Click-Test/ | 2021-09-16T18:16:52 | CC-MAIN-2021-39 | 1631780053717.37 | [array(['../../Storage/turbocad-2018-user-guide-publication/click-test-1-2018-04-17.png',
'img'], dtype=object)
array(['../../Storage/turbocad-2018-user-guide-publication/click-test-img0001.png',
'img'], dtype=object) ] | docs.imsidesign.com |
The following information details existing device issues that have been discovered within other releases. In most cases, a resolution is included to address the issue.
Foundry EdgeIron Switch
No volatile or non-volatile storage information is available using SNMP, for this device class.
Configuration pushes are not supported, and can crash the device used for development. The following behavior persists through firmware upgrades:
When pushing commands using the command line interface, the device locks after a number of Invalid Syntax messages, due to a lack of exit commands (VLAN, Interface, and similar commands) for ASCII configurations.
For TFTP, transfer is reported as successful, but the device fails to respond to any communication protocols.
Comment characters, such as !, in the configuration cause warnings when issued using the command line interface.
Certain configuration commands do not merge, and issue Failed to set warnings. | https://docs.vmware.com/en/VMware-Smart-Assurance/10.1.4/ncm-dsr-support-matrix-1014/GUID-9A3FF343-E745-4958-A4DB-890ED2F5531D.html | 2021-09-16T19:04:40 | CC-MAIN-2021-39 | 1631780053717.37 | [] | docs.vmware.com |
Guidelines for App Help
Applications can be complex, and providing effective help for your users can greatly improve their experience. Not all applications need to provide help for their users, and what sort of help should be provided can vary greatly, depending on the application.
If you decide to provide help, follow these guidelines when creating it. Help that isn't helpful can be worse than no help at all.
Intuitive Design
As useful as help content can be, your app cannot rely on it to provide a good experience for the user. If the user is unable to immediately discover and use the critical functions of your app, the user will not use your app. No amount or quality help will change that first impression.
An intuitive and user-friendly design is the first step to writing useful help. Not only does it keep the user engaged for long enough for them to use more advanced features, but it also provides them with knowledge of an app's core functions, which they can build upon as they continue to use the app and learn.
General instructions
A user will not look for help content unless they already have a problem, so help needs to provide a quick and effective answer to that problem. If help is not immediately useful, or if help is too complicated, then users are more likely to ignore it.
All help, no matter what kind, should follow these principles:
Easy to understand: Help that confuses the user is worse than no help at all.
Straightforward: Users looking for help want clear answers presented directly to them.
Relevant: Users do not want to have to search for their specific issue. They want the most relevant help presented straight to them (this is called "Contextual Help"), or they want an easily navigated interface.
Direct: When a user looks for help, they want to see help. If your app includes pages for reporting bugs, giving feedback, viewing term of service, or similar functions, it is fine if your help links to those pages. But they should be included as an afterthought on the main help page, and not as items of equal or greater importance.
Consistent: No matter the type, help is still a part of your app, and should be treated as any other part of the UI. The same design principles of usability, accessibility, and style which are used throughout the rest of your app should also be present in the help you offer.
Types of help
There are three primary categories of help content, each with varying strengths and suitable for different purposes. Use any combination of them in your app, depending on your needs.
Instructional UI
Normally, users should be able to use all the core functions of your app without instruction. But sometimes, your app will depend on use of a specific gesture, or there may be secondary features of your app which are not immediately obvious. In this case, instructional UI should be used to educate users with instructions on how to perform specific tasks.
See guidelines for instructional UI
In-app help
The standard method of presenting help is to display it within the application at the user's request. There are several ways in which this can be implemented, such as in help pages or informative descriptions. This method is ideal for general-purpose help, that directly answers a user's questions without complexity.
See guidelines for in-app help
External help
For detailed tutorials, advanced functions, or libraries of help topics too large to fit within your application, links to external web pages are ideal. These links should be used sparingly if possible, as they remove the user from the application experience.
See guidelines for external help | https://docs.microsoft.com/en-us/windows/uwp/in-app-help/guidelines-for-app-help | 2017-03-23T05:25:01 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.microsoft.com |
MovieLens 20m Rating prediction using Factorization Machine
Table of Contents
Data preparation
Download ml-20m.zip and unzip it.
Then, create a database and import the raw ratings data into Treasure Data from the downloaded CSV.
--time-value is used to add a dummy time column (this is because Treasure Data requires each row to have a timestamp).
$ td db:create movielens20m $ td table:create movielens20m ratings $ td import:auto --format csv --column-header --time-value `date +%s` --auto-create movielens20m.ratings ./ratings.csv
The first step is to split the original data for training and testing.
$ td table:create movielens20m ratings_fm $ td query -w --type hive -d movielens20m " INSERT OVERWRITE TABLE ratings_fm select rowid() as rowid, categorical_features(array('userid','movieid'), userid, movieid) as features, rating, rand(31) as rnd from ratings; "
Now, split the data 80% for training and 20% for testing.
$ td table:create movielens20m training_fm $ td query -x --type hive -d movielens20m " INSERT OVERWRITE TABLE training_fm SELECT rowid, features, rating, rnd FROM ratings_fm ORDER BY rnd DESC LIMIT 16000000 "
Caution: If you get “java.lang.RuntimeException: INSERT into ‘v’ field is not supported”, disable the V-column of the table through the Web console, or avoid using ‘*’ in the query.
$ td table:create movielens20m testing_fm $ td query -w --type hive -d movielens20m " INSERT OVERWRITE TABLE testing_fm SELECT rowid, features, rating, rnd FROM ratings_fm ORDER BY rnd ASC LIMIT 4000263 " $ td table:create movielens20m testing_fm_exploded $ td query -x --type hive -d movielens20m " INSERT OVERWRITE TABLE testing_fm_exploded
$ td table:create movielens20m fm_model
$ td query -w -x --type hive -d movielens20m "
  INSERT OVERWRITE TABLE fm_model
  select
    feature,
    avg(Wi) as Wi,
    array_avg(Vif) as Vif
  from (
    select
      train_fm(features, rating, '-factor 10 -iters 50 -min 1 -max 5') as (feature, Wi, Vif)
    from
      training_fm
  ) t
  group by feature;
"
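For background, train_fm fits a standard second-order factorization machine. Hivemall's exact parameterization may differ slightly (for example in how the global bias is handled), but the usual form of the model is:

$$
\hat{y}(\mathbf{x}) = w_0 + \sum_{i=1}^{n} w_i x_i + \sum_{i=1}^{n} \sum_{j=i+1}^{n} \langle \mathbf{v}_i, \mathbf{v}_j \rangle \, x_i x_j
$$

Here each feature i has a scalar weight w_i (the Wi column stored in fm_model) and a k-dimensional latent vector v_i (the Vif array), where k is the value passed to -factor.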
Training options
You can get information about the available training hyperparameters by using the -help option as follows:
$ td query -w --type hive -d movielens20m "
$ td table:create movielens20m fm_predict $ td query -w -x --type hive -d movielens20m " INSERT OVERWRITE TABLE fm_predict
$ td query -w --type hive -d movielens20m " select mae(p.predicted, rating) as mae, rmse(p.predicted, rating) as rmse from testing_fm as t JOIN fm_predict as p on (t.rowid = p.rowid); "
Fast Training using Feature Hashing
Training of Factorization Machines (FM) can be done more efficiently, in terms of speed and memory consumption, by using INT features. In this section, we show how to run FM training by using int features, more specifically by using feature hashing.
Training using Feature Hashing
Caution: Hivemall uses a dense array internally when both
-int_feature and
-num_features are specified, otherwise uses a sparse map. Dense array is more CPU efficient than sparse map. The default number of features created by feature_hashing is
2^24 = 16777216 and thus
-num_features 16777216 is the case for using
feature_hashing.
$ td query -w -x --type hive -d movielens20m " INSERT OVERWRITE TABLE fm_model select feature, avg(Wi) as Wi, array_avg(Vif) as Vif from ( select train_fm(feature_hashing(features), rating, "-factor ${factor} -iters ${iters} -eta 0.01 -int_feature -num_features 16777216" ) as (feature, Wi, Vif) from training_fm ) t group by feature; "
Prediction and Evaluation using Feature Hashing
Caution: DO NOT forget to apply feature_hashing to the test data.
$ td query -w --type hive -d movielens20m "); "
Last modified: Jul 01 2016 13:59:28 UTC
If this article is incorrect or outdated, or omits critical information, please let us know. For all other issues, please see our support channels. | https://docs.treasuredata.com/articles/hivemall-movielens20m-fm | 2017-03-23T04:19:32 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.treasuredata.com |
ViewPager
last updated: 2016-12
ViewPager is a layout manager that lets you implement gestural navigation. Gestural navigation allows the user to swipe left and right to step through pages of data. This guide explains how to implement gestural navigation with ViewPager, with and without Fragments. It also describes how to add page indicators using PagerTitleStrip and PagerTabStrip.
Overview
A common scenario in app development is the need to provide users with
gestural navigation between sibling views. In this approach, the user
swipes left or right to access pages of content (for example, in a
setup wizard or a slide show). You can create these swipe views by
using the
ViewPager widget, available in
Android Support Library v4.
The
ViewPager is a layout widget made up of multiple child views where
each child view constitutes a page in the layout:
Typically,
ViewPager is used in conjunction with
Fragments;
however, there are some situations where you might want to use
ViewPager without the added complexity of
Fragments.
ViewPager uses an adapter pattern to provide it with the views to
display. The adapter used here is conceptually similiar to that used by
RecyclerView – you
supply an implementation of
PagerAdapter to generate the pages that
the
ViewPager displays to the user. The pages displayed by
ViewPager can be
Views or
Fragments. When
Views are
displayed, the adapter subclasses Android's
PagerAdapter base
class. If
Fragments are displayed, the adapter subclasses Android's
FragmentPagerAdapter. The Android support library also includes
FragmentPagerAdapter (a subclass of
PagerAdapter) to help with the
details of connecting
Fragments to data.
This guide demonstrates both approaches:
In Part 1, a TreePager app is developed to demonstrate how to use
ViewPagerto display views of a tree catalog (an image gallery of deciduous and evergreen trees).
PagerTabStripand
PagerTitleStripare used to display titles that help with page navigation.
In Part 2, a slightly more complex FlashCardPager app is developed to demonstrate how to use
ViewPagerwith
Fragments to build an app that presents math problems as flash cards and responds to user input.
Requirements
To use
ViewPager in your app project, you must install the
Android Support Library v4
package. For more information about installing NuGet packages, see
Walkthrough: Including a NuGet in your project.
Architecture
Three components are used for implementing gestural navigation
with
ViewPager:
- ViewPager
- Adapter
- Pager Indicator
Each of these components is summarized below.
ViewPager
ViewPager is a layout manager that displays a collection of
Views one
at a time. Its job is to detect the user's swipe gesture and navigate
to the next or previous view as appropriate. For example, the
screenshot below demonstrates a
ViewPager making the transition from
one image to the next in response to a user gesture:
Adapter
ViewPager pulls its data from an adapter. The adapter's job is to
create the
Views displayed by the
ViewPager, providing them as
needed. The diagram below illustrates this concept – the adapter
creates and populates
Views and provides them to the
ViewPager. As
the
ViewPager detects the user's swipe gestures, it asks the adapter
to provide the appropriate
View to display:
In this particular example, each
View is constructed from a tree
image and a tree name before it is passed to the
ViewPager.
Pager Indicator
ViewPager may be used to display a large data set (for example,
an image gallery may contain hundreds of images). To help the user
navigate large data sets,
ViewPager is often accompanied by a pager
indicator that displays a string. The string might be the image
title, a caption, or simply the current view's position within the data
set.
There are two views that can produce this navigation information for
you:
PagerTabStrip and
PagerTitleStrip. Each displays a string
at the top of a
ViewPager, and each pulls its data from the
ViewPager's adapter so that it always stays in sync with the
currently-displayed
View. The difference between them is that
PagerTabStrip includes a visual indicator for the "current" string
while
PagerTitleStrip does not (as shown in these screenshots):
This guide demonstrates how to immplement
ViewPager, adapter, and
indicator app components and integrate them to support gestural
navigation.. | https://docs.mono-android.net/guides/android/user_interface/viewpager/ | 2017-03-23T04:16:48 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.mono-android.net |
Queries¶
Since the Cypher plugin is not a plugin anymore, neo4j-rest-client is able to run queries and return the results properly formatted:

>>> result = gdb.query(q=q)
Returned types¶
This way to run a query will return the results as RAW, i.e., in the same way
the REST interface get them. However, you can always use a
returns parameter
in order to perform custom castings:
>>> from neo4jrestclient import client
>>> results = gdb.query(q, returns=(client.Node, unicode, client.Relationship))
>>> results[0] [<Neo4j Node:>, u'John Doe', <Neo4j Relationship:>]
Or pass a custom function:
>>> is_john_doe = lambda x: x == "John Doe"
>>> results = gdb.query(q, returns=(client.Node, is_john_doe, client.Relationship)) >>> results[0] [<Neo4j Node:>, True, <Neo4j Relationship:>]
If the length of the elements is greater than the casting functions passed through
the
returns parameter, the RAW will be used instead of raising an exception.
Sometimes query results include lists, as it happens when using
COLLECT or other
collection functions,
neo4j-rest-client is able to handle these cases by passing
lists or tuples in the results list. Usually these lists contain items of the
same type, so passing only one casting function is enough, as all the items are
treated the same way.
>>> a = gdb.nodes.create() >>> [a.relationships.create("rels", gdb.nodes.create()) for x in range(3)] [<Neo4j Relationship:>, <Neo4j Relationship:>, <Neo4j Relationship:>] >>>>> gdb.query(q, returns=(client.Node, [client.Node, ]))[0] [<Neo4j Node:>, [<Neo4j Node:>, <Neo4j Node:>, <Neo4j Node:>]]
>>> gdb.query(q, returns=(client.Node, (client.Node, )))[0] [<Neo4j Node:>, (<Neo4j Node:>, <Neo4j Node:>, <Neo4j Node:>)]
>>> gdb.query(query, returns=[client.Node, client.Iterable(client.Node)])[0] [<Neo4j Node:>, <listiterator at 0x7f6958c6ff50>]
However, if you know in advance how many elements are going to be returned as the result of a collection function, you can always customize the casting functions:
>>> gdb.query(q, returns=(client.Node, (client.Node, lambda x: x["data"], client.Node )))[0] [<Neo4j Node:>, (<Neo4j Node:>, {u'tag': u'tag1'}, <Neo4j Node:>)]
Query statistics¶
Extra information about the execution of a each query is stored in the property stats.
>>>>> results = gdb.query(query, data_contents=True) >>> results.stats {u'constraints_added': 0, u'constraints_removed': 0, u'contains_updates': False, u'indexes_added': 0, u'indexes_removed': 0, u'labels_added': 0, u'labels_removed': 0, u'nodes_created': 0, u'nodes_deleted': 0, u'properties_set': 0, u'relationship_deleted': 0, u'relationships_created': 0}
Graph and row data contents¶
The Neo4j REST API is able to provide the results of a query in other two formats that might be useful when redering. To enable this option (which is the default only when running inside a IPython Notebook), you might pass an extra parameter to the query, data_contents. If set to True, it will populate the properties .rows as a list of rows, and .graph as a graph representation of the result.
>>>>> results = gdb.query(query, data_contents=True) >>> results.rows [[{u'name': u'M\xedchael Doe', u'place': u'T\xedjuana'}], [{u'name': u'J\xf3hn Doe', u'place': u'Texa\u015b'}], [{u'name': u'Rose 0'}], [{u'name': u'William 0'}], [{u'name': u'Rose 1'}]] >>> results.graph [{u'nodes': [{u'id': u'3', u'labels': [], u'properties': {u'name': u'M\xedchael Doe', u'place': u'T\xedjuana'}}], u'relationships': []}, {u'nodes': [{u'id': u'2', u'labels': [], u'properties': {u'name': u'J\xf3hn Doe', u'place': u'Texa\u015b'}}], u'relationships': []}, {u'nodes': [{u'id': u'45', u'labels': [], u'properties': {u'name': u'Rose 0'}}], u'relationships': []}, {u'nodes': [{u'id': u'44', u'labels': [], u'properties': {u'name': u'William 0'}}], u'relationships': []}, {u'nodes': [{u'id': u'47', u'labels': [], u'properties': {u'name': u'Rose 1'}}], u'relationships': []}]
If only one of the represenations is needed, data_contents can be either constants.DATA_ROWS or constants.DATA_GRAPH. | http://neo4j-rest-client.readthedocs.io/en/latest/queries.html | 2017-03-23T04:14:42 | CC-MAIN-2017-13 | 1490218186774.43 | [] | neo4j-rest-client.readthedocs.io |
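For example, to request just one of the two representations, pass the corresponding constant (a small sketch reusing the gdb and query objects from the examples above; the import path for constants is assumed):

>>> from neo4jrestclient import constants
>>> results = gdb.query(q, data_contents=constants.DATA_ROWS)   # rows only
>>> rows = results.rows
>>> results = gdb.query(q, data_contents=constants.DATA_GRAPH)  # graph only
>>> graph = results.graph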
Deployments¶
BlobStore can be deployed on any Ethereum blockchain. Each blockchain needs a single BlobStoreRegistry to be deployed. This is so that when later versions of BlobStore are deployed they can register with it.
On each blockchain, each BlobStore contract has a serial number that is used outside the blockchain to identify which contract a blob is stored on. This page is the authority on which contracts have which serial numbers on which blockchains.
Each BlobStore contract also has a contractId that is used within contracts.
See blobId for more information.
All current deployments are of BlobStore 1.0 compiled with Solidity 4.4 with optimization enabled, except for the deployment on Link Testnet which was compiled with Solidity 0.4.7-nightly.2016.12.3+commit.9be2fb12
Link Testnet¶
BlobStoreRegistry contract address:
0x1213575561bbc9db7d8493c84ce36da2737e8b57
Ethereum¶
BlobStoreRegistry contract address:
0x71E080a2e36753f880c060Ee38139A799C6366a5 ✔
Ethereum Classic¶
BlobStoreRegistry contract address:
0xb2a3a31c5425cab2a592b22ba6eab4dd24885a18 | http://docs.link-blockchain.org/projects/blobstore/en/latest/deployments.html | 2017-03-23T04:10:12 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.link-blockchain.org |
SILPA¶
SILPA is an acronym of Swathanthra(Mukth, Free as in Freedom) Indian Language Processing Applications. Its a web framework written using Flask micro framework for hosting various Indian language computing algorithms written in python. It currently provides JSONRPC support which is also used by web framework itself to input data and fetch result.
The modules work as standalone python packages which will serve their purpose and also they plug into the silpa-flask webframewok so that they can be accessed as web services also, or become just another webapp like the dictionary module.
Contents:
- Install Instructions
- Module structure for SILPA
- SILPA Webservice APIs
- Hacking on SILPA and related modules
- Frequently Asked Questions
- I am interested in contributing to this application. To whom should I contact?
- Can I use this application in my machine without having internet access?
- Can I use this application in windows?
- Can I host this application in my domain?
- Who is sponsoring the development of this project?
- When did this project development start?
- Is this application available in any GNU/Linux distributions?
- I found a bug in one module. How can I report?
- Credits | http://silpa.readthedocs.io/en/latest/ | 2017-03-23T04:13:17 | CC-MAIN-2017-13 | 1490218186774.43 | [] | silpa.readthedocs.io |
Functions for getting information about completed jobs and calculation outputs, as well as exporting outputs from the database to various file formats.
Bases: exceptions.Exception
Used to separate node labels in a logic tree path
Simple UI wrapper around openquake.engine.export.core.export_from_db() yielding a summary of files exported, if any.
Extract the first pair (dskey, exptype) found in export
Make all of the directories in the path using os.makedirs.
Build a zip archive from the given file names. | http://docs.openquake.org/oq-engine/2.1/openquake.engine.export.html | 2017-03-23T04:23:03 | CC-MAIN-2017-13 | 1490218186774.43 | [] | docs.openquake.org |
Cherokee support¶
Note
Recent official versions of Cherokee have an uWSGI configuration wizard. If
you want to use it you have to install uWSGI in a directory included in your
system
PATH.
- Set the UWSGI handler for your target.
- If you are using the default target (
/) remember to uncheck the
check_fileproperty.
- Configure an “information source” of type “Remote”, specifying the socket name of uWSGI. If your uWSGI has TCP support, you can build a cluster by spawning the uWSGI server on a different machine.
Note
Remember to add a target for all of your URI containing static files (ex. /media /images ...) using an appropriate handler
Dynamic apps¶
If you want to hot-add apps specify the
UWSGI_SCRIPT var in the uWSGI handler options:
- In the section: Add new custom environment variable specify
UWSGI_SCRIPTas name and the name of your WSGI script (without the .py extension) as the value.
Your app will be loaded automatically at the first request. | http://uwsgi.readthedocs.io/en/latest/Cherokee.html | 2017-06-22T20:37:50 | CC-MAIN-2017-26 | 1498128319902.52 | [] | uwsgi.readthedocs.io |
Synchronizing Recordings with the Cloud¶
Uploading Recordings¶
Using the user Token and username, the application will contact the ORamaVR Cloud and upload the recording. This might take a while since the Sound file of the recording is relatively large. All files are stored in Base64 encoding in the server and are encoded during the upload process.
Note
Only operations that reach Operation End will be uploaded. All other recordings will be available locally for the device they were recorded on.
Downloading Recordings¶
Using the user Token and username, the application will contact the ORamaVR Cloud and fetch a list of available recordings. Then, only the recordings that are not present on the local device will be downloaded from the server. They are all decoded from Base64 encoding and stored in the local file system.
Warning
Uploading and Downloading recordings require the user to be logged in to their ORamaVR account. If you are getting HTTP 500 Errors when attempting upload or download, make sure User Login is enabled in your application.
| https://docs.oramavr.com/en/4.0.2/unity/manual/vr_recorder/recuploading.html | 2022-08-08T01:54:57 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['../../../_images/recording_list_ui.jpg',
'Menu with List of Recordings'], dtype=object)
array(['../../../_images/login_check.jpg', 'User Login required'],
dtype=object) ] | docs.oramavr.com |
Windows Error: Unable to execute in the temporary directory
This article relates to Scan2CAD v8 and v9.
This article describes what to do when you see the following error message
The error message reads:
“Unable to execute file in the temporary directory. Setup aborted.
Error 5: Access is denied”
The error message is indicating that you are installing the application in a location for which you do not have full access rights.
How to resolve:
- Firstly, check your anti-virus software settings. Your AV software could be blocking your ability to install the software. You may need to pause your AV software during install.
- If your PC has restricted permissions you will need to install whilst running as Windows Administrator or choose to install in a location for which you have full rights.
- Now try installing Scan2CAD again.
- If you continue to see this error, we recommend following the troubleshooting steps suggested by DriverEasy. | https://docs.scan2cad.com/article/49-windows-error-unable-execute-temporary-directory | 2022-08-08T00:59:31 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/6040e894c6b4255ec178396b/images/6041f00c24d2d21e45edce82/file-AcZbjSdiH8.png',
None], dtype=object) ] | docs.scan2cad.com |
The following minimum material dimension values are widely used and field-proven to conduct lightning currents to ground without excessive heating or physical damage in majority of installations. These dimensions may not fulfill national or international standards.
Table 5 and Table 6 show two examples of practical minimums for ground electrodes. The materials are preferred because they are fairly corrosion-resistant, not because they provide optimum conductivity.
The following table is applicable to an arrangement of three ground rods in a triangle layout at 6-m (20-ft) spacing.
The following table is applicable to an arrangement of three 10-m (33-ft) horizontal ground electrodes in Y-shape. | https://docs.vaisala.com/r/M211786EN-B/en-US/GUID-D6627F24-8C0F-45A0-A393-FD434879E966/GUID-25E62703-20C3-4DBB-8CAF-0608CBCB73F7/GUID-AD2EEDF2-7C43-4FC4-A924-9AFCF7FAA3B0 | 2022-08-08T00:44:25 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.vaisala.com |
File printer
To setup channel for SharePoint you have to setup a channel with the file printer functionality.
Connection
To create a connection with your SharePoint portal you have to choose in the field [Path type] the option SharePoint.
Click on the [Edit] button to specify the connection to the SharePoint site.
To do this you have to enter the url to that site. E.g.
If necessary you can use credentials.
Library
Click in the dialog SharePoint options on the tab [Document library] and the button […]. Here you can select an existing library which is available for you of your SharePoint site. If you can’t connect to a library, please verify that your connection settings are correct. You can create in the library a folder by specifying a fixed name or you can use a value of your printed document by recognition. To use a recognition value, please click on the [Add] button and select a recognition method. If you can’t create new folders please verify the tab [Permissions]. In the library some fields can be available for metadata. Click on the [+] button and select a field. The value of that field can be a result of a recognition method or a fixed value. Repeat the procedure for more fields of your metadata.
File name
You can also change the file name of your document which you will archive on SharePoint. Click on the button [Add] if you want to add a value by a recognition method of your print job. | https://docs.winking.be/Tn/Article/137 | 2022-08-08T00:29:37 | CC-MAIN-2022-33 | 1659882570741.21 | [array(['/img/ad74d225ed4a47068f2a7f7584b1da25.jpeg', 'Image'],
dtype=object) ] | docs.winking.be |
STL_UNLOAD_LOG
Records the details for an unload operation.
STL_UNLOAD_LOG records one row for each file created by an UNLOAD statement. For example, if an UNLOAD creates 12 files, STL_UNLOAD_LOG will contain 12 corresponding rows.
This view is visible to all users. Superusers can see all rows; regular users can see only their own data. For more information, see Visibility of data in system tables and views.
Table columns
Sample query
To get a list of the files that were written to Amazon S3 by an UNLOAD command, you can call an Amazon S3 list operation after the UNLOAD completes. You can also query STL_UNLOAD_LOG.
The following query returns the pathname for files that were created by an UNLOAD for the last query completed:
select query, substring(path,0,40) as path from stl_unload_log where query = pg_last_query_id() order by path;
This command returns the following sample output:
query | path -------+-------------------------------------- 2320 | s3://my-bucket/venue0000_part_00 2320 | s3://my-bucket/venue0001_part_00 2320 | s3://my-bucket/venue0002_part_00 2320 | s3://my-bucket/venue0003_part_00 (4 rows) | https://docs.aws.amazon.com/redshift/latest/dg/r_STL_UNLOAD_LOG.html | 2022-08-08T00:25:22 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.aws.amazon.com |
How can I use FixMe.IT?
FixMe.IT can be used for:
- Technical support/help desk service. Remotely assist your customers and/or users.
- After-hours maintenance and support. Connect to unattended machines during after-hours.
- Access personal programs and files on the go. Setup your home or office computer for unattended access and instantly connect from anywhere.
- Online product demos. Help your prospects navigate through your offers efficiently, or demonstrate your products live to any number of remote users.
- Collaborate and review documents with your customer. Examine spreadsheets, charts and slides. Focus on important data and simplify complicated documents in real time.
- Obtain approvals for your design work from customers that are thousands of miles away. Show photos, sketches, 3D models, charts and presentations. | https://docs.fixme.it/general-questions/how-can-i-use-fixme-it | 2022-08-08T01:39:48 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.fixme.it |
Improvements
Naming changes to Interaction Traces
We've changed the naming scheme for Interaction Traces to be more clear. Instead of "ActivityClass#onCreate", you'll see "Display ActivityClass". If you want to change the name of a running Interaction, just call
setInteractionName().
New API methods for starting and stopping Interaction Traces
We've added two new methods to the NewRelic class API to give you greater control over starting and stopping Interaction Traces. The
startInteraction()method now just takes a string, no context needed. Use
endInteraction()to stop a running interaction.
New
@SkipTraceannotation to exclude methods from default instrumentation
While automatic instrumentation is one of the more convenient features of the agent, there are a few cases where it can get in the way. Should you encounter one of these cases, simply add this annotation to the method in question, and the agent will skip it during compile time instrumentation. | https://docs.newrelic.com/docs/release-notes/mobile-release-notes/android-release-notes/android-34190/?q= | 2022-08-08T01:11:31 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.newrelic.com |
After the build is successfully finished, you can run the Emulation-SW build, or Emulation-AIE build in the Vitis IDE. You can examine various reports generated by the compiler and simulate your design. For the hardware build, you can copy the SD card output for instance and then use it to boot and run the application on the hardware card.
- In the Assistant view expand a specific build target, right-click the Compile Summary (graph) and select Open in Vitis Analyzer. This opens the Compile Summary report as described in Viewing Compilation Results in the Vitis Analyzer.
- Find additional compiler-generated reports in the Explorer view by navigating from your project and select .
- To run the program for hardware emulation, in the Assistant view click the Run button () and select Run Configurations. This opens the Run Configurations dialog box to create a new run configuration or edit an existing one as shown.
- You can specify a name for the configuration, which allows you to create multiple configurations to apply at different times, or to different build targets as your design flow progresses.
- You can enable Generate Trace, and enable event trace for the emulation build using
--dump-vcdin the AI Engine simulator. See Performance Analysis of AI Engine Graph Application during Simulation for more details.
- You can enable Generate Profile to specify the
--profileoption in the AI Engine simulator and trigger a profile for all AI Engine processors or selected tiles. Reports are generated in the project ./Emulation-AIE/aiesimulator_output directory.Tip: Clock cycle count reports are generated with this option enabled for the selected tiles.
- To add additional AI Engine simulator options, select the Arguments tab and enter the option as you would from the command line.
When ready to run emulation, select.
- To debug the program, right-click on the application and select AI Engines stopping at their respective. The simulator starts in debug mode with the
main(). You can set breakpoints, single-step, and resume execution, as well as examine registers, local variables, and memory data structures. See Hardware Emulation Debug from the Vitis IDE for more information. | https://docs.xilinx.com/r/en-US/ug1076-ai-engine-environment/Running-and-Analyzing-the-Graph | 2022-08-08T02:26:56 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.xilinx.com |
Pointers can be used as arguments to the top-level function. It is important to understand how pointers are implemented during synthesis, because they can sometimes cause issues in achieving the desired RTL interface and design after synthesis. Refer to Vitis-HLS-Introductory-Examples/Modeling/Pointers on Github for examples of some of the following concepts. | https://docs.xilinx.com/r/en-US/ug1399-vitis-hls/Pointers-on-the-Interface | 2022-08-08T00:31:56 | CC-MAIN-2022-33 | 1659882570741.21 | [] | docs.xilinx.com |
Problem Guide¶
Certain problems are general enough, if only for educational purposes, to include into our API. This guide will demonstrate some of problems that are included in evol.
General Idea¶
In general a problem in evol is nothing more than an object that has .eval_function() implemented. This object can usually be initialised in different ways but the method must always be implemented.
Function Problems¶
There are a few hard functions out there that can be optimised with heuristics. Our library offers a few objects with this implementation.
The following functions are implemented.
from evol.problems.functions import Rastrigin, Sphere, Rosenbrock Rastrigin(size=1).eval_function([1]) Sphere(size=2).eval_function([2, 1]) Rosenbrock(size=3).eval_function([3, 2, 1])
You may notice that we pass a size parameter apon initialisation; this is because these functions can also be defined in higher dimensions. Feel free to check the wikipedia article for more explanation on these functions.
Routing Problems¶
Traveling Salesman Problem¶
It’s a classic problem so we’ve included it here.
import random from evol.problems.routing import TSPProblem, coordinates us_cities = coordinates.united_states_capitols problem = TSPProblem.from_coordinates(coordinates=us_cities) order = list(range(len(us_cities))) for i in range(3): random.shuffle(order) print(problem.eval_function(order))
Note that you can also create an instance of a TSP problem from a distance matrix instead. Also note that you can get such a distance matrix from the object.
same_problem = TSPProblem(problem.distance_matrix) print(same_problem.eval_function(order))
Magic Santa¶
This problem was inspired by a kaggle competition. It involves the logistics of delivering gifts all around the world from the north pole. The costs of delivering a gift depend on how tired santa’s reindeer get while delivering a sleigh full of gifts during a trip.
It is better explained on the website than here but the goal is to minimize the weighed reindeer weariness defined below:
\(WRW = \sum\limits_{j=1}^{m} \sum\limits_{i=1}^{n} \Big[ \big( \sum\limits_{k=1}^{n} w_{kj} - \sum\limits_{k=1}^{i} w_{kj} \big) \cdot Dist(Loc_i, Loc_{i-1})\)
In terms of setting up the problem it is very similar to a TSP except that we now also need to attach the weight of a gift per location.
import random from evol.problems.routing import MagicSanta, coordinates us_cities = coordinates.united_states_capitols problem = TSPProblem.from_coordinates(coordinates=us_cities) MagicSanta(city_coordinates=us_cities, home_coordinate=(0, 0), gift_weight=[random.random() for _ in us_cities]) | https://evol.readthedocs.io/en/stable/problems.html | 2022-08-08T00:53:24 | CC-MAIN-2022-33 | 1659882570741.21 | [] | evol.readthedocs.io |
Configuring approvals for SRDs and requests
This section describes how to configure approvals for service request definitions (SRDs) and service requests.
The following topics are provided:
- Configuring approvals for SRDs
- Configuring approvals for service requests
- Creating individual and group approvers
- Creating approver mappings.
To use the default approval options, you use settings in the Application Administration Console and in the SRD. Except in special cases, you do not configure anything in the approval server directly.
Note
- You must have the SRM Administrator permission to configure BMC Service Request Management approvals.
- If you are using Process Designer, ad hoc approvals are given precedence over custom approvals in Process Designer.
Warning
Do not modify or disable any of the default approval processes, rules, or chains. Doing so might break functionality or cause unexpected behavior. | https://docs.bmc.com/docs/srm90/configuring-approvals-for-srds-and-requests-514486743.html | 2020-03-28T12:21:40 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.bmc.com |
Intranet Applications for the Citrix Gateway plug-in
Configuring Intranet Applications for the Citrix Gateway plug-in for Java Intranet Applications for the Citrix Gateway plug-in
You create intranet applications for user access to resources by defining the following:
- One IP address
- A range of IP addresses
When you define an intranet application on Citrix Gateway, the Citrix Gateway plug-in for Windows intercepts user traffic that is destined to the resource and sends the traffic through Citrix Gateway.
When configuring intranet applications, consider the following:
- Intranet applications do not need to be defined if the following conditions are met:
- Interception mode is set to transparent
- Users are connecting to Citrix Gateway with the Citrix Gateway plug-in for Windows
- Split tunneling is disabled
- If users connect to Citrix Gateway by using the Citrix Gateway plug-in for Java, you must define intranet applications. The Citrix Gateway plug-in for Java intercepts traffic only to network resources defined by intranet applications. If users connect with this plug-in, set the interception mode to proxy.
When configuring an intranet application, you must select an interception mode that corresponds to the type of plug-in software used to make connections.
Note: You cannot configure an intranet application for both proxy and transparent interception. To configure a network resource to be used by both the Citrix Gateway plug-in for Windows and Citrix Gateway plug-in for Java, configure two intranet application policies and bind the policies to the user, group, virtual server, or Citrix Gateway global.
To create an intranet application for one IP address
- In the configuration utility, on the Configuration tab, in the navigation pane, expand Citrix Gateway Resources and then click Intranet Applications.
- In the details pane, click Add.
- In Name, type a name for the profile.
- In the Create Intranet Application dialog box, select Transparent.
- In Destination Type, select IP Address and Netmask.
- In Protocol, select the protocol that applies to the network resource.
- In IP Address, type the IP address.
- In Netmask, type subnet mask, click Create and then click Close.
To configure an IP address range
If you have multiple servers in your network, such as web, email, and file shares, you can configure a network resource that includes the IP range for network resources. This setting allows users access to the network resources contained in the IP address range.
- In the configuration utility, on the Configuration tab, in the navigation pane, expand Citrix Gateway Resources and then click Intranet Applications.
- In the details pane, click Add.
- In Name, type a name for the profile.
- In Protocol, select the protocol that applies to the network resource.
- In the Create Intranet Application dialog box, select Transparent.
- In Destination Type, select IP Address Range.
- In IP Start, type the starting IP address and in IP End, type the ending IP address, click Create and then click Close.
Configuring Intranet Applications for the Citrix Gateway plug. | https://docs.citrix.com/en-us/citrix-gateway/12-1/vpn-user-config/configure-plugin-connections/ng-plugin-config-network-resources-con/ng-plugin-intranet-app-windows-tsk.html | 2020-03-28T13:23:25 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.citrix.com |
RadTabView Beta provides two layouts out of the box. These are TabStripOverflowLayout and TabStripScrollLayout. The default layout is the scroll layout. The tab view layouts share a common base class called TabStripLayoutBase which in turn implements the TabStripLayout interface.
TabStripScrollLayout arranges tabs in a scroll view. If the maximum number of visible tabs is exceeded, the user can scroll to see the remaining tabs. Using TabStripScrollLayout is as easy as creating an instance:
this.tabView.getTabStrip().setTabStripLayout(new TabStripScrollLayout());
TabStripOverflowLayout puts tabs in a popup after the number of tabs exceeds max tabs. The popup is closed when a tab from the popup is selected or when the user taps outside the popup. To use TabStripOverflowLayout simply create an instance:
this.tabView.getTabStrip().setTabStripLayout(new TabStripOverflowLayout());
All layouts have a maxVisibleTabs property. It determines how many tabs will be shown on screen. If there are more tabs they will be shown in a scroll view for TabStripScrollLayout or in a popup for TabStripOverflowLayout;
tabView.getTabStrip().getLayout().setMaxVisibleTabs(5); | https://docs.telerik.com/devtools/android/controls/tabview/tab-view-layouts | 2020-03-28T12:14:43 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.telerik.com |
Search
To get started, search for "search" from within Locksmith, like so:
... and click on the "Search" search result. (So much searching!)
That's it! :)
Protecting search forms, using Liquid code
Out of the box, this lock only protects the /search url of your shop (as in).
To hide the search boxes that your theme may include elsewhere in your shop, open up the Liquid file that contains the search form in question, and locate the actual search form. Wrap it with Liquid that looks like this:
{% include 'locksmith-variables', locksmith_scope: 'search' %} {% if locksmith_access_granted %} <!-- your search form here! --> {% endif %}
As you can see, this does require manual coding. If you need a hand with this, let us know! :) | https://docs.uselocksmith.com/article/234-search | 2020-03-28T10:55:00 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ddd799f2c7d3a7e9ae472fc/images/5e27859f2c7d3a7e9ae68e75/5e27859f78e77.png',
None], dtype=object) ] | docs.uselocksmith.com |
2.1.00.02: Patch 2
This topic provides information about updates in this patch, and instructions for downloading and installing the patch.
Update in 2.1.00.02 patch
PATROL for Amazon EC2 now supports OpenJDK 11.
For a list of supported resources, see System requirements..02 patch contains all of the files that are required for a complete product release. You can install the patch over an existing installation of the KM, or as a fresh installation.
To install the PATROL for Amazon EC2 2.1.00.02 patch using a TrueSight Presentation Server
Follow the instructions in Installing using TrueSight console.
To install the PATROL for Amazon EC2 2.1.00.02 patch in a PATROL environment
Follow the typical installation instructions in Installing in a PATROL environment. | https://docs.bmc.com/docs/amazon21ec2/2-1-00-02-patch-2-858692735.html | 2020-03-28T12:44:21 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.bmc.com |
WordPress Posts (API)
The best way to display WordPress posts in your app is with a post list page. These grab content from your site via the WP-API, they can be cached for offline, and add additional features like favoriting.
Note: API based content does not support custom plugins
For example, display a list of posts or pages from your WordPress site. You can also display custom post type content, such as books, products, or speakers. This content is stored for offline use after viewing, and is typically faster than an iframe page.
Limitations of List Pages
API lists display read-only post content, they do not work with custom plugins or special content.
For example, an embedded form, slider, or custom gallery plugin will not work through the API. If you have anything other than read-only content such as a blog post, you should use an iframe page instead.
Requirements
Before you create a list page, you must have a WordPress website running version 4.7 or later.
You should be able to navigate to mysite.com/wp-json/wp/v2/posts and see data there.
Create a List Page
To create a list page, in the App Customizer go to Custom Pages => Add New.
Choose "WordPress Posts" option from the choices, and add your title and list route.
All API list pages only show for the app where you created them, so you won't see the page you just created if you go to another app.
Default Featured Image
The small image next to your posts will be pulled from your WordPress featured image. To change this image, upload a file called default.png to your offline assets under the settings tab of the app customizer. Recommended size is 150x150px.
List Routes
A list route is any API endpoint with collection data, such as posts or pages. Here are a list of example endpoints:
-
-
-
-
-
Some endpoints (such as users) require authentication. AppPresser does not have a way to handle authenticated endpoints at this time.
Custom post types and post meta data do not show up in the API by default. View this article for help.
To add custom content to the list item display, you can use template hooks.
Custom Route Parameters
The WP-API allows you to add parameters to your query to show custom results. For example, to show a certain category, you would add this url as your route:
The '40' at the end is the category term ID. You can do the same for tags by changing the parameter to ?tags=XX. How to find the category or tag ID.
There are other parameters available, such as author, orderby, exclude, and more. For a full list, please see the "Arguments" section here. You can string parameters like this:
You would put that full url as the list route in your custom page.
For more information, please view this blog post, and the WP-API documentation.
Linking to App Pages
Your posts' content can contain links that will open pages or menu items in your app. Use the page slug in a data parameter like this:
data-apppage="playlist" | https://docs.apppresser.com/article/290-wordpress-posts-api | 2020-03-28T12:09:45 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5820e6e4c697914aa838144a/file-Z3xtZs3T3W.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/543577d6e4b01359b0ede64c/images/5980a74a042863033a1b8aa2/file-vCFy69WR8D.png',
None], dtype=object) ] | docs.apppresser.com |
Database Dumps¶
It’s often useful to allow users to download a complete CKAN database in a dumpfile.
In addition, a CKAN administrator would like to easily backup and restore a CKAN database.
Creating a Dump¶
We provide two paster methods to create dumpfiles.
- db simple-dump-json - A simple dumpfile, useful to create a public listing of the datasets with no user information. All datasets are dumped, including deleted datasets and ones with strict authorization. These may be in JSON or CSV format.
- db dump - A more complicated dumpfile, useful for backups. Replicates the database completely, including users, their personal info and API keys, and hence should be kept private. This is in the format of SQL commands.
For more information on paster, see Common CKAN Administrator Tasks.
Using db simple-dump-json¶.
Backing up - db dump¶
Restoring a database - db load¶
Daily Dumps¶.
Serving the Files¶> | https://docs.ckan.org/en/ckan-1.8/database-dumps.html | 2020-03-28T11:18:08 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.ckan.org |
Difference between revisions of "Vioso"
Latest revision as of 21:28, 3 July 2019
Overview[edit]
TouchDesigner includes the Vioso TOP which lets you read in calibration data retrieved from the VIOSO Calibrator Software. Vioso's auto-alignment technology makes the setup of installations like multi-projector panorama displays and multi-projector setups for projection mapping more automated and quick.
VIOSO Calibrator is part part of the VIOSO Anyblend and VIOSO Anyblend VR&SIM software packages but can be used individually.
You can download VIOSO Anyblend from here and use it as a trial for 30 days with an demo overlay. The procedure is to use Vioso with your projectors, do the alignment, which outputs files that TouchDesigner understands to do the appropriate image warping and blending, taking into account differences in projector light and tinted surfaces.
See also CamSchnappr, which is less automated and required a 3D model of the objects you are projecting on. CamSchnappr has been upgraded to do multi-projector blending on a given 3D model.
See Scalable Displays, kantanMapper, camSchnappr, projectorBlend
Requirements[edit]
Depending on how many channels need to be calibrated, the projectors need to be properly connected to the computer and the VIOSO Calibrator Software must be installed. Also a TouchDesigner build needs to be installed and a camera needs to be connected to the computer. We have had good experience with the Logitech HD Pro range of Webcams. When setting up the scene it is advised to have the projector overlap between 10% and 25% of the projector area. The camera needs to be able to see the complete scene and should be mounted so it cannot move during the calibration process.
Starting Calibration[edit]
Start VIOSO Calibrator and follow the instructions as outline in the VIOSO Anyblend Manual
For a simple test calibration on a flat screen:
- start VIOSO Calibrator
- click the Calibrate Button
- select the "single client calibration" on the next dialog
- select the projectors to calibrate in the Displays and Camera Dialog
- if you setup your projectors in a NVidia Surround, NVidia Mosaic, AMD Eyefinity or Matrox or Datapath single large display configuration, use the "Display Split" function to identify the correct projectors
- select the "flat screen" setup in the Displays and Camera Dialog
- select the camera you want to use in the Displays and Camera Dialog
- select the appropriate setup of your projectors in the Display arrangement Dialog
- in the Adjust Camera Dialog you have the ability to draw a mask around the area that the projector is projecting onto, if the camera view area is much larger then the actual projection screen, this will improve results during calibration
- adjust the size of the scan pattern so that the camera can clearly identify each projected circle
- after successful calibration, you can fine adjust the result in the Final Adjustments Dialog using keystone and mesh warping tools
- save the project and select Export Calibration from the File Menu of Calibrator
- select the appropriate calibration setup from the list
- select "Wings/VIOSO" format from the Export format dropdown
- select the file name and export path by clicking the Select button beside the file name field.
- make sure no toggles are selected and hit export
- the result should be a single .vwf file which you can load into the Vioso TOP. | https://docs.derivative.ca/index.php?title=Vioso&diff=16395&oldid=10121 | 2020-03-28T12:19:49 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.derivative.ca |
My.Application.DoEvents Method
Processes all Windows messages currently in the message queue.
' Usage My.Application.DoEvents() ' Declaration Public Sub DoEvents()
Remarks.
Note
The My.Application.DoEvents method does not process events in exactly the same way as the form does. Use multithreading to make the form directly handle the events. For more information, see Multithreading in Visual Basic.
Warning
If a method that handles a user interface (UI) event calls the My.Application.DoEvents method, the method might be re-entered before it finishes. This can happen because the My.Application.DoEvents method processes Windows messages, and Windows messages can raise events.
Tasks
The following table lists an example of a task involving the My.Application.DoEvents method.
Example
This example uses the My.Application.DoEvents method to allow the UI for TextBox1 to update.
Private Sub TestDoEvents() For i As Integer = 0 To 10000 TextBox1.Text = i.ToString My.Application.DoEvents() Next End Sub
This code should be in a form that has a TextBox1 component with a Text property.
Requirements
Namespace:Microsoft.VisualBasic.ApplicationServices
Class:WindowsFormsApplicationBase
Assembly: Visual Basic Runtime Library (in Microsoft.VisualBasic.dll)
Availability by Project Type
Permissions
The following permissions may be necessary:
For more information, see Code Access Security and Requesting Permissions.
See Also
Reference
WindowsFormsApplicationBase.DoEvents | https://docs.microsoft.com/en-us/previous-versions/visualstudio/visual-studio-2008/bd65th41(v=vs.90)?redirectedfrom=MSDN | 2020-03-28T12:48:52 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
FAQs
Frequently asked questions
- How to Find Your Theme's Version Number
- How to Find Your FTP Login Details
- How to Create an Image Gallery
- How to Change the Footer Credits
- Changing the Number of Portfolio Items per Page
- Changing the Number of Blog Posts per Page
- Why is the Header Image or Video Not Showing
- How to Display Featured Images on Archives | https://docs.seothemes.com/category/40-faqs | 2020-03-28T11:48:48 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.seothemes.com |
Released on: July 2, 2018
New capabilities
- The complex, nested constructs found in widely-used ontologies can now be successfully imported from OWL, visualized in diagrams, augmented with new concept models, and exported to OWL.
- Complements between classes are supported.
- Property restrictions can now be modeled in a namespace other than that of its domain, range, or restricted property.
- All subtypes of OWL Object Property (e.g., owl:TransitiveProperty) can now be imported and exported.
- A property that has multiple domains is supported.
- A UML property used in multiple classes is now interpreted in OWL as a domain that is the union of those classes.
- Unqualified cardinality restrictions is now supported by {subsets} in UML.
- A minimum cardinality of 0 in OWL is retained as a {subsets} to support its use as a flag.
- A literal annotation may now have either a language or a datatype.
- A datatype property can now be modeled as an association end or a class attribute.
- Both IRI and literal annotations are now supported.
- A maximum cardinality restriction without a minimum cardinality restriction on a property is now supported.
Usability improvements
- The smart manipulator on shapes in a Concept Model diagram shows only relevant relations.
- Importing an OWL ontology into a non-CCM project now prompts to load the CCM profile.
- Empty IRI tagged values are now automatically removed.
- The available choices are clearer when a project file is open, and a style is missing or out of date.
- The AutoStyler plugin now manages the 'Defined Elsewhere' style.
- The CCM plugin now manages the 'Default' style.
- The logging verbosity level can now be adjusted independently for the MagicDraw notification window vs log file.
- Unspecified OWL property cardinalities are now clearer, as explicit UML multiplicities.
- Both IRI and literal annotations are now available on the concept modeling diagram palette.
OWL export improvements
- Classes that are not directly owned by a package now result in a warning.
Bug fixes
- Importing or exporting complex, nested class expressions involving unions, intersections, complements, and restrictions no longer results in a loss of fidelity.
- Annotation language and datatype are no longer lost or forced to be "en".
- IRI annotations are no longer lost on importing and exporting.
- Importing resources with local names that start with a number no longer fails.
- Annotations on an ontology are no longer incorrectly emitted as annotation assertions.
- Messages appearing in the notification window no longer have the beginning of the URL stripped off.
- Importing a restriction that has another restriction as its filler no longer causes a subtle notation inconsistency.
- Opening a project without AutoStyler installed no longer results in a prompt to update the 'Defined Elsewhere' style.
- Importing a property domain that is an intersection of classes no longer creates multiple named properties in UML.
- Importing a property with multiple domains now results in an intersection of those domains as the owner of the UML property.
- Exporting a property with multiple domains to OWL results in a union of those domains as the domain of the property.
- Exporting a property with multiple types to OWL results in a union of those types as the range of the property.
- Importing an ontology into CCM no longer refers to «Anything» as «Property Holder» in log messages.
- Importing an ontology into CCM no longer ignores annotations on ontologies imported by that ontology. | https://docs.nomagic.com/display/NMDOC/What%27s+New+in+Cameo+Concept+Modeler+19.0+LTR | 2020-03-28T12:28:13 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.nomagic.com |
Using the Page Builder template
The Page Builder template creates a blank canvas for you to work with. This is useful for page builder plugins such as Beaver Builder, Elementor or SiteOrigin Page Builder.
Please note that our themes do not include a page builder. Page builder templates used in theme demos are created with page builder plugins.
To assign the Page Builder template to a page, navigate to the Edit Page screen and locate the Page Attributes field:
Select the Page Builder option from the Template dropdown menu and save your changes by clicking Publish. | https://docs.seothemes.com/article/63-using-the-page-builder-template | 2020-03-28T11:37:49 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5a03f50f2c7d3a272c0d866f/images/5a344a2304286346b0bc8fd9/file-qNhe3iMbgf.png',
None], dtype=object) ] | docs.seothemes.com |
What is the latest progress of the project?
We usually have many development efforts going on at once with the node, the GUI, and the contracts. Check out the project tracker (linked below) to see the current development status.
Resources:
Project Tracker here..
Resources:
Penalty Deposits
Service Agreement Protocol
Is there a list of external adapters available?
Currently, the community maintains lists of available external adapters.
Resources:
Chainlink External Adapter List
How many nodes are currently running?
The Chainlink Market keeps a list of node operators registered with them across multiple networks.
Resources:
Chainlink Market:
How to use Chainlink with Truffle
Add Chainlink to your existing project
Penalty Deposits
What wallet do I use to store LINK?
Any wallet that handles ERC20 tokens should work fine. The ERC677 token standard that the LINK token implements still retains all functionality of ERC20 tokens.
Will there be a token swap?
No. The Chainlink network’s main net will operate on top of the Ethereum main net. As additional smart contract platforms gain native support by the Chainlink network in the future, details will be released about how to transfer LINK to that blockchain.
Updated 3 months ago | https://docs.chain.link/docs/faq | 2020-03-28T11:31:21 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.chain.link |
Silverlight 1.0 - Development with JavaScript
Microsoft. | https://docs.microsoft.com/en-us/previous-versions/bb188266(v=msdn.10)?redirectedfrom=MSDN | 2020-03-28T12:48:36 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
Have you contributed to the Bisq Network? Great, and thank you! Here is how to request compensation for your work, and vote on requests from others.
If you’re not sure how to start contributing, check out our Contributor Checklist for tips on getting started and figuring out what to work on.
Background
Compensation requests are generally made for work done in the current DAO cycle—after the end of the last proposal phase, and before the end of the current one—but you can make a request for work done any time in the past.
Each DAO cycle is roughly 1 month long, but timing is based on the Bitcoin block height, so exact dates for the start & end of DAO cycles vary.
Make sure you check the Bisq DAO dashboard to get an idea of the submission deadline for the current cycle:
Block confirmation times can vary quite a bit, so keep an eye on these dates, and don’t wait until the last minute!
You must file a compensation request before the end of the current proposal phase in order to have it evaluated in the current voting cycle. If you miss it, don’t worry—you can submit your request in the next proposal phase.
Submit your compensation request
List your work in a new GitHub issue
Making compensation requests and voting takes place in the Bisq DAO, but data stored there is minimized to decrease the burden on the peer-to-peer network (and on the Bitcoin network).
Hence the GitHub issue. All the details of what work you did and why it’s valuable can get long, so you’ll put those details in a GitHub issue, and then link to that issue in your compensation request on the Bisq DAO.
Create the issue
The issue should go in the bisq-network/compensation repository, and it should be titled in the following format:
For Cycle N
Where N is the number of the current cycle. Please stick to this convention—it’s cleaner and much easier to track when looking back in time.
List your work
Your issue needs to convince Bisq stakeholders what you did, how much it’s worth, and why it’s valuable.
In order to make your case as strong as possible, your request should include the following information:
The total amount you are requesting in BSQ
Links to issues, pull requests, and other "evidence" for any work you want to be compensated for
Comments that help explain what the work is, why it is valuable, etc.
Links to role reports, if you hold any roles in the Bisq network
Even then, not all stakeholders will be familiar with your work, so it’s important to be as thorough as you can when making a compensation request so there’s enough context for those unfamiliar with your work to make an informed vote.
Value your work
As mentioned above, Bisq contributions are only eligible for compensation once delivered. For code, that means merged to master, and for non-code contributions that means already delivered. How you determine the value you request is up to you, but it should be based on the value of the contribution you made, not on the raw time you spent.
A good rule of thumb, if you are unsure about the value of your contribution: consider what you would charge for your work if you did it as a freelancer.
For example, it might be reasonable to request 50 BSQ for fixing some typos in a doc. But requesting 1000 BSQ for that same task, just because it took you a few hours to read through the doc, will probably be rejected.
See the current BSQ price, open trades, trade history and other information on the Bisq Markets page.
Wait for team lead review
Per the compensation request review process, your compensation request will be reviewed by the team lead(s) responsible for the work you delivered. You may be asked for further information or to revise your compensation request. For this reason, please do not submit your compensation request to the DAO in the next step until this review process is complete!
File your compensation request in the DAO
When the team lead review process is complete, you’re ready to file your request for DAO voting.
BSQ is issued on Bisq when a compensation request is approved through DAO voting, so your compensation request needs to be filed there in order for you to actually be paid.
Once you’ve documented all details of your request in a new issue on GitHub, make a new compensation request proposal on Bisq:
Make sure you select
Compensation request as the proposal type. Also make sure you use a name that stakeholders will recognize (or at least one they can cross-reference with your other online profiles like GitHub, Keybase, forum, etc).
For the proposal link, be sure to use this format:, where
# is the number of your GitHub issue. For example, if your compensation request’s GitHub URL is, the URL in your DAO compensation request should be. Don’t copy the GitHub link directly—it won’t work!
When you’re ready, click
Make proposal to confirm your proposal for voting in the current cycle. Proposal data cannot be edited, so make sure everything is correct, especially the amount you’re requesting.
If you need to make a change, you can delete the proposal and make a new one while the proposal phase is still active, but you’ll need to pay the proposal fee again. Once the proposal phase is over, proposals cannot be added or removed until the next proposal phase.
The BSQ and BTC is automatically added to your proposal transaction by Bisq, but the BTC needs to be in your Bisq BTC wallet. Make sure you have enough in there before making your request. Keep in mind that you’ll also need some BSQ and BTC for the other proposal phases (vote & vote reveal).
When you successfully submit your proposal in the DAO, it’ll propagate across the Bisq peer-to-peer network and be ready for stakeholders to vote on in the voting phase. If your request is approved, you will see the BSQ you requested in your wallet after the voting phase is over.
Vote on requests from others
Questions
If something doesn’t make sense, don’t hesitate to reach out. There’s a community of people to help you on Keybase, the Bisq forum, and the /r/bisq subreddit.
Learn more
BSQ is a core element of Bisq’s governance mechanism, allowing contributors and users to have a hand in crafting the strategy of the project through a voting process.
You can learn more about the overall mechanism in this doc and these videos.
Our user reference covers more practical details on using the Bisq DAO, and our technical reference covers technical details. Check out this page for all Bisq DAO resources. | https://docs.bisq.network/compensation.html | 2020-03-28T11:29:22 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['./images/check-dao-timing.png',
'Estimated timeframe for Bisq DAO Cycle 2'], dtype=object)
array(['./images/make-compensation-request.png',
'Make a new compensation request in the Bisq DAO'], dtype=object)] | docs.bisq.network |
Workflow Web Service
The Workflow Web service provides a workflow interface for remote clients to perform activities such as to get information about workflow for an item or workflow task, to start a workflow, or to get workflow templates.
To use the Workflow Web service library, you must generate a proxy class in either Microsoft Visual C# or Microsoft Visual Basic through which you can call the various Web service methods.
The Web Services Description Language (WSDL) for the Workflow Web service endpoint is accessed through workflow.asmx?wsdl.
The following example shows the format of the URL to the Workflow WSDL file.
/customsite
/_vti_bin/workflow.asmx
If you do not have a custom site, you can use the following URL:
/_vti_bin/workflow.asmx (). | https://docs.microsoft.com/en-us/previous-versions/office/developer/sharepoint-2007/aa981383(v=office.12)?redirectedfrom=MSDN | 2020-03-28T12:07:20 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.microsoft.com |
Content Identification (ids)¶
Description
Different ids, UIDs, integer ids or whatever can identify your Plone content and give access to it.
Id¶
Content id generally refers the item id within the folder.
Together with folder path this identifies the content in unique way.
Naturally, this id changes when the content is renamed or moved.
Use traversing to resolve object by path+id.
UID and UUID¶
UID is a unique, non-human-readable identifier for a content object which stays on the object even if the object is moved.
Plone uses UUIDs for
Storing content-to-content references (Archetypes, ReferenceField)
Linking by UIDs - this enables persistent links even though the object is moved
Plain UID is supported by Archetypes only and is based on reference_catalog
UUID is supported by Archetypes and Dexterity both and you should use this for new projects
UIDs are available for Archetypes content and unified UUIDs for both Archetypes and Dexterity content items since
plone.app.dexterity version 1.1.
Note
If you have pre-Dexterity 1.1 content items you must run a migration step in portal_setup to give them UUIDs.
To get object UUID you can use plone.app.uuid package.
Getting object UUID:
from plone.uuid.interfaces import IUUID # BrowserView helper method def getUID(self): """ AT and Dexterity compatible way to extract UID from a content item """ # Make sure we don't get UID from parent folder accidentally context = self.context.aq_base # Returns UID of the context or None if not available # Note that UID is always available for all Dexterity 1.1+ # content and this only can fail if the content is old not migrated uuid = IUUID(context, None) return uuid
Looking up object by UUID:
Use plone.app.uuid.utils.uuidToObject:
from plone.app.uuid.utils import uuidToObject ... obj = uuidToObject(uuid) if not obj: # Could not find object raise RuntimeError(u"Could not look-up UUID:", uuid)
More info:
UUID Acquisition Problem With Dexterity Content Types¶
Make sure your Dexterity content type has the plone.app.referenceablebehavior.interfaces.IReferenceable behavior enabled.
If not, when querying for an object’s UUID, you will get its parent UUID.
Then you can end up with a lot of objects with the same UUID as their parent.
If you run into this issue, here’s an easy upgrade step to fix it:
import transaction from plone.uuid.handlers import addAttributeUUID from Products.CMFCore.utils import getToolByName ... def recalculate_uuids(setup_tool): # Re-import types definition, so IReferenceable is enabled. setup_tool.runImportStepFromProfile( "profile-my.package:default", 'typeinfo') catalog = getToolByName(setup_tool, 'portal_catalog') for index, brain in enumerate(catalog(portal_type="my.custom.content.type")): obj = brain.getObject() if not getattr(obj, '_plone.uuid', None) is None: # If an UUID has already been calculated for this object, remove it delattr(obj, '_plone.uuid') # Recalculate object's UUID addAttributeUUID(obj, None) obj.reindexObject(idxs=['UID']) if index % 100 == 0: # Commit every 100 items transaction.commit() # Commit at the end transaction.commit()
Make sure to have the IReferenceable behavior listed in the content type XML definition before running the upgrade step.
Note
This upgrade step will recalculate the UUID for all “my.custom.content.type” objects.
intids¶
Integer ids (“intids”) are fast look-up ids provided by
plone.app.intid and
five.intid packages.
Instead of relying on globally unique identifier strings (UIDs) they use 64-bit integers, making low-level resolution faster. | https://docs.plone.org/develop/plone/content/uid.html | 2020-03-28T12:52:36 | CC-MAIN-2020-16 | 1585370491857.4 | [] | docs.plone.org |
Setting Your Preferences¶
After logging in to a Plone web site, you can change your personal preferences for information about your identity and choice of web site settings.
After logging in, your full name will show on the toolbar.
Click on your name to open the sub-menu, then click on the Preferences link to go to your personal area:
Date entry fields include:
Wysiwyg editor - Plone comes standard with TinyMCE, an easy to use graphical editor to edit texts, link to other content items and so forth. Your site administrator might have installed alternatives, though.
Language - On multilingual sites, you can select the language that you create content in most often. Plone excels at offering multilingual support.
Time zone - If you work in a different timezone than the server default, you can select it here.
Personal information¶
Now let’s switch over to the “Personal Information” tab:
Full Name- If your name is common, include your middle initial or middle name.
E-mail address - REQUIRED - You may receive emails from the website system, or from a message board, if installed, etc. When an item is required, a little red dot will show alongside the item.
Home page web address - If you have your own web site or an area at a photo-sharing web site, for instance, enter the web address here, if you wish, so people can find out more about you.
Biography text box - Enter a short description of yourself here, about a paragraph or so in length.
Location - This is the name of your city, town, state, province, or whatever you wish to provide.
Portrait photograph upload - The portrait photograph will appear as a small image or thumbnail-size image, so it is best to use a head shot or upper-torso shot for this.
You can change your preferences whenever you wish.
Changing your password¶
The last tab allows you to change your password.
Note
Plone is used by a variety of organisations. Some of these have centralized policies on where you can change your password, because this might also involve your access to other computer resources. In those cases, this screen might have been disabled.
| https://docs.plone.org/working-with-content/introduction/setting-your-preferences.html | 2020-03-28T12:40:49 | CC-MAIN-2020-16 | 1585370491857.4 | [array(['../../_images/show-preferences.png', 'Show Preferences'],
dtype=object)
array(['../../_robot/personal-preferences.png', 'Personal Preferences'],
dtype=object)
array(['../../_images/personal-information.png', 'Personal Information'],
dtype=object)
array(['../../_images/change-password.png', 'Change Password'],
dtype=object) ] | docs.plone.org |
resolve_name¶
astropy.utils.introspection.
resolve_name(name, *additional_parts)[source]¶
Resolve a name like
module.objectto an object and return it.
This ends up working like
from module import objectbut is easier to deal with than the
__import__builtin and supports digging into submodules.
- Parameters
- name
str
A dotted path to a Python object–that is, the name of a function, class, or other object in a module with the full path to that module, including parent modules, separated by dots. Also known as the fully qualified name of the object.
- additional_partsiterable, optional
If more than one positional arguments are given, those arguments are automatically dotted together with
name.
- Raises
ImportError
If the module or named object is not found.
Examples
>>> resolve_name('astropy.utils.introspection.resolve_name') <function resolve_name at 0x...> >>> resolve_name('astropy', 'utils', 'introspection', 'resolve_name') <function resolve_name at 0x...> | https://docs.astropy.org/en/stable/api/astropy.utils.introspection.resolve_name.html | 2022-06-25T08:36:58 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.astropy.org |
View web interfaces hosted on Amazon EMR clusters Control network traffic with security groups. nodes. These web sites are also only available on local web servers on the nodes.
The following table lists web interfaces that you can view on cluster instances. These Hadoop interfaces are available on all clusters. For the master instance interfaces, replace
master-public-dns-name with the Master public DNS listed on the cluster Summary tab in the EMR console. For core and task instance interfaces, replace
coretask-public-dns-name with the Public DNS name listed for the instance. To find an instance's Public DNS name, in the EMR console, choose your cluster from the list, choose the Hardware tab, choose the ID of the instance group that contains the instance you want to connect to, and then note the Public DNS name listed for the instance. for Firefox or SwitchyOmega for Chrome to manage your SOCKS proxy settings. This method lets you automatically filter URLs based on text patterns and limit the proxy settings to domains that match the form of the master node's DNS name. For more information about how to configure FoxyProxy for Firefox and Google Chrome, see Option 2, part 2: Configure proxy settings to view websites hosted on the master node.
If you modify the port where an application runs via cluster configuration,
the hyperlink to the port will not update in the Amazon EMR console. This is because the console
doesn't have the functionality to read
server.port configuration.
With Amazon EMR version 5.25.0 or later, you can access Spark history server UI from the console without setting up a web proxy through an SSH connection. For more information, see One-click access to persistent Spark history server.
Topics | https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-web-interfaces.html | 2022-06-25T08:57:05 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.aws.amazon.com |
Duende.BFF adds endpoints for managing typical session-related operations like triggering login and logout and getting information about the currently logged-on user. These endpoint are meant to be called by the frontend.
In addition we add an implementation of the OpenID Connect back-channel notification endpoint to overcome the restrictions of third party cookies in front-channel notification in modern browsers.
You enable the endpoints by adding the relevant services into the DI container:
public void ConfigureServices(IServiceCollection services) { // Add BFF services to DI - also add server-side session management services.AddBff(options => { // default value options.ManagementBasePath = "/bff"; }; // rest omitted }
Endpoint routing is used to map the management endpoints:
public void Configure(IApplicationBuilder app) { // rest omitted app.UseEndpoints(endpoints => { endpoints.MapBffManagementEndpoints(); });
MapBffManagementEndpoints adds all BFF management endpoints. You can also map every endpoint individually by calling the various MapBffManagementXxxEndpoint APIs, for example endpoints.MapBffManagementLoginEndpoint().
The following describes the default behavior of those endpoints. See the extensibility section for more information how to provide custom implementations.
The login endpoint triggers authentication with the scheme configured for challenge (typically the OpenID Connect handler).
GET /bff/login
By default the login endpoint will redirect back to the root of the application after authentication is done. Alternatively you can use a different local URL instead:
GET /bff/login?returnUrl=/page2
The user endpoint returns data about the currently logged-on user and the session.
To protect against cross-site request forgery, you need to add a static header to the GET request. Both header name and value can be configured on the options.
GET bff/user x-csrf: 1
If there is no current session, the user endpoint will return a 401 status code. This endpoint can also be used to periodically query if the session is still valid.
If your backend uses sliding cookies, you typically want to avoid that querying the session will extend the session lifetime. Adding the slide=false query string parameter to the URL will prohibit that.
This features requires either usage of server-side sessions, or .NET 6 or higher (or both).
GET bff/user?slide=false x-csrf: 1
If there is a valid session, the user endpoint returns a JSON array containing the contents of the ASP.NET Core authentication session and BFF specific management data, e.g.:
[ { "type": "sid", "value": "173E788068FFB728806501F4F46C52D6" }, { "type": "sub", "value": "88421113" }, { "type": "idp", "value": "local" }, { "type": "name", "value": "Bob Smith" }, { "type": "bff:logout_url", "value": "/bff/logout?sid=173E788068FFB728806501F4F46C52D6" }, { "type": "bff:session_expires_in", "value": 28799 }, { "type": "bff:session_state", "value": "q-Hl1V9a7FCZE5o-vH9qpmyVKOaeVfMQBUJLrq-lDJU.013E58C33C7409C6011011B8291EF78A" } ]
You can customize the contents of the ASP.NET Core session via the OpenID Connect handler’s ClaimAction infrastructure, or using claim transformation.
Duende.BFF adds three additional elements to the list:
bff:session_expires_in
This is the number of seconds the current session will be valid for
bff:session_state
This is the session state value of the upstream OIDC provider that can be use for the JavaScript check_session mechanism (if provided).
bff:logout_url
This is the URL to trigger logout. If the upstream provider includes an sid claim, the BFF logout endpoint requires this value as a query string parameter for CSRF protection. This behavior can be configured on the options.
The silent login endpoint is designed to trigger authentication much in the same way the login endpoint would, but in a non-interactive way.
The expected usage pattern would be that the application code loads in the browser and first triggers a request to the User Endpoint, and if that indicates that there is no session in the BFF backend, then the Silent Login Endpoint can be requested to automatically log the user in (assuming there is an existing session at the OIDC provider).
This non-interactive design relies upon the use of an iframe to make the silent login request. The result of the silent login request in the iframe will then use postMessage to notify the parent window of the outcome. If the result is that a session has been established, then the application logic can either re-trigger a call to the User Endpoint, or simply reload the entire page (depending on the preferred design).
To trigger the silent login, the applicaiton code must have an iframe and then set its src to the silent login endpoint. For example in your HTML:
<iframe id="bff-silent-login"></iframe>
And then in JavaScript:
document.querySelector('#bff-silent-login').src = '/bff/silent-login';
To then receive the result, the application would handle the message event in the browser and look for the data.isLoggedIn property on the event object:
window.addEventListener("message", e => { if (e.data && e.data.source === 'bff-silent-login' && e.data.isLoggedIn) { // we now have a user logged in silently, so reload this window window.location.reload(); } });
The silent login endpoint was added in version 1.2.0.
This endpoint triggers local and upstream logout. If the upstream IdP sent a session ID, this must be appended to the URL:
GET /bff/logout?sid=xyz
By default the logout endpoint will redirect back to the root of the application after logout is done. Alternatively you can use a local URL instead:
GET /bff/logout?sid=xyz&returnUrl=/loggedout
The logout endpoint will trigger revocation of the user’s refresh token (if present). This can be configured on the options.
The diagnostics endpoint returns the current user and client access token for testing purposes.
GET /bff/diagnostics
This endpoint is only enabled in Development mode.
The /bff/backchannel endpoint is an implementation of the OpenID Connect Back-Channel Logout specification.
The endpoint will call the registered session revocation service to revoke the user session when it receives a valid logout token. You need to enable server-side session for this feature to work.
By default, only the specific session of the user will be revoked. Alternatively, you can configure the endpoint to revoke every session that belongs to the given subject ID. | https://docs.duendesoftware.com/identityserver/v6/bff/session/management/ | 2022-06-25T07:47:06 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.duendesoftware.com |
Globalization and localization for Windows Phone 8
[ This article is for Windows Phone 8 developers. If you’re developing for Windows 10, see the latest documentation. ]
To develop your app for more than one language, you must globalize and localize your app. You have different options for localizing and globalizing your app, depending on whether it’s a managed app or a native app.
In managed apps, most of the globalization and localization functionality that you need to implement is already built into the .NET Framework. The Windows Phone 8 project templates in Visual Studio for managed apps also contain app code for localization support by default. Using these features, you can more easily reach customers for your apps in many other countries and regions.
Globalization and localization design for Windows Phone 8 native apps shares many of the same design principles as managed apps. Also, testing and submission procedures of native apps are similar to managed apps. However, although the Windows Phone 8 SDK provides native apps access to standard Win32 APIs that allow native apps to acquire run-time language and region context, there is no support for resource management APIs analogous to those APIs offered for managed apps.
This topic contains the following sections.
Globalizing your app
A globalized app will appear to be perfectly adapted to a user's cultural and business environment. Your app should display data, such as date information and numbers, in a way that is familiar to the user, and should correctly handle user input. Thanks to the .NET Framework, globalizing your app is a straightforward task. For more information about how to best globalize your app, see the following topics.
Localizing your app
By following a few simple steps, you can design and develop apps that can be easily localized, or adapted to, a specific local market. This process mostly involves the text strings in your app and the app bar, if the app bar menu items contain text. Additionally, you can choose to localize your app title. For more information about how to best localize your app, see the following topics.
Testing your app
Use the Windows Phone Emulator to test your app in each display language that your app targets. For more information about how to use the Windows Phone Emulator, see How to test region settings in the emulator for Windows Phone 8.
When changing a display language, verify that the language of your app UI automatically updates to that language. If it does not, you may not have provided a resource file for that language, as described in How to build a localized app for Windows Phone 8. If a resource file is not detected, a different language is displayed according to the resource fallback process described in Packaging and Deploying Resources.
For more info about changing the display language, testing localized strings, and testing localized app titles, see How to test a localized app for Windows Phone 8.
Submitting your app
When you’re ready to submit your app, you’ll need to include a few additional pieces of info in the Dev Center during the submission process:
Metadata for each language that your app supports.
A price, which will automatically determine the cost of your app in other countries/regions. For more info, see Define pricing and market selection.
You can also choose to opt in for worldwide distribution. This means that in the future, your app will be automatically distributed into any new country or region that Windows Phone supports.
Microsoft recommends that you develop and submit a single app that supports multiple languages, instead of developing and submitting separate apps for each language type.
For info about app submission, see Submit your app. | https://docs.microsoft.com/en-us/previous-versions/windows/apps/ff637522(v=vs.105)?redirectedfrom=MSDN | 2022-06-25T07:30:46 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.microsoft.com |
SilkyEvCam
Century Arks is proposing the SilkyEvCam, an industrial grade event camera featuring full compatibility with Metavision Intelligence software.
The camera is responsible for data sampling, time stamping and data packing for transmission of the sensor events over a USB 3.0 interface.
Highlights
Gen3.1 pixel-individually auto-sampling image sensor
VGA Resolution (640x480 pixels)
Wide Dynamic Range (up to 120dB)
Contrast Detection (CD) events support
Dimensions of only 30x30x36mm
Weight of 40g with highly efficient heat dissipation, electrical isolation and overall casing shielding
Supports any C/CS mount compatible lens, from 8mm objective lens to microscopes’ or telescopes’ imaging ports
Power supply and data exchange with standard USB 3.0 interface
Event time-stamping with microsecond (µs) precision
To buy SilkyEvCam or request more information, fill this form. | https://docs.prophesee.ai/stable/hw/partners/silky_ev_cam.html | 2022-06-25T07:05:03 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['../../_images/silky_ev_cam.jpg', 'SilkyEvCam'], dtype=object)] | docs.prophesee.ai |
The application profile describes your application, identifies the policy with which to evaluate the application, and provides metadata that enables a thorough analysis of security performance across all the applications in your organization.
You can also manage application profiles with the Applications REST API.
To access the Applications page, click All Applications on the Veracode Platform homepage.
From the Applications page, with the appropriate roles you can perform these actions:
- Add an application to the portfolio.
- Bulk-add several applications at one time to the portfolio.
- Edit an existing application profile.
- Delete an existing application profile. | https://docs.veracode.com/r/request_profile | 2022-06-25T08:52:02 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.veracode.com |
Merchant Generated API Token Changes - April 11, 2017
Beginning April 18 Clover will severely reduce the rate limits and regularly expire merchant generated API tokens. Merchant generated API tokens are designed for development and testing purposes only, and are not supported for production apps. There is no impact if you retrieve API tokens programmatically using OAuth or the Android SDK. Merchant generated API tokens in our sandbox environment are also not impacted. We'd like to remind developers merchant generated API token misuse may result in termination of the developer's account or API token deletion. After April 18, we reserve the right to further reduce rate limits on merchant generated tokens without notice. Please refer to our Developer Docs on how to programmatically generate API tokens using OAuth. For any additional assistance, please use or contact us via email at [email protected].
Updated 10 months ago | https://docs.clover.com/docs/changes-to-the-merchant-generated-api-token | 2022-06-25T08:29:38 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.clover.com |
Configure Microsoft Edge Legacy settings in Configuration Manager
Important
If you're using Microsoft Edge version 77 or later, and are trying to open the settings pane, enter
edge://settings/profiles in the browser address bar instead of search. For more information, see Get to know Microsoft Edge.
This article is for IT professionals to manage Microsoft Edge Legacy settings with Microsoft Endpoint Configuration Manager.
Applies to: Configuration Manager (current branch)
For customers who use the Microsoft Edge Legacy web browser on Windows 10 clients, create a Configuration Manager compliance policy to configure the browser settings.
Warning
This feature is deprecated. Support ends for the Microsoft Edge Legacy desktop application on March 9, 2021. With the April cumulative update for Windows 10, the new Microsoft Edge replaces Microsoft Edge Legacy. For more information, see New Microsoft Edge to replace Microsoft Edge Legacy with April’s Windows 10 Update Tuesday release.
This policy only applies to clients on Windows 10, version 1703 or later, and Microsoft Edge Legacy version 45 and earlier.
For more information on managing Microsoft Edge version 77 or later with Configuration Manager, see Deploy Microsoft Edge, version 77 and later. For more information on configuring policies for Microsoft Edge version 77 or later, see Microsoft Edge - Policies.
Policy.
Tip
For more information on using group policy to configure these and other settings, see Microsoft Edge Legacy group policies.
Configure Windows Defender SmartScreen settings for Microsoft Edge Legacy
This policy adds three settings for Windows Defender SmartScreen. The policy now includes the following additional settings on the SmartScreen Settings page:
Allow SmartScreen: Specifies whether Windows Defender SmartScreen is allowed. For more information, see the AllowSmartScreen browser policy.
Users can override SmartScreen prompt for sites: Specifies whether users can override the Windows Defender SmartScreen Filter warnings about potentially malicious websites. For more information, see the PreventSmartScreenPromptOverride browser policy.
Users can override SmartScreen prompt for files: Specifies whether users can override the Windows Defender SmartScreen Filter warnings about downloading unverified files. For more information, see the PreventSmartScreenPromptOverrideForFiles browser policy.
Create the browser profile
In the Configuration Manager console, go to the Assets and Compliance workspace. Expand Compliance Settings and select the Microsoft Edge Browser Profiles node. In the ribbon, select Create Microsoft Edge profile.
Specify a Name for the policy, optionally enter a Description, and select Next.
On the General Settings page, change the value to Configured for the settings to include in this policy. To continue the wizard, make sure to configure the setting to Set Edge Browser as default.
Configure settings on the SmartScreen Settings page.
On the Supported Platforms page, select the OS versions and architectures to which this policy applies.
Complete the wizard.
Deploy the policy
Select your policy, and in the ribbon select Deploy.
Browse to select the user or device collection to which to deploy the policy.
Select additional options as necessary:
Generate alerts when the policy isn't compliant.
Set the schedule by which the client evaluates the device's compliance with this policy.
Select OK to create the deployment.
Next steps
Like any compliance settings policy, the client remediates the settings on the schedule you specify. Monitor and report on device compliance in the Configuration Manager console.
Feedback
Trimiteți și vizualizați feedback pentru | https://docs.microsoft.com/ro-ro/mem/configmgr/compliance/deploy-use/browser-profiles | 2022-06-25T07:41:37 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.microsoft.com |
You can view high-level information about your virtual machines by using the Virtual Machines dashboard in the OKD web console.
Access virtual machines from the OKD web console by navigating to the Workloads → Virtualization page. The Workloads → Virtualization page contains two tabs: * Virtual Machines * Virtual Machine Templates
The following cards describe each virtual machine:
Details provides identifying information about the virtual machine, including:
Name
Namespace
Date of creation
Node name
IP address
Inventory lists the virtual machine’s resources, including:
Network interface controllers (NICs)
Disks
Status includes:
The current status of the virtual machine
A note indicating whether or not the QEMU guest agent is installed on the virtual machine
Utilization includes charts that display usage data for:
CPU
Memory
Filesystem
Network transfer
Events lists messages about virtual machine activity over the past hour. To view additional events, click View all. | https://docs.okd.io/4.10/virt/logging_events_monitoring/virt-viewing-information-about-vm-workloads.html | 2022-06-25T08:00:57 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.okd.io |
Blazor DateTime Picker Overview
The Blazor DateTime Picker component allows the user to choose both a date and a time from a visual list in a dropdown, or to type it into a date input that can accept only DateTime values. You can control the date and time format of the input, and respond to events.
The DateTime Picker component is part of Telerik UI for Blazor, a
professional grade UI library with 95+ native components for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
The DateTime Picker component is part of Telerik UI for Blazor, a professional grade UI library with 95+ native components for building modern and feature-rich applications. To try it out sign up for a free 30-day trial.
Creating Blazor DateTimePicker
- Add the
TelerikDateTimePickertag to your razor page.
- Bind a
DateTimeobject to the component
- Optionally, provide custom
Format,
Minand
Maxvalues
Basic datetime picker with custom format, min and max
Selected time: @selectedTime <br /> <TelerikDateTimePicker Min="@Min" Max="@Max" @</TelerikDateTimePicker> @code { private DateTime? selectedTime = DateTime.Now; public DateTime Min = new DateTime(1990, 1, 1, 8, 15, 0); public DateTime Max = new DateTime(2025, 1, 1, 19, 30, 45); }
Increment Steps
The DateTime Picker enables the end users to change the selected value by clicking the rendered arrows. You can set the increment and decrement steps through the nested
DateTimePickerSteps tag and its parameters. Read more about the Blazor DateTime Picker increment steps...
Events
The Blazor DateTime Picker generates events that you can handle and further customize its behavior. Read more about the Blazor DateTime Picker events....
Validation
You can ensure that the component value is acceptable by using the built-in validation. Read more about input validation....
Action Buttons
When using the dropdown to edit dates, you must click the "Set" button to commit the date. It is located in the Time portion of the dropdown (you will be navigated to it automatically upon selecting a date). Clicking "Cancel", or outside of the dropdown without clicking "Set", will revert the time to the original value. You can also commit a date by clicking the "NOW" button which will choose the current time.
Format
The time format specifiers in the
Format control the tumblers available in the dropdown. For example, the
HH specifier will result in a hour selector in a 24 hour format. If you also add the
tt specifier, you will also get the AM/PM tumbler, but the 24 hour format will still be used. This means that you can also add several tumblers for the same time portion if the format string repeats them.
Parameters
Styling and Appearance
The following parameters enable you to customize the appearance of the Blazor DateTimePicker:
You can find more options for customizing the DateTimePicker styling in the Appearance article.
Format Placeholder
The
FormatPlaceholder parameter allows you to set custom strings as placeholders for each DateTime segment and is available for the following Telerik UI for Blazor components:
- DateInput
- DatePicker
- DateTimePicker
- DateRangePicker
- TimePicker
To set up the
FormatPlaceholder, use the
<*Component*FormatPlaceholder> nested tag. It allows you to set format placeholders by using the following parameters:
Day
Month
Year
Hour
Minute
Second
Weekday
By default, the value for all parameters is
null, which applies the full format specifier.
Component Reference
@using Telerik.Blazor.Components <TelerikDateTimePicker @</TelerikDateTimePicker> @code { private DateTime? selectedTime = DateTime.Now; // the datetime picker is a generic component and its type comes from the value field type Telerik.Blazor.Components.TelerikDateTimePicker<DateTime?> theDateTimePickerRef { get; set; } } | https://docs.telerik.com/blazor-ui/components/datetimepicker/overview | 2022-06-25T08:05:38 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.telerik.com |
Tracker breakout
You can download the files associated with this app note as a zip file.
Expanding the Tracker One
Using the M8 connector_1<<
The color code and pin assignments for this cable are:
This is the view looking into the female M8 8-pin connector at the end of the M8 to flying leads cable.
With the Tracker One Carrier.
Regulator
.. | https://docs.particle.io/hardware/tracker/projects/tracker-breakout/ | 2022-06-25T07:07:00 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['/assets/images/app-notes/AN015/both.jpg', 'Both Boards'],
dtype=object)
array(['/assets/images/app-notes/AN015/m8-cable.jpg', 'M8 cable'],
dtype=object)
array(['/assets/images/app-notes/AN015/M8-connector-wire-end.png',
'M8 Wire End'], dtype=object)
array(['/assets/images/app-notes/AN015/carrier-b8b-ph.png',
'Carrier Board'], dtype=object)
array(['/assets/images/app-notes/AN015/m8breakout4.png', 'B8 Breakout'],
dtype=object)
array(['/assets/images/app-notes/AN015/m8breakout-screw.jpg',
'PHR-8 to PHR-8'], dtype=object)
array(['/assets/images/app-notes/AN015/phr-8.jpg', 'PHR-8 to PHR-8'],
dtype=object)
array(['/assets/images/app-notes/AN015/m8-eval-adapter.jpg',
'M8 Eval adapter'], dtype=object)
array(['/assets/images/app-notes/AN015/schematic.png', 'Schematic'],
dtype=object)
array(['/assets/images/app-notes/AN015/board-layout.png', 'Board Layout'],
dtype=object)
array(['/assets/images/app-notes/AN015/regulator.png', 'Regulator'],
dtype=object) ] | docs.particle.io |
mars.tensor.minimum#
- mars.tensor.minimum(x1, x2, out=None, where=None, **kwargs)[source]#
Element-wise minimum of tensor elements.
Compare two tensors and returns a new tensor minimum is equivalent to
mt.where(x1 <= x2, x1, x2)when neither x1 nor x2 are NaNs, but it is faster and does proper broadcasting.
Examples
>>> | https://docs.pymars.org/en/latest/user_guide/tensor/generated/mars.tensor.minimum.html | 2022-06-25T08:26:04 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.pymars.org |
Layout
The layout of the Diagram is its automatic organization based on the way its shapes are connected.
The Diagram layout is also called an "incidence structure".
Getting Started
The
Layout() method is the gateway to a variety of layout algorithms.
The following example demonstrates how the
Layout method generates a Diagram with a tree-like layout.
@(Html.Kendo().Diagram() .Name("diagram") .DataSource(dataSource => dataSource .Read(read => read.Action("_DiagramTree", "Diagram")).Model(m => m.Children("Items")) ) .Layout(l => l .Type(DiagramLayoutType.Tree) // Set the Layout type. .Subtype(DiagramLayoutSubtype.Down) .HorizontalSeparation(30) .VerticalSeparation(20) ) .ShapeDefaults(sd => sd .Width(40) .Height(40) ) )
Layout Types
The Diagram supports the following predefined layout types:
DiagramLayoutType.Tree—Organizes a Diagram in a hierarchical way and is typically used in organizational representations. This layout type includes the radial tree layout, mind-mapping, and the classic tree diagrams.
DiagramLayoutType.Force—Represents a force-directed layout algorithm (spring-embedder algorithm). Based on a physical simulation of forces which act on the nodes whereby the links define whether two nodes act upon each other. Effectively, each link is like a spring embedded in the Diagram. The simulation attempts to find a minimum energy state in such a way that the springs are in their base-state and, in this way, do not pull or push any (linked) node. This force-directed layout is non-deterministic—each layout pass will result in an unpredictable and, therefore, not reproducible layout. The optimal length is more an indication in the algorithm than a guarantee that all nodes will be at this distance. The result of the layout is a combination of the incidence structure of the Diagram, the initial positions of the nodes (topology), and the number of iterations.
DiagramLayoutType.Layered—Organizes the Diagram with an emphasis on flow and minimizes the crossing between layers of shapes. This layout works well when few components are present and a top-down flow is present. The concept of "flow" in this context refers to a clear direction of the connections with a minimum of cycles (connections flowing back upstream). The shrinks to a standard tree layout and, in this way, can be considered as an extension to the classic tree layout. | https://docs.telerik.com/aspnet-core/html-helpers/diagrams-and-maps/diagram/layout | 2022-06-25T08:43:47 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.telerik.com |
Managing Extensions - Legacy Guide¶
Installing an Extension using the Extension Manager¶
In the backend:
Go to “ADMIN TOOLS” > “Extensions”
In the Docheader, select “Get Extensions”’.
Check whether any referrals have been made to the extension in any setup, config or other TypoScript files., especially not under time pressure.
Uninstall / Deactivate Extension via TYPO3 Backend¶
Select “Deactivate” in Extension Manager.
Additional Information¶
The following is independent of whether you install with Composer or without.
Find out the Extension Key for an Extension¶
Again, go to the Extension Repository, and search for the extension.. | https://docs.typo3.org/m/typo3/tutorial-getting-started/11.5/en-us/Extensions/LegacyManagement.html | 2022-06-25T08:08:45 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['../_images/UninstallExtension.png',
'../_images/UninstallExtension.png'], dtype=object)] | docs.typo3.org |
Vaisala Humidity Calibrator HMK15 has been developed for the calibration and checking of humidity probes and transmitters. The functioning of the calibrator is based on the fact that certain salt solutions generate a specific relative humidity in the air above them.
- 1
- Salt chamber with transit cover on
- 2
- Thermometer
- 3
- Chamber cover with rubber plugs
- 4
- Base plate
- 5
- Adapter fitting
- 6
- Ready-dosed salt package with calibration certificate (accessory)
- 7
- Calibration certificate for thermometer
- 8
- Ion-exchanged water (accessory)
- 9
- Measurement cup
- 10
- Measurement spoon
The four holes in the chamber cover are designed for Vaisala probes and transmitters with 12, 13.5 (2 holes), and 18.5 mm (0.47, 0.53, and 0.73 in) diameter.
Optional custom cover sets are available for Vaisala DMT132 dew point transmitter and Vaisala HMP60 and HMP110 humidity and temperature probes 1. For instructions on using HMK15 with DMT132 and HMP60/HMP110, see the respective user guides.
- Lithium chloride LiCl (11 %RH)
- Magnesium chloride MgCl2 (33 %RH)
- Sodium chloride NaCl (75 %RH)
- Potassium chloride KCl (85 %RH)
- Potassium sulphate K2SO4 (97 %RH)
In calibration, the sensor head is inserted into a salt chamber containing a saturated salt solution. The reading given by the probe or transmitter is then adjusted to the humidity value that the specific salt solution generates at that particular temperature.
To ensure the sensor accuracy over the entire humidity range (0 … 100 %RH), calibration is usually performed at least at two different humidities.
HMK15 is suitable for both laboratory and field use. The chambers can be tightly closed for transportation with custom-designed transit covers. The optional transit bag (item code HM27032) allows the calibrator to be transported in vertical position or to be housed during calibration.
Accessories include additional salt chambers, ion exchanged water, transit bag, and ready-dosed salt packages (LiCl 11 %RH, MgCl2 33 %RH, NaCl 75 %RH, KCl (85 %RH), and K2SO4 97 %RH). | https://docs.vaisala.com/r/M210185EN-D/en-US/GUID-3FDA29E4-CC30-4827-9D43-443B0839B996 | 2022-06-25T07:58:05 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.vaisala.com |
The Veracode Integration for Jira Cloud enables you to do one-time imports, selective imports, and automated imports of security findings from Veracode scans.
The Veracode Integration for Jira Cloud automatically sets the Priority field of an imported flaw if that field is available and has default values. The integration uses this formula to set the priority based on the severity of the flaw Cloud,. | https://docs.veracode.com/r/c_jira_cloud_import_findings | 2022-06-25T07:43:21 | CC-MAIN-2022-27 | 1656103034877.9 | [] | docs.veracode.com |
17. Economic Cost/Loss Value Plots
17.1. Description
The Economic Cost Loss Value statistic is sometimes also called the Relative value score (Richardson, 2000; Wilks, 2001). This plot produces the relative value curve for deterministic forecasts based on counts in a 2x2 contingency table along with the expected cost-to-loss ratio.
This information can help the user decide, for a cost/loss ratio C/L for taking action based on a forecast, what the relative improvement is in economic value between climatological and perfect information. The relative value is a skill score based on expected cost, with (sample) climatology as the reference forecast. Because the cost/loss ratio is different for different users of forecasts, the value is plotted as a function of cost to loss.
The ECLV score can range from -\(\infty\) to 1.
Like ROC diagrams, it gives information that can be used in decision making.
17.2. Line Type
ECLV requires the ECLV line type generated by either Point-Stat or Grid-Stat.
17.3. How-To
Selection of options to produce the ECLEclv’ tab..
For a ECLV plot, the forecast variable (“FCST_VAR”) must be selected. This is found in the “Specialized Plot Fixed Values” section. In the example below, the forecast variable is 6-hour accumulated precipitation “APCP_06”.
It usually does not make sense to mix statistics for different groups. The desired group to calculate statistics over can be specified in the “Specialized Plot Fixed Values” section. A single domain (category: “VX_MASK”, value: “FULL”), a single accumulation time (category: “FCST_LEV”, value: “A6”), and an observation type (category: “OBTYPE”, value: “METAR_SYNOP”) are chosen. If multiple domains or thresholds were chosen, the statistics would be a summary of all of those cases together, which may not always be desired.
Select the type of statistics summary by selecting either the .
17.4. Example
The figure below shows an ECLV plot. In this example, three different forecasting systems are used to predict precipitation at two different thresholds. The economic value peaks at about 0 for all forecasts. Values of the ECLV are negative for the majority of the cost to loss ratio. Low values of C/L indicate either a very small cost to protect, very high losses, or both. In these cases, it probably makes sense to protect regardless of the forecast. At the other end, the cost to protect nears the amount of the potential loss. In that case, it probably makes sense to do nothing, regardless of the forecast, so the economic value of the forecast is negative. Between those extremes, each user can determine their own C/L ratio, risk tolerance, etc. to determine the best forecasting system for their needs. In the example below, many of the forecasts are quite similar, so a user may select from a grouping based on other criteria, such as forecast latency or computational requirements.
Figure 17.1 Example ECLV plot for three models using two different thresholds.
Here is the associated xml for this example. It can be copied into an empty file and saved to the desktop then uploaded into the system by clicking on the “Load XML” button in the upper-right corner of the GUI. This XML can be downloaded from this link: eclv_xml.xml.
<?xml version="1.0" encoding="UTF-8" standalone="no"?> <plot_spec> <connection> <host>mohawk</host> <database>mv_skymet<>eclv.R_tmpl</template> <series1> <field name="model"> <val>exp01</val> <val>exp02</val> <val>exp03</val> </field> <field name="fcst_thresh"> <val>>0.1</val> <val>>2.5</val> </field> </series1> <plot_fix> <field equalize="false" name="fcst_var"> <set name="fcst_var_0"> <val>APCP_06</val> </set> </field> <field equalize="false" name="vx_mask"> <set name="vx_mask_1"> <val>FULL</val> </set> </field> <field equalize="false" name="fcst_lev"> <set name="fcst_lev_2"> <val>A6</val> </set> </field> <field equalize="false" name="obtype"> <set name="obtype_3"> <val>METAR_SYNOP</val> </set> </field> </plot_fix> <plot_stat>median</plot_stat> <tmpl> <data_file>plot_20201001_190659.data</data_file> <plot_file>plot_20201001_190659.png</plot_file> <r_file>plot_20201001_190659.R</r_file> <title>Economic Value for 6-hr APCP</title> <x_label>Cost/Loss Ratio</x_label> <y1_label>Economic Value<","none","none","none","none")</plot_ci> <show_signif>c(FALSE,FALSE,FALSE,FALSE,FALSE,FALSE)</show_signif> <plot_disp>c(TRUE,TRUE,TRUE,TRUE,TRUE,TRUE)</plot_disp> <colors>c("#ff0000FF","#ff0000FF","#0000ffFF","#0000ffFF","#008000FF","#008000FF")</colors> <pch>c(20,20,20,20,20,20)</pch> <type>c("b","b","b","b","b","b")</type> <lty>c(1,2,1,2,1,2)</lty> <lwd>c(1,1,1,1,1,1)</lwd> <con_series>c(1,1,1,1,1,1)</con_series> <order_series>c(1,2,3,4,5,6)</order_series> <plot_cmd/> <legend>c("","","","","","")</legend> <y1_lim>c()</y1_lim> <x1_lim>c()</x1_lim> <y1_bufr>0.04</y1_bufr> <y2_lim>c()</y2_lim> </plot> </plot_spec> | https://metviewer.readthedocs.io/en/develop/Users_Guide/eclvplots.html | 2022-06-25T08:02:32 | CC-MAIN-2022-27 | 1656103034877.9 | [array(['../_images/eclv_plot.png', '../_images/eclv_plot.png'],
dtype=object) ] | metviewer.readthedocs.io |
Arrays
An array is a List containing several items of the same kind.
Declaring Arrays
It is declared using
[ and
].
//Array containing "Hello" and "World" val stringArray = ["Hello", "World"] as string[]; //Array containing 1-3 val intArray = [1,2,3] as int[];
If you now think “wait, haven’t I seen these brackets before?”, you have.
Remember
recipes.add(out,[[],[],[]]);?
This uses three arrays with each containing up to three entries to define a crafting table recipe.
Casting Arrays
You surely have noticed that all arrays here have the
as statement appended.
Why you ask? This is because ZenScript sometimes cannot predict what type the items in the array are. This can be the cause of strange conversion error logs!
Better be safe than sorry and cast the Arrays to their correct types!
Also, if you cast to non-primitive types (everything except strings, ints and the same) be sure to import the corresponding package and be sure to do so at the TOP of the script:
import crafttweaker.item.IItemStack; val IArray = [<minecraft:gold_ingot>, <minecraft:iron_ingot>] as IItemStack[];
Nested Arrays
You can place Arrays in Arrays.
val stringArray1 = ["Hello","World"] as string[]; val stringArray2 = ["I","am"] as string[]; val stringArray3 = ["a","beatuful"] as string[]; val stringArrayAll = [stringArray1,stringArray2,stringArray3,["Butterfly","!"]] as string[][];
Reffering to items in an Array
You can refer to an element within an array by using it’s place in the list. The first item in an Array is No. 0, the 2nd No.1 and so on.
If you want to refer to an item in a nested Array, you need two or more referers, as each removes one layer of the lists.
/* stringArray[0] is "Hello" stringArray[1] is "World" stringArray[2] is "I" stringArray[3] is "am" */ val stringArray = ["Hello","World","I","am"] as string[]; //prints "Hello" print(stringArray[0]); //Nested Arrays val stringArray1 = ["Hello","World"] as string[]; val stringArray2 = ["I","am"] as string[]; val stringArray3 = ["a","beautiful"] as string[]; val stringArrayAll = [stringArray1,stringArray2,stringArray3,["Butterfly","!"]] as string[][]; /* stringArrayAll[0] is ["Hello","World"] stringArrayAll[1] is ["I","am"] stringArrayAll[2] is ["a","beautiful"] stringArrayAll[3] is ["Butterfly","!"] stringArrayAll[0][0] is "Hello" stringArrayAll[0][1] is "World" etc. */ //prints "World" print(stringArrayAll[0][1]);
Loops
A loop is a function that repeats itself. You can use loops to apply an action to all elements in an Array
For Loop
The main use of the for-loop is iterating through an array. Iterating means doing an action to all elements of an array.
You can use the
break keyword to break the loop prematurely.
import crafttweaker.item.IItemStack; val IArray = [<minecraft:dirt>,<minecraft:planks>,<minecraft:diamond>] as IItemStack[]; val JArray = [<minecraft:grass>,<minecraft:log>,<minecraft:gold_ingot>] as IItemStack[]; val KArray = [<minecraft:wooden_axe>,<minecraft:golden_shovel>,<minecraft:emerald>] as IItemStack[]; //for [IntegerName, ] elementName in IArray {code} for item in IArray { //defines the variable "item" with each element of IArray (i.e. <minecraft:dirt>,<minecraft:planks>,<minecraft:diamond>) //Just use this variable now! recipes.remove(item); } for i, item in IArray { //defines the variable "i" with each element Number of IArray (i.e. 0,1,2,...) //defines the variable "item" with each element of IArray (i.e. <minecraft:dirt>,<minecraft:planks>,<minecraft:diamond>) //Just use these variables now! //Crafts Item of IArray using item of JArray and KArray (i.e. Dirt with grass and wooden axe, planks with wood and golden shovel, diamond with gold ingot and emerald) recipes.addShapeless(item,[JArray[i],KArray[i]]); } for i in 0 to 10 { //defines the variable "i" with each number from 0 to 9 (i.e. 0,1,2,...,8,9) print(i); } for i in 10 .. 20 { //defines the variable "i" with each number from 10 to 19 (i.e. 10,11,12,...,18,19) print(i); } for item in loadedMods["minecraft"].items { //defines the variable "item" with each item added by the mod with the modID "minecraft" and removes its crafting recipe recipes.remove(item); }
While Loop
The while loop executes the given code as long as the given condition evaluates to
true.
Alternatively, you can stop it using the
break keyword.
var i = 0; //Will print 0 - 9, because in the iteration after that, i < 10 is false since i is 10 then. while i < 10 { print(i); i += 1; } print("After loop: " + i); //Will print 10 - 6, because in the iteration after that i == 5 and it will break. while (i > 0) { if i == 5 break; print(i); i -= 1; } print("After loop 2: " + i); for k in 1 .. 10 { if (k == 5) break; print(k); }
Adding items to an Array
While it is not recommended to do so, it is possible to add some Objects to Arrays.
You can only add single Objects to an array, you cannot add two arrays.
You use the
+ operator for array Addition:
import crafttweaker.item.IItemStack; val iron = <minecraft:iron_ingot>; var array as IItemStack[] = [iron, iron, iron]; array += iron; for item in array { print(item.displayName); } | https://crafttweaker.readthedocs.io/en/latest/AdvancedFunctions/Arrays_and_Loops/ | 2018-12-10T04:23:41 | CC-MAIN-2018-51 | 1544376823303.28 | [] | crafttweaker.readthedocs.io |
3D Editor / Basic Editing
Save Scene
To save a scene click on the save button.
Name
The name of the scene can be customized in here.
If you ordered a 3d model through Archilogic then it is possible that the name of the scene is an internal issue number.
The name can be edited freely without any consequences.
Address
The address is necessary for the map. If the address field is empty, the map menu disappears.
The correct syntax for the address is: Street Streetnumber, Postcode City, Country
Map
If a valid address is provided a click on the little map icon opens the map in the context menu.
Folder
The drop down menu lets you choose a folder to save the model into.
Sharing
There are three different sharing modes: Public, Hidden and Private.
The only person that can edit the scene is the owner of the scene, regardless of the selected sharing mode.
If the sharing mode is “Public” or “Hidden”, however, other Archilogic users may open and save a personal copy of said scene and will be able to customize it.
Public - The scene can show up in the community gallery and can be opened by everyone.
Hidden - The scene does not show up in the community gallery but can be opened by everyone who has the correct link to it.
Private - The scene can only be opened if the person that wants to look at it is logged in with the same account that was used to save the scene.
Floor Area (m²)
The floor area shows how big the apartment is in square meters.
If the 3d model was ordered through Archilogic, then this number is calculated automatically by adding the areas of all the floors together. | https://docs.archilogic.com/en/3d-editor/basic-editing/save-scene.html | 2018-12-10T05:44:38 | CC-MAIN-2018-51 | 1544376823303.28 | [array(['/assets/images/Basic-Save-Scene.jpg', 'Save Archilogic Scene'],
dtype=object) ] | docs.archilogic.com |
Adds the specified object to the Collection Source's CollectionSourceBase.Collection.
Begins update of the Collection Source's collection criteria. The criteria will not be applied to the collection until the update is complete.
Releases all the resources allocated by the current PropertyCollectionSource.
Ends update of the Collection Source's collection criteria and applies it.
Returns the number of objects contained in the Collection Source's CollectionSourceBase.Collection.
For internal use.
Tries to determine whether the specified object satisfies the criteria contained in the CollectionSourceBase.Criteria dictionary.
Reloads the current Collection Source's CollectionSourceBase.Collection.
Removes the specified object from the Collection Source's CollectionSourceBase.Collection.
Recreates a Collection Source's CollectionSourceBase.Collection.
Sets the value that specifies whether it is possible to add objects to the Collection Source's CollectionSourceBase.Collection or not.
Sets the value that specifies whether it is possible to remove objects from the Collection Source's CollectionSourceBase.Collection or not.
Adds the specified criteria expression to the CollectionSourceBase.Criteria dictionary. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.PropertyCollectionSource._methods | 2018-12-10T04:58:20 | CC-MAIN-2018-51 | 1544376823303.28 | [] | docs.devexpress.com |