content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
- Product
- Customers
- Solutions
Datadog displays individual logs following this general side panel layout:
Context refers to the infrastructure and application context which the log has generated. Information is gathered from tags, whether automatically attached (host name, container name, log file name, serverless function name, etc.) or added through custom tags (team in charge, environment, application version, etc.). Extract corresponding information from your logs and remap your attributes with standard attribute remappers..
Click on the Metrics tab and access underlying infrastructure metrics in a 30 minutes timeframe around the log.
Interact with Host in the upper reserved attributes section, the related host dashboard, or network page. Interact with Container sections to jump to the container page scoped with the underlying parameters.
In case logs comes from a serverless source, the Host Section is replaced with a Serverless section that links jump to the corresponding serverless page.
Make sure you enable trace injection in logs and follow the Unified Service Tagging best practices to benefit from all the capabilities of Logs and APM correlation.
Click on the APM Tab and see a log in the context of its whole trace, with upstream and downstream services running. Deep dive in APM data and the trace in APM feature.
Interact with the Service section to refocus the search in the Log Explorer and see all other logs from the same trace.
Interact with the attributes names and values in the lower JSON section to:
Use the Share button to share the log opened in side panel to other contexts.
Ctrl+C/
Cmd+Ccopies the log JSON to your clipboard.
Additional helpful documentation, links, and articles: | https://docs.datadoghq.com/logs/explorer/side_panel/?lang_pref=en | 2021-06-12T22:53:29 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.datadoghq.com |
Organization and personnel management workflow experience enhancements
Business value
Workflow requests and approvals are a key aspect of organization and personnel management. Enhancements in this area will help HR professionals and managers to better understand the workflow submission and approval process and the state of new hires and positions requests, or to make changes to existing employees or positions.
Feature details
This feature will enable HR professionals and managers to:
- Have a clearer intuitive view of submitting workflow requests and taking action on them as a workflow reviewer.
- Have a unified list for all the action items assigned to an employee, including workflow reviews or tasks they need to complete.
See also
Configuration option to position Work items assigned to me list (docs) | https://docs.microsoft.com/th-th/dynamics365-release-plan/2020wave2/human-resources/dynamics365-human-resources/organization-personnel-management-workflow-experience-enhancements | 2021-06-13T00:39:44 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['media/hr-workflow-quick-access.png',
'Workflow approver and available actions for them with the new Workflow experience Workflow approver and available actions for them with the new Workflow experience'],
dtype=object)
array(['media/hr-workflow-work-items-assigned-to-me.png',
'Work items assigned to me with To-do list Work items assigned to me with To-do list'],
dtype=object) ] | docs.microsoft.com |
WAF Rulesets
You can enable rules by threat category for requests and responses — these are called "rulesets." There is an option to disable individual rules within a ruleset if they are causing false positives. See Disable Individual Rules.
Rules can run on a variety of sources, for example, they can run on URIs, headers, arguments, session IDs, cookies, and request and response bodies, to name a few.
For optimal performance, enable only the rules that apply to your web application’s technology.
Request Rulesets
Request rulesets are divided into ten threat categories, which are called rulesets. You can apply one of three actions for each ruleset:
Disable ruleset - (Default) Ruleset detection is turned off.
Detect and allow violations - The violation is detected and you will get information in your logs at the INFO level, per incident.
They will also be included at the ERROR (default log level for Edge policies) level in a summary report.
See the Runtime Fabric documentation.
Detect and reject violations - The request is rejected and returns a response status of
HTTP/1.1 400 BAD REQUEST - web application firewall.
For request rulesets, DoS is notified that a rule was triggered. If DoS has been configured for WAF Errors, DoS will update its WAF-related counters and take action, if necessary. If DoS isn’t configured for WAF Errors, it ignores the notification it receives from WAF.
For both request and response rulesets, information about the violation is also sent to the log at INFO level per incident.
The ten threat categories include:
Scanner detection
Protocol enforcement
Protocol attack
Local file inclusion
Remote file inclusion
Remote code execution
PHP injection
Cross-site scripting
SQL injection
Session fixation
Response Rule Sets
Response rulesets are similar to request rulesets. Response rulesets check responses from backend servers to look for information that should not be present in responses.
The response rulesets are again divided into threat categories. The same actions apply as with the request ruleset — disable ruleset, disable and reject violations, and disable and allow violations. These have the same meaning as they do in the request ruleset, except DoS is not notified for response rulesets.
The threat categories for the response ruleset include:
Data leakages
SQL data leakages
Java data leakages
PHP data leakages
IIS data leakages
View Rules in RAML
You can understand the regular expressions that make up each rule by viewing the rules in the Anypoint Security RAML (
security-fabric-policies-api-<version>.raml). Not all OWASP CRS rules for each category are enabled for the WAF policy. A very small number of rules are not included for various reasons. View the RAML for the exact contents and rule IDs that are included.
Go to Anypoint Exchange.
Extract the ZIP files.
Navigate to
<Download_location>/security-fabric-policies-api-<version>-raml/dataTypes/policies/WafRules/Rulesets.json.
Runtime Fabric Policy Summary
The Runtime Fabric policy summary includes the WAF summary. A WAF summary log message is generated every minute. See View and Configure Logging in Runtime Fabric. The contents in the raw log message include WAF and DoS summary statistics.
For example:
If you format the raw data in JSON, from this example, you can see that rule 920210 has fired 63,193 times and that rule 920310 has fired one time, which indicates that rule 920210 is causing false positives.
... "requestProtocolEnforcement": { "920100": 0, "920120": 0, "920121": 0, "920130": 0, "920140": 0, "920160": 0, "920170": 0, "920180": 0, "920190": 0, "920200": 0, "920201": 0, "920202": 0, "920210": 63193, "920220": 0, "920230": 0, "920240": 0, "920250": 0, "920260": 0, "920270": 0, "920271": 0, "920272": 0, "920273": 0, "920274": 0, "920280": 0, "920290": 0, "920300": 0, "920310": 1, "920311": 0, "920320": 0, "920330": 0, "920340": 0, "920350": 0, "920360": 0, "920370": 0, "920380": 0, "920390": 0, "920400": 0, "920410": 0, "920420": 0, "920430": 0, "920440": 0, "920450": 0, "920460": 0 }, ... | https://docs.mulesoft.com/anypoint-security/waf-rulesets | 2021-06-12T23:05:03 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['_images/waf-summary-log-raw.png', 'waf summary log raw'],
dtype=object)
array(['_images/waf-summary-log-raw.png', 'waf summary log raw'],
dtype=object) ] | docs.mulesoft.com |
Save Searches
As an Exchange administrator, you can save searches that your users can click to access assets within a business group.
While users can view and click the searches, only an Exchange administrator can save, rename, or delete the saved searches.
Change to the business group where you want to add saved searches.
Specify a search in the search field.
Click Save This Search.
In the Save Search window, specify the name for the search, and save the search name. The saved searches appear in the left navigation bar of the Assets page under the Organization Searches heading.
If needed, click the settings button for the search to the right of the search name in the left navigation area.
To rename the search, select the text in the navigation bar field and provide a new name followed by pressing the Enter or Return key on your keyboard.
To delete the search, click Delete from the search name Setting menu. | https://docs.mulesoft.com/exchange/to-save-searches | 2021-06-12T23:35:27 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.mulesoft.com |
When.
You can review the following icons in the Manage Copies view to determine whether the backups and clones are available on the primary or secondary storage (Mirror copies or Vault copies).
You cannot rename the backups that are on the primary storage system. | https://docs.netapp.com/ocsc-40/topic/com.netapp.doc.ocsc-dpg-cpi/GUID-943ABEBB-6A30-46ED-BDAA-41765D27B7D9.html | 2021-06-13T00:00:58 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.netapp.com |
Configure maximum index size
Note: This topic is not relevant to SmartStore indexes. See Configure data retention for SmartStore indexes. settings:
#.. directory (homePath)! | https://docs.splunk.com/Documentation/Splunk/8.0.6/Indexer/Configureindexstoragesize | 2021-06-12T23:57:16 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Doing.
See Doing More With Java Rich Internet Applications for further information on advanced topics that are common to applets and Java Web Start applications (such as setting arguments and properties, using Java Network Launch Protocol API). | https://www.docs4dev.com/docs/en/java/java8/tutorials/deployment-applet-doingMoreWithApplets.html | 2021-06-13T00:15:20 | CC-MAIN-2021-25 | 1623487586465.3 | [] | www.docs4dev.com |
Accepted data format: the
Actor classes for the data type expect the data in specific formats, this column illustrates what they expect.
File formats: example file formats that data can be stored in. When bold it means that the class can load the data from files directly. Check out Using your data to see how to load data and process it for brainrender. | https://docs.brainrender.info/usage/using-your-data/supported-data-types | 2021-06-12T22:59:59 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.brainrender.info |
System Utilities
GNU time 1.8
The time command runs another program, then displays information about the resources used by that program. This version of time has changed the default output format. If the measured command fails, time will report about it on the first line of standard error output. This report can be suppressed by using the -q option. The POSIX mode (-p option) is unchanged. Users are advised to review their scripts for make sure they can parse the new time output. | https://docs.fedoraproject.org/ar/fedora/f28/release-notes/sysadmin/System_Utilities/ | 2021-06-13T00:49:28 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.fedoraproject.org |
<jms:connector
Migrating to the JMS Connector
The JMS transport was completely reshaped and evolved away from the Mule 3 transport model into an operation based connector with a simplified UX and fully DataSense enabled Message information. This means not only that using JMS now provides the same experience regarding connection and configuration that any other connector, but also that producing and consuming messages is simpler based on the structured Message information.
Also, the new version of JMS Connector provides support for the much desired JMS 2.0 spec, with all the new capabilities like shared subscriptions.
Configuring The Connector
Moving from a 3.x transport configuration to a JMS Connector configuration in Mule 4 implies basically using the same exact parameters, but declaring them in a more cohesive way. stablished (present at connection level):
This same configuration is achieved in Mule 4 setting the same parameters in the context where they actually apply, like 'noLocal' or 'durable' affecting only the 'topic-consumer':
<jms:config <jms:active-mq-connection <jms:factory-configuration </jms:active-mq-connection> <jms:consumer-config <jms:consumer-type> <jms:topic-consumer </jms:consumer-type> </jms:consumer-config> <jms:producer-config </jms:config>
Connecting To A Broker
In the JMS Connector for Mule 4, the distinction between a generic JMS connection and the simplified ActiveMQ connection still exists, but with a much more more focused set of parameters that are associated only to the connection. We’ll see first how to configure this simplified connection and then move on to the generic approach.
Connecting To ActiveMQ
Using the transport, the only way to configure ActiveMQ specific parameters was defining the ConfiguratioFactory as an spring bean and then passing on each property: .Mule 3 Example: Connecting To ActiveMQ
<spring:bean <property name="brokerURL" value="tcp://activemqserver:61616"/> <property name="redeliveryPolicy"> <spring:bean <property name="initialRedeliveryDelay" value="20000"/> <property name="redeliveryDelay" value="20000"/> <property name="maximumRedeliveries" value="10"/> </spring:bean> </property> </spring:bean> <jms:activemq-connector
With the new JMS Connector, this properties can be configured directly in the activemq connection:
<jms:config <jms:active-mq-connection> <jms:factory-configuration </jms:active-mq-connection> </jms:config>
If you need to migrate a transport configuration that is using a Connection Factory not yet supported out of the box in the JMS Connector, you can still do use the 'connectionFactory' attribute in the 'activemq-connection' to pass a reference to any Connection Factory bean.
Using A Different Broker
Both Mule 3 JMS transport and Mule 4 JMS Connector allow you to define a generic connection to any Connection Factory that you need.
We have two ways of doing this, the first one is creating a Connection Factory using spring beans and then referencing it as the Connection Factory:
<spring:bean <jms:connector
Mule 4’s version is almost identical:
<spring:bean <jms:config <jms:generic-connection </jms:config>
Another way of creating a generic connection is using a Connection Factory that is discovered using JNDI. In this case, the functionality remains the same, but syntax changes from transport to connector:
<beans> <util:properties <prop key="queue.jndi-queue-in">in</prop> <prop key="topic.jndi-topic-in">in</prop> </util:properties> </beans> <jms:connector
In Mule 4’s version you can do this configuring the JNDI inline:
>
Three main differences arise from this example:
Properties are now declared inline, no need for spring bean utils to be used.
Enforcing the lookup of destinations using JNDI is now configured as a single parameter named 'lookupDestination', which unifies the previous two parameters 'jndiDestinations' and 'forceJndiDestinations'.
Parameters are now present in the context for which they are relevant, like the 'jndiProviderUrl' being part of the 'name-resolver'.
Sending Messages
JMS Transport relied in the payload to contain the body of a JMS Message, and used Mule’s outbound properties to customize the JMS Properties and Headers. With the new Mule 4 approach, the JMS 'publish' operation relies only on its input parameters to completely build the JMS Message to be published.
For example, if we wanted to send a high priority JMS outbound message property with
priority as key to set the JMSPriority.
<4>) Set an outbound message property with
JMSXGroupID as key to set the JMSXGroupID.
<flow name="JmsTransportOutbound"> <http:listener <dw:transform-message> (1) <dw:set-payload><![CDATA[%dw 1.0 %output application/json --- { order_id: payload.id, supplier: payload.warehouse }]]></dw:set-payload> </dw:transform-message> <object-to-string-transformer/> (2) <jms:outbound-endpoint <message-properties-transformer <add-message-property (3) <add-message-property (4) </message-properties-transformer> </jms:outbound-endpoint> </flow>
The same results can be achieved in Mule 4 using the JMS Connector with the following configuration:
<flow name="JMSConnectorPublish"> <http:listener (2) <jms:publish (3) <jms:message> (1) <jms:body>#[output application/json --- { order_id: payload.id, supplier: payload.warehouse }]</jms:body> <jms:jmsx-properties (4) </jms:message> </jms:publish> </flow>
Differences to be noted:
1) There’s no need of the
transform component, since the
body of the Message is created inline, thus the payload remains unmodified.
2) The
object-to-string transformer was also removed, since the Connector can handle automatically the transformation output.
3) Priority is set as a parmeter of the
publish operation and doesn’t rely on the user knowing the exact key.
4) Group is set as part of the Message JMSX properties and doesn’t rely on the user knowing the exact header name.
As a summary, when publishing a Message in 3.x with the JMS transport, we relied on the MuleMessage payload, and outbound properties to configure the creation of the JMS Message, which meant a deeper knowledge of how the transport worked. In 4.x, the JMS Connector exposes every configurable element as a parameter in the scope were it belongs, thus exposing all the JMS functionality in a clearer way.
Consuming Messages
Listening For New Messages
The JMS transport
inbound-endpoint allows you to wait for new Messages on a given topic or queue. The output of this listener will contain the body of the message in the payload, and all the JMS headers and properties as
inboundProperties.
<flow name="JmsTransportInbound"> <jms:inbound-endpoint <jms:selector (1) </jms:inbound-endpoint> <dw:transform-message> (2) <dw:set-payload><![CDATA[%dw 1.0 %output application/json --- { items: payload, costumer: message.inboundProperties.'costumer_id', type: message.inboundProperties.'JMSType' }]]></dw:set-payload> </dw:transform-message> <object-to-string-transformer/> (3) <jms:outbound-endpoint (4) </flow>
In this case, we are listening for high priority Messages and then adapting them to the new format required by version 2 of priority orders:
1) Filter incomming messages by priority. 2) Transform the MuleMessage using the metadata contained in the inboundProperties so the payload matches the new JSON format we need for the new API. 3) Convert the transformed payload to a JSON String. 4) Publish the payload to the proxied queue.
Implementing the same in Mule 4 looks like this:
<flow name="JMSConnectorPublish"> <jms:listener (1) <jms:publish (2) <jms:message> <jms:body>#[output application/json --- { items: payload, costumer: attributes.properties.userProperties.costumer_id, (3) type: attributes.headers.type }]</jms:body> </jms:message> </jms:publish> </flow>
Now, the flow has fewer components and is not required to modify the Message payload to publish with a different format:
Consuming Messages
Consuming Messages mid-flow from a given destination was not supported by Mule’s 3 JMS transport, and the way to go was also adding the 'Mule Requester Module' to your application, which would then handle the mid-flow message consume.
So, for example, if you wanted to expose your JMS Queue behind a new REST API, your application would be similar to this:
<mulerequester:config <jms:activemq-connector <flow name="ordersFromJMS"> <http:inbound-endpoint <mulerequester:request <logger level="INFO" message="CorrelationId: #[message.inboundProperties.'JMSCorrelationId']"/> </flow>
Some things to notice here are:
All metadata regarding JMS Message is completely lost, so logging the CorrelationId relies on you knowing the syntax for obtaining the Header.
Dynamic filterying by 'selector' has to be done in the 'resource' url of the requester, so multiple arguments end up with an error prone configuration.
We need both the JMS and Mule Requester configurations.
Mule 4 comes out of the box with the capability of consuming messages mid-flow by using thr 'consume' operation. This operation is very similar to the Listener we saw before, with the difference that it can be used anywhere in the flow:
<flow name="ordersFromJMS"> <http:listener <jms:consume </flow>
Now we only needed a the JMS Connector, configured the 'consume' operation with the 'selector' parameter using the metadata from the listener, and also were able to log the correlationId with metadata support in the Message attributes.
Handling Topic Subscriptions
Topics used as inbound endpoints in 3.x allowed the user to configure if the subscription to the Topic had to be done as a
durable subscription or not. There were different ways of doing so, and it had the issue of exposing the
durable configuration for
queues too, which made no sense.
A Topic subscription in 3.x would look like this:
<jms:inbound-endpoint
For Mule 4, the subscription mechanism was reviewed, leaving the option of subscriptions scoped down to Topics only, and adding more functionality thanks to the support of JMS 2.0.
Same example as before, but in 4.x will be:
<jms:listener <jms:consumer-type> <jms:topic-consumer </jms:consumer-type> </jms:listener>
But in this case, the
topic-consumer configuration allows us to also set a
shared subscription (only if using a JMS 2.0 Connection) that allows the processing of messages from at topic subscription by multiple threads, connections or JVMs:
<jms:listener <jms:consumer-type> <jms:topic-consumer </jms:consumer-type> </jms:listener>
Responding To Incomming Messages
When the listener for new JMS Messages receives a Message with the 'JMSReplyTo' header configured, then it is expected that a response is emitted to the reply destination once the processing of the Message is completed.
For Mule 3, this means configuring the transport with
exchange-pattern="request-response"`, where the result of the flow will automatically become the payload of the response. Headers of the response Message were configured using the
outbound-properties, while the body of the Message was taken from the
payload at the end of the Flow.
<flow name="jmsBridge"> <jms:inbound-endpoint <message-properties-transformer <add-message-property <add-message-property </message-properties-transformer> </jms:inbound-endpoint> <http:request <set-payload </flow>
Mule 4 instead allows you to configure all the parameters associated to the response, directly inline as a part of the
listener component, leaving behind the need of a transformation when reaching the end of the flow.
<flow name="jmsBridge"> <jms:listener <jms:response <jms:body>#['BRIDGED']</jms:body> </jms:response> </jms:listener> <http:request </flow>
Doing Request-Reply
JMS allows you to use the
JMSReplyTo header to perform a synchronous communication. This can be done either with a temporary destination that is created on the fly by the client, or using an already existing destination.
Request Reply With Temporary Destinations
In Mule 3, for the first case where the reply destination is a temporary queue that will be discarded once the message arrives, we have the "request-response" exchange-pattern in the outbound endpoint:
<flow name="jmsRequest/> <jms:outbound-endpoint <logger level="INFO" message="Status: #[payload]"> </flow>
Instead, in Mule 4 you have a brand new operation called
publish-consume which aims to solve this specific use case:
:publish-consume> <logger level="INFO" message="#['Status: ' ++ payload]"> </flow>
You may see that, again, the building of the Message is donde inline of the operation, in the
message element, and any transformation or configuration that affects the outgoing Message will be done as part of that element.
Request Reply With Explicit Destinations
Doing a request-reply with an explicit
reply-to destination was a little bit more tricky in 3.x, since a new component was required, the
requet-reply Scope:
<flow name="JMS-request-reply"> <jms:inbound-endpoint <dw:transform-message> <dw:set-payload><![CDATA[%dw 1.0 %output application/xml --- { data: payload, costumer: message.inboundProperties."http.query.params".costumer_id }]]></dw:set-payload> </dw:transform-message> <object-to-string-transformer/> <request-reply> (1) <jms:outbound-endpoint <jms:inbound-endpoint </request-reply> <logger level="INFO" message="#['Status: ' ++ payload]"> </flow>
This scope (1) allowed you to set an inbound and outbound transport to do the request-reply pattern. This way, it would inject the
JMSReplyTo header automatically in the outgoing Message and then started listening in the inbound endpoint
For the case of Mule’s 4 JMS Connector with the new
publish-consume operation, it requires you to do almost no changes to the flow. If you want an specific destination for the reply to be sent, just configure the
reply-to header in the Message builder directly, as you would in any other case of either a publish or a response:
:reply-to (1) </jms:publish-consume> <logger level="INFO" message="#['Status: ' ++ payload]"> </flow>
In this example we set the reply destination header (<1>) to a well-known Topic, to ilustrate that a known destination may be used by others to do things like event tracking or post-processing triggers.
Using Transactions
Transactions support is quite similar in its configuration when moving from 3.x to 4.x, with the expected change from it being configured in the
inbound-endpoint and
outbound-endpoint to the normalized Mule 4 approach for operations transactions:
<flow name="transactedJmsFlow"> <jms:inbound-endpoint <jms:transaction (1) </jms:inbound-endpoint> <set-variable (2) <dw:transform-message> (3) <dw:set-payload><![CDATA[%dw 1.0 %output application/xml --- payload ]]></dw:set-payload> </dw:transform-message> <object-to-string-transformer/> <jms:outbound-endpoint (4) <jms:transaction </jms:outbound-endpoint> <default-exception-strategy> <commit-transaction (5) <set-payload (6) <jms:outbound-endpoint (7) <jms:transaction </jms:outbound-endpoint> </default-exception-strategy> </flow>
Things to note are:
Same scenarion can be implemented in Mule 4 with the following approach:
<flow name="transactedJmsFlow"> <jms:listener (1) <jms:publish (2) <jms:message> <jms:body>#[output application/xml --- payload</jms:body> </jms:message> </jms:publish> <error-handler> <on-error-continue (3) <jms:publish (4) </on-error-continue> </error-handler> </flow> | https://docs.mulesoft.com/mule-runtime/4.3/migration-connectors-jms | 2021-06-12T23:10:03 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.mulesoft.com |
Struct google_orgpolicy2::
api::[−][src] OrganizationPolicyGetEffectivePolicyCall.
A builder for the policies.getEffectivePolicy method supported by a organization resource.
It is not used directly, but through a
OrganizationMethods instance.
Example
Instantiate a resource method builder
// You can configure optional parameters by calling the respective setters at will, and // execute the final call using `doit()`. // Values shown here are possibly random and not representative ! let result = hub.organizations().policies_get_effective_policy("name") .doit().await;
Implementations
impl<'a> OrganizationPolicyGetEffectivePolicyCall<'a>[src]
pub async fn doit([src]
self
) -> Result<(Response<Body>, GoogleCloudOrgpolicyV2Policy)>
self
) -> Result<(Response<Body>, GoogleCloudOrgpolicyV2Policy)>
Perform the operation you have build so far.
pub fn name([src]
self,
new_value: &str
) -> OrganizationPolicyGetEffectivePolicyCall<'a>
self,
new_value: &str
) -> OrganizationPolicyGetEffectivePolicyCall<'a>
Required. The effective policy to compute. See
Policy for naming rules.
Sets the name path property to the given value.
Even though the property as already been set when instantiating this call, we provide this method for API completeness.
pub fn delegate([src]
self,
new_value: &'a mut dyn Delegate
) -> OrganizationPolicyGetEffectivePolicyCall<'a>
self,
new_value: &'a mut dyn Delegate
) -> OrganizationPolicyGetEffectivePolicyCall<'a>
The delegate implementation is consulted whenever there is an intermediate result, or if something goes wrong while executing the actual API request.
It should be used to handle progress information, and to implement a certain level of resilience.
Sets the delegate property to the given value.
pub fn param<T>([src]
self,
name: T,
value: T
) -> OrganizationPolicyGetEffectivePolicyCall<'a> where
T: AsRef<str>,
self,
name: T,
value: T
) -> OrganizationPolicyGetEffectivePolicyCall<'a> where
T: AsRef<str>,
Set any additional parameter of the query string used in the request. It should be used to set parameters which are not yet available through their own setters.
Please note that this method must not be used to set any of the known parameters which have their own setter method. If done anyway, the request will fail.
Additional Parameters
- $.xgafv (query-string) - V1 error format.
- access_token (query-string) - OAuth access token.
- alt (query-string) - Data format for response.
- callback (query-string) - JSONP
- fields (query-string) - Selector specifying which fields to include in a partial response.
- key (query-string) - API key. Your API key identifies your project and provides you with API access, quota, and reports. Required unless you provide an OAuth 2.0 token.
- oauth_token (query-string) - OAuth 2.0 token for the current user.
- prettyPrint (query-boolean) - Returns response with indentations and line breaks.
- quotaUser (query-string) - Available to use for quota purposes for server-side applications. Can be any arbitrary string assigned to a user, but should not exceed 40 characters.
- uploadType (query-string) - Legacy upload protocol for media (e.g. “media”, “multipart”).
- upload_protocol (query-string) - Upload protocol for media (e.g. “raw”, “multipart”).
pub fn add_scope<T, S>([src]
self,
scope: T
) -> OrganizationPolicyGetEffectivePolicyCall<'a> where
T: Into<Option<S>>,
S: AsRef<str>,
self,
scope: T
) -> OrganizationPolicyGetEffectivePolicyCall<'a> where
T: Into<Option<S>>,
S: AsRef<str>,
Identifies the authorization scope for the method you are building.
Use this method to actively specify which scope should be used, instead the default
Scope variant
Scope::CloudPlatform.
The
scope will be added to a set of scopes. This is important as one can maintain access
tokens for more than one scope.
If
None is specified, then all scopes will be removed and no default scope will be used either.
In that case, you have to specify your API-key using the
key parameter (see the
param()
function for details).
Usually there is more than one suitable scope to authorize an operation, some of which may encompass more rights than others. For example, for listing resources, a read-only scope will be sufficient, a read-write scope will do as well.
Trait Implementations
impl<'a> CallBuilder for OrganizationPolicyGetEffectivePolicyCall<'a>[src]
Auto Trait Implementations
impl<'a> !RefUnwindSafe for OrganizationPolicyGetEffectivePolicyCall<'a>
impl<'a> Send for OrganizationPolicyGetEffectivePolicyCall<'a>
impl<'a> !Sync for OrganizationPolicyGetEffectivePolicyCall<'a>
impl<'a> Unpin for OrganizationPolicyGetEffectivePolicyCall<'a>
impl<'a> !UnwindSafe for OrganizationPolicyGetEffectivePolicyCall<'a>>, | https://docs.rs/google-orgpolicy2/2.0.4+20210330/google_orgpolicy2/api/struct.OrganizationPolicyGetEffectivePolicyCall.html | 2021-06-12T23:48:29 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.rs |
updated 2021-05-21
PCS Vaults strategy created ✅
Venus Vaults strategy created ✅
Verified and published all sources codes for all contracts ✅
ACS used as governance token ✅
Core DAO created ✅
Coinmarketcap listing ✅
Coingecko listing ✅
Defistation listing ✅
DappRadar listing ✅
Mathwallet dapp store listing ✅
Coinbase prices listing ✅
Whitelist ACS and ACSI on OpenOcean ✅
ACS max supply cap set by governance vote ✅
ACSI max supply cap set by governance vote ✅
1st ACS emission cut ✅
ACSI added as governance token ✅
Audit #1 - by Defiyield ✅
Audit #2 - by Hacken (V2 farm contracts audited) ✅
Hotbit listing ✅
Dfox.cc portfolio tracker listing ✅
Yieldwatch.net portfolio tracker listing ✅
Adjusted Stableswap fees to further lead the way on BSC ✅
Continuous addition of new high quality Vaults&Farms PCS/Venus ✅
Migration of all ACS v1 Farms to v2 Farms ✅
Add SwipeSwap Vaults&Farms ✅
Audit #3 - by Certik (Cake Vault Strategy audited) ✅
Audit #4 - by Hacken (Complete audit) ✅
Whitelist ACS and ACSI on 1inch ✅
Whitelist ACS and ACSI on Swipe Swap ✅
Whitelist ACS and ACSI on MDEX ✅
Whitelist ACS on Unifi ✅
Addition of ACS-SXP native farm on SwipeSwap ✅
Additional whitelisting for ACS and ACSI 🌗
Create 3BTC Metapool ✅
ACryptoS UI v1.5 (Landing Page) ✅
Continuous addition of new high quality Vaults&Farms 🌗
ACryptoS UI v2 🌗
Stableswap UI v2 🌗
2nd ACS emission reduction ✅
1st ACSI emission reduction to match ACS emission curve
Soteria insurance integration ✅
Additional audits 🌗
Migration of all ACSI farms to v2 ✅
Use vaulted ACSI as yield booster for ACSI farms ✅
Addition of ACS-BNB native farm on MDEX ✅
SCV.finance portfolio tracker listing ✅
Continuous addition of new high quality Vaults&Farms
Additional audits
3rd ACS emission reduction
2nd ACSI emission reduction | https://docs.acryptos.com/roadmap | 2021-06-12T22:25:14 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.acryptos.com |
简介¶
顶点组可能有很多关联的顶点,因此有大量的权重(每个顶点指定一个权重)。权重绘制 是一种以非常直观的方式保持大量权重信息的方法。
它主要用于绑定的网格,其中顶点组用于定义骨骼在网格上的影响区域。但是我们也使用它来控制粒子的发射,头发密度,许多修改器,形状关键帧等等。
可以通过按 Ctrl-Tab 键从模式菜单进入 权重绘制模式 。所选的网格物体显示为带有彩虹色谱。该颜色可视化显示活动顶点组中每个顶点关联的权重。默认情况下,蓝色 表示无权重, 红色 表示满权重。
可以通过权重笔刷绘制为物体的顶点指定权重。开始在网格上绘制会自动向活动的顶点组添加权重(如果需要,将创建一个新的顶点组)。
权重颜色代码¶
权重通过使用冷/热颜色系统的渐变显示,使得低值(权重接近0.0)的区域以蓝色(冷)显示,高值(权重接近1.0)的区域以红色(热)显示。所有中间值都以彩虹色谱(蓝色,绿色,黄色,橙色,红色)显示。
除了上面描述的颜色代码外,Blender对于未引用的顶点还有一个特殊的视觉标记(作为选项):它们被绘制(显示)成黑色。因此,您可以同时看到引用区域(冷/热颜色)和未引用区域(黑色)。当你查找权重错误时,这是最实用的。请参见 选项.
Normalized Weight Workflow¶
In order to be used for things like deformation, weights usually have to be normalized, so that all deforming weights assigned to a single vertex add up to 1. The Armature modifier in Blender does this automatically, so it is technically not necessary to ensure that weights are normalized at the painting stage.
However, while more complicated, working with normalized weights has certain advantages, because it allows use of certain tools designed for them, and because when weights are normalized, understanding the final influence of the current group does not require knowing weights in other groups on the same vertex.
These tools are provided to aid working with normalized weights:
- 全部标准化
In order to start working with normalized weights it is first necessary to normalize the existing weights. The Normalize All tool can be used for that. Make sure to select the right mode and disable Lock Active.
- 自动规格化
Once the weights are initially normalized, the Auto Normalize option can be enabled to automatically maintain normalization as you paint. This also tells certain tools that the weights are supposed to be already normalized.
- Vertex group locking
Any vertex group can be locked to prevent changes to it. This can be done via the lock icon in the vertex group list, or using bone selection and the locks pie menu.
This setting prevents accidental edits to groups. However, since it is also respected by Auto Normalize, in the normalized weight workflow it has a more significant meaning of locking the current influence of chosen bones, so that when you paint other bones, the weight is redistributed only between the unlocked groups.
In locations affected by more than two bones this allows more precise tweaking and re-balancing of weights by temporarily focusing on a subset of bones. This can also be aided by the Lock Relative option, which displays weights as if re-normalized with the locked groups deleted, thus making it appear as if the locked groups did not even exist.
- 多次绘制
Finally, the Multi-Paint option allows treating multiple selected bones as if they were one bone, so that the painting operations change the combined weight, preserving the ratio within the group. Combined with locking, this allows balancing between one set of bones versus the rest, excluding a third set that has its influence not affected in any way due to locks.
Technically, this option does not require the normalized workflow, but since non-normalized weights can add to more than 1, the weight display behaves best with Auto Normalize enabled.
Tip
For example, when dealing with a bone loop, e.g. mouth or an eye, selecting the loop with Multi-Paint exposes the falloff between the loop as a whole and surrounding bones, while locking the surrounding bones and using Lock Relative displays the falloff between bones within the loop. Thus the complex two-dimensional falloff of each bone can be viewed and edited as two independent one-dimensional gradients. | https://docs.blender.org/manual/zh-hans/dev/sculpt_paint/weight_paint/introduction.html | 2021-06-12T23:48:21 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.blender.org |
CS-Cart is developed to meet most server configurations, ranging from shared hosting accounts to dedicated servers.
There are two core requirements for your host to run CS-Cart:
PHP version 5.6 or 7. CS-Cart supports SAPI mod_php, FPM, FastCGI.
We recommend not to move your store to PHP 5.6, if you used PHP 7.0 and newer versions. This could lead to users not being able to sign.
The following PHP commands should be enabled:
The following PHP extensions should be installed (required extensions are marked with *):
Notes:
GD is available almost at any hosting. But we recommend Imagick, because it offers better performance and quality of the processed images.. | https://docs.cs-cart.com/4.12.x/install/system_requirements.html | 2021-06-12T23:15:14 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.cs-cart.com |
Consuming Software with Modularity
This is a high-level overview of how users can install software with Fedora Modularity and how is it different from the traditional workflow(s).
Understanding the Delivery Channels
Modular Fedora will ship with two sets of repositories:
The traditional base repository representing the distribution as we know it today — there are no user-visible changes in this part.
A new modular repository (often referred to as the "Application Stream" or AppStream for short) including all the additional versions delivered as modules.
The Modules repository will be optional for users.
Consuming the traditional packages
There are no changes to the traditional user experience. Packages from the traditional repository will be installed and updated using the same methods as before. Everything keeps working as it used to.
Consuming the modular packages
If a user desires to use the optional Modular repositories in order to consume non-default versions of software, there will be some new concepts introduced in the client tooling to manage them. We outline these below.
Enabling a module
Enabling a module makes its packages available on the system. Packages delivered as part of a module always have a priority over the ones from the traditional base, regardless of their actual version. Packages in modules are often replacements of the ones in the traditional base.
Modularity brings parallel availability, not parallel installability. Only one stream of a given module can be enabled on a system — so it is always clear which version gets installed. Installing and running multiple versions of software can be achieved by using existing technologies, such as containers.
Installing a module
To make installation easy, some modules can be also installed as a unit, without the need of enabling them first and then installing individual packages. Installing a module doesn’t necessarily mean installing all of its packages. Modules can define something called an "installation profile" to help users with the installation.
Installation profile
Installation profiles are essentially lists of packages that help users with the module installation. To give a specific example, a database module could have two profiles: server and client. This helps the user to install what they need without the need of thinking about the package names. However, installation profiles are just an optional feature and users can still install packages directly.
Updating the system
Updating the system always respects user’s module choices (or lack of module) even when there are multiple (and possibly higher) versions available.
If the user doesn’t enable any modules, all packages on their system get updated to the latest versions provided by the traditional base repository. However, if the user enables a module stream, packages get updated to the newest version provided by the module.
Thanks to this mechanism, the user has better control over the versions of packages on their system while receiving updates (such as security patches) for the whole system.
Running multiple versions using containers
Modularity brings parallel availability, not parallel installability. There are other technologies e.g. Linux containers or software collections that deal with this.
All the steps described above can be used in a container the same way as on a traditional system. Producing up-to-date containers with multiple versions of software in an automated way is also one of the goals. | https://docs.fedoraproject.org/ur_PK/modularity/architecture/consuming/ | 2021-06-13T00:46:42 | CC-MAIN-2021-25 | 1623487586465.3 | [array(['../../_images/mod-doc-repositories.png', 'mod doc repositories'],
dtype=object) ] | docs.fedoraproject.org |
Customer Attributes
Customer attributes provide the information that is required to support the order, fulfillment, and customer management processes. Because your business is unique, you might need fields in addition to those provided by the system. You can add custom attributes to the Account Information, Address Book, and Billing Information sections of the customer’s account. Customer address attributes can also be used in the Billing Information section during checkout, or when guests register for an account.
Customer Attributes
Step 1: Complete the Attribute Properties
On the Admin sidebar, go to Stores > Attributes > Customer.
In the upper-right corner, click Add New Attribute.
Customer Attribute Properties
In the Attribute Properties section, do the following:
Enter a Default Label to identify the attribute during data entry.
Enter an Attribute Code to identify the attribute within the system..
Complete the Data Entry Properties.
To determine the type of input control that is used for data entry, set Input Type to one of the following:
If the customer must enter a value in the field, set Values Required to
Yes.
To assign an initial value to the field, enter a Default Value.
To check the data entered into the field for accuracy before the record is saved, set Input Validation to the type of data to be allowed in the field. The available values depend on the Input Type specified.
To limit the size of Text Field and Text Area input types, enter the Minimum Text Length and Maximum Text Length.
To apply a preprocessing filter to values entered in a text field, text area, or multiple line input type, set Input/Output Filter to one of the following:
Data Entry Properties
Complete the Customers Grid and Segment Properties.
To be able to include the column in the Customers grid, set Add to Column Options to
Yes.
To filter the Customers grid by this attribute, set Use in Filter Options to
Yes.
To search the Customers grid by this attribute, set Use in Search Options to
Yes.
To make this attribute available to customer segments, set Use in Customer Segment to
Yes.
Customer Grid and Segment Properties
Step 2: Complete the storefront properties
To make the attribute visible to customers, set Show on Storefront to
Yes.
Enter a number in the Sort Order field to determine its order of appearance when listed with other attributes.
Set Forms to Use to each form that is to include the attribute. To choose multiple options, hold the Ctrl key down and click each form.
Storefront Properties
Step 3: Complete the labels/options
In the left panel, choose Manage Labels/Options.
Under Manage Titles, enter a label to identify the attribute for each store view.
When complete, click Save Attribute.
Manage Labels/Options | https://docs.magento.com/user-guide/stores/attributes-customer.html | 2021-06-13T00:21:40 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.magento.com |
<jms:listener
To Listen For New Messages
The On New Message (Listener) source in the JMS connector provides ability to consume Messages as they arrive to the Destination.
Listening for New Messages
The syntax to listen for new messages from a queue is:
The source above listens for new messages in the Queue identified by the destination, returning a
MuleMessage each time a JMS Message is available in the Queue.
The MuleMessage will have:
The message’s content as payload.
The message’s metadata in the message attributes.
By default, the Message consumed will be ACKed only when the execution of the flow receiving the message completes successfully. If instead, an error occurs during the execution of the flow, the Session will be recovered and the Message will be redelivered.
For more information regarding a Message ACK, check the How To Handle Message Acknowledgement.
Configuring Message Throughput
When extra processing power is needed, the JMS Listener allows you to configure the
numberOfConsumers that a given listener will use to consume messages concurrently.
By default, each listener will use four consumers that will produce messages concurrently. Since each consumer will wait for the Message to be processed,that means that you’ll have a maximum of four messages in-flight at the same time.
If you need to increase the concurrent message processing, just increase the
numberOfConsumers in the Listener.
Filtering Incoming messages
The JMS connector provides full support for filtering which Messages should be consumed based on the standard JMS
selector language.
For example, if you have a priority Queue with Messages that need to be processed faster than the others, you can do:
<flow name="consumer"> <jms:listener </flow>
inboundContentType parameter.
The same process works for encoding. By default, the connector will assume that the runtime’s default encoding matches the one in the Message if no other information is provided. You can set this by using the
inboundEncoding parameter.
Replying to Incoming Messages
When an incoming JMS Message declares a REPLY_TO destination, the JMS Listener will automatically produce a response when the Message is processed successfully, meaning that no error occurs during the flow execution. In that case, when the flow is completed a response will be published to the destination specified in the processed Message header.
Responses can be configured in the Listener with the following syntax:
. | https://docs.mulesoft.com/jms-connector/1.5/jms-listener | 2021-06-13T00:08:26 | CC-MAIN-2021-25 | 1623487586465.3 | [] | docs.mulesoft.com |
The Blog page
Let’s set the Blog Page..
Now you can create posts the normal way. If you want to assign the blog to the main navigation menu, you must put the page you just created at the menu.
Setting the “Continue Reading” Link for posts
At the blog page, you will see that each post has a short description under it, and there is a Continue Reading link that leads you to the full post page. In order to define this “breakpoint” you must use the Insert More Tag button which is located at the text editor, as highlighted at the screenshot below.
Place the cursor to the point of the content where you wish to show the Continue Reading link, and click on the button shown at the above screenshot. | https://docs.codestag.com/article/223-the-blog-page | 2019-07-16T00:29:31 | CC-MAIN-2019-30 | 1563195524290.60 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/53fe2047e4b0cc8b06322ff0/file-v0cMx3C9Tf.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/53f5a925e4b01c9afd10e8df/images/53fda0b7e4b0eda518c428bd/file-cqs4Sq0dgo.jpg',
None], dtype=object) ] | docs.codestag.com |
All content with label client+distribution+eventing+events+gridfs+infinispan+query+rehash+release.
Related Labels:
publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, listener, state_transfer, cache,
amazon, s3, memcached, grid, test, jcache, api, xsd, maven, documentation, write_behind, ec2, 缓存, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, out_of_memory, jboss_cache, import, index, batch, configuration, hash_function, buddy_replication, loader, colocation, write_through, cloud, remoting, mvcc, tutorial, notification, »
( - client, - distribution, - eventing, - events, - gridfs, - infinispan, - query, - rehash, - release )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/client+distribution+eventing+events+gridfs+infinispan+query+rehash+release | 2019-07-16T01:34:53 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.jboss.org |
Version: 2.0
Upgrading VseWss Solutions Lab.
Lab Time: 45 minutes
Lab Folder: C:\Student\Labs\xx_UpgradeVseWSSSolutions
Lab Overview: In this lab exercise you will upgrade a SharePoint 2007 solution that was created using VSeWSS 1.3 to SharePoint 2010 and the new SharePoint Tools in Visual Studio 2010. You will first import the VSeWSS project into Visual Studio 2010 using a special project template. Importing the project maintains artifact layout, project settings and source code. Next you will address any inconsistencies resulting from the import.
Before you begin this lab, you must run the batch file named SetupLabXX.bat. This batch file runs a PowerShell script which creates a new SharePoint site collection at the location. You must also run the VSeWSS_Upgrade_Sample_Beta_20100108.msi file located in the Lab Folder >> Starter Files Folder. Accept all default choices for the installation. This installs the installer files to C:\Program Files (x86)\Microsoft\VSeWSS Upgrade\. Navigate to this location and run the VSeWSSUpgrade_Beta2_20091203.msi file accepting all defaults. When this is finished you must navigate to Start menu >> All Files >> Microsoft Visual Studio 2010 >> Visual Studio Tools and right-click on the Visual Studio Command Prompt and select Run as administrator. On the Visual Studio Command Prompt you must run the following command DEVENV /installvstemplates ; this installs the required project templates into Visual Studio 2010 to allow for the upgrade of VSeWSS projects from SharePoint 2007 to SharePoint 2010.
Download The Offline Training Kit
Download Hands-on Lab Document (DOCX)
Download Hands-on Lab Source Files | https://docs.microsoft.com/en-us/previous-versions/gg620611(v%3Dmsdn.10) | 2019-07-16T00:35:51 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.microsoft.com |
Patch Reporting Software Titles
The Jamf Software Server (JSS) includes more than 30 third-party macOS software titles that can be used for patch reporting and patch notifications. These third-party software titles represent software that is not available in the App Store. (For the list of software titles provided in the JSS, see the Patch Reporting Software Titles Knowledge Base article.)
When you configure a patch reporting software title, you are able to receive a notification when a third-party macOS software title update has been released by the vendor and added to the JSS, the JSS, you save time by not having to track down the required information.
Requirements
To configure third-party macOS software titles and enable them to automatically update, the JSS must have outbound access to port 443 to access the patch server and the software title definitions which are hosted on Amazon CloudFront.
To configure a software title that requires an extension attribute, you must use a JSS user account that has full access. A JSS user account with site access only will not be able to configure a software title that requires an extension attribute.
Configuring a Patch Reporting Software Title
Log in to the JSS with a web browser.
Click Computers at the top of the page.
Click Patch Reporting.
Click New
.
Choose a software title.
Use the General pane to configure basic settings for the software title, including whether to receive a JSS.
Note: You cannot configure a patch reporting software title if it uses an extension attribute that has the same name as an existing extension attribute. You must first rename the existing extension attribute so that you can save the new one.
(Optional) Click the Definition tab to review information about the supported software title versions and attributes about each version.
Click Save.
Related Information
For related information, see the following sections in this guide:
About Patch Management
Learn about patch management for Apple Updates and for third-party updates.
Patch Reporting
Learn how to create a patch report for a macOS software title.
For related information, see the following Knowledge Base article:
Jamf Process for Updating Patch Reporting Software Titles
Learn about the contents of a software definition file in the JSS and the process used by Jamf to add software title updates to the JSS. | https://docs.jamf.com/9.99.0/casper-suite/administrator-guide/Patch_Reporting_Software_Titles.html | 2019-05-19T14:48:17 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.jamf.com |
Introduction McAfee® Drive Encryption delivers powerful encryption that protects data from unauthorized access, loss, and exposure. With data breaches on the rise, it is important to protect information assets and comply with privacy regulations. Comprehensive protectionThe McAfee Drive Encryption suite provides multiple layers of defense against data loss with several integrated modules that address specific areas of risk. The suite provides protection for individual computers and roaming laptops with Basic Input Output System (BIOS) and Unified Extensible Firmware Interface (UEFI). What is McAfee Drive EncryptionMcAfee Drive Encryption is a strong cryptographic utility for denying unauthorized access to data stored on any system or disk when it is not in use. How McAfee Drive Encryption worksMcAfee Drive Encryption protects the data on a system by taking control of the hard disk or self-encrypting drive (Opal) from the operating system. When used with self-encrypting drives, Drive Encryption manages the disk authentication keys; with non-self-encrypting drives. The Drive Encryption driver encrypts all data written to the disk and decrypts the data read off the disk. Product componentsEach McAfee Drive Encryption component or feature plays a part in protecting your systems. FeaturesThese features of Drive Encryption are important for your organization's system security and protection. RequirementsTesting for client system requirementsClient systems must meet the requirements for Drive Encryption before the product can be installed. | https://docs.mcafee.com/bundle/drive-encryption-7.2.0-product-guide-epolicy-orchestrator/page/GUID-3F155DB3-9AA4-4CE2-9810-491100A1720E.html | 2019-05-19T14:58:40 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mcafee.com |
7.18.4
Release date: April 12th, 2019Download
Fixes
- We fixed an out-of-memory issue on cluster slaves caused by persistent session objects. (Ticket 80910)
7.18.3
Release date: December 20th, 2018Download
Fixes
- We fixed an issue where closing the current page and opening a new one from a microflow resulted in going to the previous page. (Tickets 66534, 76966)
- We fixed an issue where clicking a microflow button twice was possible even if Disabled during action was set to true. (Ticket 76020)
7.18.2
Release date: December 11th, 2018Download
New Features
- You can now configure the maximum number of concurrent connections for REST service calls and web service calls by adding
http.client.MaxConnectionsPerRouteand
http.client.MaxConnectionsTotalin your custom settings.
- We upgraded our internal server Jetty to 9.4.12.
7.18.1
Release date: September 18th, 2018Download
Fixes
- We fixed an issue (introduced in Mendix 7.17) where users were removed in the case of concurrent login attempts in the same anonymous session. (Ticket 67964)
- We fixed an issue where consistency errors remained visible in the Errors pane even when they had been fixed. This was also impacting Web Modeler users. (Ticket 68234)
- We now use parameterized values for literals participating in data retrieval queries to make sure that databases can execute the same SQL statements more efficiently and with improved performance. This change might result in situations where the JDBC drivers for HSQLDB and IBM DB2 databases complain about numeric value out of range when a numeric attribute with a lower size data type is compared with a higher size data type. (Ticket 68083)
- We fixed a bug where the Desktop Modeler complained about a “text already in occurrences.” (Ticket 68102)
- We fixed an issue with creating branches from tags that was preventing the proper selection of a tagged version by constantly refreshing the dialog box.
- We fixed a bug that caused performance degradation when adding entities to a reference set. (Tickets 67117, 67904, 67947, 68051, 65985)
Known Issues
-)
7.18.0
Release date: August 28th, 2018
A software error in this version has resulted in an issue where under specific circumstances, users can be deleted from an application.
The issue can occur when anonymous users are enabled for an app, a user is being authenticated, and a new login occurs for that same user before the authentication process is completed. For example, the user clicks the login button multiple times, and the first click triggers the request that starts the authentication process. If the request triggered by the second click comes in before the authentication process is completed, the user will be deleted.
Because of this issue, this version is no longer available for download. For the next version available for download, please see Mendix 7.18.1 above.
Productivity Improvements
- We greatly reduced the time it takes Desktop Modeler to update the Errors in the project after you make a change. You can expect the check time to be well below a second now, which is a more than 10-fold win for bigger projects.
- We improved the dialog box used to create new branch lines. It no longer fetches all tags before opening and the default has been switched to Main line.
- The Select Ports dialog box now can be resized horizontally to make long names readable.
Show WSDL and XSD
In consumed web services and XML schemas, we added a Show button that shows the content of the WSDL or XML schema file that was imported.
This is based on an upvoted idea from Bart Rikers submitted to the Mendix Ideas Forum. Thanks, Bart!
Choose Not to Commit an Import Mapping
For import mappings in the import mapping action, call REST service action, and call web service action, you can now specify whether you want to commit the changes or not.
This is based on a suggestion by Martin Leppen. Cheers, Martin!
Align and Distribute Microflow Activities
You can now select several microflow activities and align them or distribute them evenly.
This is based on an upvoted idea from Edwin Huitema submitted to the Mendix Ideas Forum. Thanks, Edwin!
Java Action Parameter Categories
You can now group the parameters of Java actions into categories. This gives you a better overview in Java action calls, especially where there are many parameters.
New Features
- It is now possible to use templates as the label of an input widget. This is based on an upvoted idea from Fabian Recktenwald submitted to the Mendix Ideas Forum. Nice one, Fabian!
- We added support for using a private version control server in combination with a proxy server. If your private version control server is not behind the proxy server, you can make an exception for it in the Internet Options control panel (Connections > LAN settings > Advanced > Exceptions). (Tickets 57171, 59983)
- In import and export mappings, we added an icon to the structure that you are mapping to, so you can easily see whether it comes from an imported web service, XML schema, JSON structure, or message definition.
- In published REST services, you can now check a box called Enable CORS that will allow other websites to access your REST service.
- We now support the creation of MYSQL databases through the Desktop Modeler. (Ticket 64894)
You are now able to access and edit associations from an entity details form. (Ticket 65040)
This is based on an upvoted idea from Chek Heng To submitted to the Mendix Ideas Forum. Thanks, Chek!
Fixes
- We have fixed an issue about the high DPI scaling of the Desktop Modeler Version Selector and removed the old splash screen. (Ticket 50101)
- We fixed the Team Server connection issues that occurred after switching an account or changing your password. (Ticket 67205)
- We fixed the issues with committing and updating that occurred after a timeout of an hour (the error details mentioned
SvnOperationCanceledException). (Ticket 67735)
- We fixed the behavior of the Team Server App drop-down menu so that it no longer closes when navigating with the keyboard. Also, the list of branches will not contain duplicates anymore.
- We fixed an issue that occurred when opening a project that was created online. When syncing with the Web Modeler, the Desktop Modeler complained about there already being local changes. No more.
- When a Create button was placed on a data grid with a specialization configured as an entity, the generalization would be instantiated instead. The button creates the correct entity again. (Ticket 67014)
- Offline file synchronization can fail because of connection issues, and situations arose where the synchronization was not attempted again, which led to a partial synchronization. We fixed this by implementing a temporary upload, which will be discarded in case of a synchronization failure. (Ticket 64456)
- When the default action on a grid on a select page was to select the item, this was not communicated properly to the corresponding reference selector. Instead, the reference was cleared. This and a similar issue with the (input) reference set selector were fixed. (Ticket 65193)
- We fixed the issue causing the built-in admin user to retain old roles when a new role was selected in the Security configuration of the app project. (Ticket 65664)
- We fixed a problem where using Microsoft Azure file storage caused the configured certificates not to be used anymore in Mendix. (Ticket 66176)
- We updated our version of the Microsoft Azure Storage Library from 7.0.0 to 8.0.0.
- We improved error handling in the interrupt-request admin action. A failure when logging a log message will still attempt interruption of the thread.
- We upgraded the xerces.xercesImpl.jar library to version 2.12.
- We fine-tuned the colors of microflow activities to give them more contrast. (Ticket 65699)
- We fixed an issue where the default value was inserted every time for an AutoNumber attribute having no access rights. (Ticket 63071)
- It is now possible to add variables to
createXPathQueryin the Core Java API. (Ticket 67019)
- We fixed an issue where sorting on the search field in a data grid resulted in a database exception. (Ticket 64852)
- We fixed a bug in the behavior of
COUNTon an attribute aggregate query. Previously, when a count on an attribute was defined along with other aggregate functions, count was always rendered as
COUNT(*). As of this release, attribute-specific aggregate functions are always rendered in terms of an associated attribute. As this may change the result of the
COUNTaggregate function in your application, a temporary custom property has been introduced called
DataStorage.CountOnAttribute. Setting this property to False will allow you to fall back to the old
COUNTbehavior. We plan to remove the
DataStorage.CountOnAttributecustom property in Mendix version 8.
- We upgraded the HSQLDB (built-in database) driver version from 2.3.4 to 2.4.1.
Known Issues
- There is a known issue where users in an application are deleted under specific circumstances. Anonymous users have to be enabled for this to occur, and we are investigating the root cause of this issue. (Ticket 67964)
- There is a known issue with consistency error checking. In some cases, consistency errors remain shown in the Errors pane, even though they have been fixed. (Ticket 68234)
-) | https://docs.mendix.com/releasenotes/studio-pro/7.18 | 2019-05-19T14:18:01 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mendix.com |
Backing up volumes to a SnapVault backup involves starting the baseline transfers, making scheduled incremental transfers, and restoring data upon request.
In general, baseline transfers work as follows:
A baseline transfer occurs when you initialize the SnapVault relationship. When you do this, Data ONTAP creates a new Snapshot copy. Data ONTAP transfers the Snapshot copy from the primary volume to the secondary volume. This Snapshot copy is the baseline of the volume at the time of the transfer and is a complete transfer, not an incremental transfer. As a result, none of the other Snapshot copies on the primary volume are transferred as part of the initial SnapVault transfer, regardless of whether they match rules specified in the SnapVault policy.
The source system creates incremental Snapshot copies of the source volume as specified by the Snapshot policy that is assigned to the primary volume. Each Snapshot copy for a specific volume contains a label that is used to identify it.
The SnapVault secondary system selects and retrieves specifically labeled incremental Snapshot copies, according to the rules that are configured for the SnapVault policy that is assigned to the SnapVault relationship. The Snapshot label is retained to identify the backup Snapshot copies.
Snapshot copies are retained in the SnapVault backup for as long as is needed to meet your data protection requirements. The SnapVault relationship does not configure a retention schedule, but the SnapVault policy does specify number of Snapshot copies to retain.
At the end of each Snapshot copy transfer session, which can include transferring multiple Snapshot copies, the most recent incremental Snapshot copy in the SnapVault backup is used to establish a new common base between the primary and secondary volumes and is exported as the active file system.
If data needs to be restored to the primary volume or to a new volume, the SnapVault secondary transfers the specified data from the SnapVault backup. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-63E316E9-6F13-40E3-B820-67261A9C392C.html | 2019-05-19T14:49:34 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netapp.com |
Server-Side Rendering
- Overview
- Environment
- Command Line Usage
- Web Server Usage
- How to Set the Path to PhantomJS
- How to Set the Path to Export Server
- Contribution
Overview
Server-side rendering is a tool that helps a lot in some cases: for example, when you've got an automated email system, and you need to include charts in letters; when you need charts in reports, which are generated on servers; when you need to show the charts on a Smart TV with a stripped version of a browser; etc. In such cases we offer you to use AnyChart Export Server.
AnyChart Export server is also used to provide Export chart to CSV, Excel, JSON and XML options.
Environment
AnyChart Export Server uses PhantomJS which emulates a browser on the server (WebKit), runs our charts in it, gets SVG and converts it into *.PNG, *.JPG or *.PDF files, using Apache Batik. Export to Excel uses Apache POI. Exporting to CSV, JSON and XML doesn't require PhantomJS, server serves only as an intermediary to allow file to be saved using a browser. AnyChart Export Server itself is a jar-file that runs using Java so it works Windows, Linux, MacOS or any other OS where Java is available.
To run the AnyChart Export Server, do the following:
- Install PhantomJS: instructions and downloads at)
- Install Java: version above 6.0 - )
- Download AnyChart Export Server binary file
Command Line Usage
If you want to use the AnyChart Export Server from the Command Line mode you have to set "cmd" (Command Line) as the first parameter, then define the path to the chart or insert the chart code as a string and then set the parameters of the image (dimensions, quality, extension and so on). You'll find the full list of parameters below.
Sample command line:
java -jar anychart-export.jar cmd --script "var chart = anychart.line([1,2,5]); chart.container('container'); chart.draw();" --output-path YOUR_OUT_PATH
Full list of the parameters available:
Web Server Usage
AnyChart Export Server is also used when you use AnyChart Export methods and by default AnyChart component uses server hosted at..
To run Export server in server mode set "server" as the first parameter. Host and port params are optional. The usual http web server is run, it recieves POST requests and sends the result as a base64-line or as a Byte Array with the "File Transfer" header.
When you stop the server, you must stop the PhantomJS process too.
The sample of server running:
java -jar anychart-export.jar server
The sample of a command written in console:
curl -X POST -H "Accept: application/json" --data "responseType=base64&dataType=script&data=var chart = anychart.line([1,2,5]); chart.container('container'); chart.draw();" localhost:2000/png
Full list of server parameters that can be set:
There's a list of URL's which export server responds to:
- /status
- /png
- /jpg
- /svg
- /csv
- /xlsx
- /xml
- /json
Request parameters (required):
- data - script or svg that should be transformed into a picture.
- data-type - a field that contains the information about the data, it might be "script" or "svg".
- response-type - a field that tells how to export the result (file or as base64)
Optional request parameters:
- file-name - file name
- width - picture width
- height - picture height
- quality - picture quality
- force-transparent-white - make the background white or leave it transparent
- pdf-size - the *.pdf-document sheet size
- pdf-x - x of the chart in the *.pdf document
- pdf-y - y of the chart in the *.pdf document
- pdf-width - pdf width
- pdf-height - pdf height
- landscape - the document orientation
How to Set the Path to PhantomJS
As it was mentioned before, export server needs PhantomJS. If you have installed it somewhere different from the default or you've got Windows OS, check the place where Phantom JS is installed and set the right path for the export server. Like this:
java -Dphantomjs.binary.path=PATH_TO_YOUR_PHANTOMJS -jar
How to Set the Path to Export Server
If you have decided to use your own server, use the anychart.exports.server() method and set the address of your server as a parameter:
anychart.exports.server("");
Contribution
If you've got any suggestions or ideas about export server work and improvements, welcome to our open repository on github. | https://docs.anychart.com/v7/Common_Settings/Server-side_Rendering | 2019-05-19T15:12:36 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.anychart.com |
/
at this 4 minute Python tutorial.
Ignore #doctest: +SKIP in the examples. That has to do with internal testing of the examples.
The equals sign (=) is the assignment operator, >>> eq = sin(2*x) - 2*sin(x)*cos(x) >>> simplify(eq) 0 >>> expand(eq,') # Symbol, `a`, stored as variable "a" >>> b = a + 1 # an expression involving `a` stored as variable "b" >>> print(b) a + 1 >>> a = 4 # "a" now points to literal integer 4, not Symbol('a') >>> print(a) 4 >>> print(b) # "b" is still pointing at the expression involving = symbols('f g h', cls=Function).
>>>'> in Python 2.x, <class 'float'> in Py3k <... : + sqrt(2), -sqrt* ‘base’,) | https://docs.sympy.org/0.7.4.1/gotchas.html | 2019-05-19T14:41:46 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.sympy.org |
11.13. New Mail
Several people have sent me methods for checking whether or not they had new e-mail. Most of them relied on programs that aren't on every system. Then I received the following code from Henrik Veenpere: cat $MAIL |grep -c ^Message-. This is simple and elegant, and I like it. | http://tldp.docs.sk/howto/bash-prompt/x806.html | 2019-05-19T15:08:14 | CC-MAIN-2019-22 | 1558232254889.43 | [] | tldp.docs.sk |
What's new McAfee® MVISION Endpoint Detection and Response (MVISION EDR) is security software that allows you to detect, investigate, and contain threats. You can access MVISION EDR from McAfee® MVISION ePO (MVISION ePO) or McAfee® ePolicy Orchestrator® (McAfee® ePO™) . Simplified deployment When you log on to MVISION EDR for the first time, detailed instructions are provided to help you install and deploy the software quickly. You can configure the software using your local McAfee ePO or use MVISION ePO to have a faster deployment experience. Continuous real-time monitoring The Monitoring dashboard displays potential threats and their severity level. When a potential threat is selected, details such as affected devices, process activity, device trace information, process attributes, and threat behavior are displayed. The data from devices is displayed as alerts. You can view the alerts that are assessed and categorized as High (red), Medium (orange), and Low (yellow). You can view the suspicious indicators for any suspicious activities and use the Mitre ATT&CK™ matrix framework to view the different techniques involved. You can analyze specific threat behaviors using the sequential, summary, or timeline view. You can dismiss a threat and exclude a hash from being detected in the future, or create an investigation for further analysis. Artificial intelligence guided investigation The Investigating dashboard displays the number of investigations that are currently being analyzed, the number of closed investigations, and the number of high priority investigations. View the summary of an investigation to determine how a threat might have affected devices. View the notes and status of an investigation. Use the investigation guides to view questions and hypotheses on malware alert triage, outbound network alert triage, threat intelligence alert triage, and phishing alert triage. Use the graph view to understand the critical details that are identified as part of security findings or to enrich the investigation with more data obtained through manual actions. Link and compare similar investigations for effective and efficient investigative methods. Threat containment The Devices and Process Details panels on the Monitoring dashboard allow you to contain threats on devices. You can select one or more devices and apply an action, such as quarantine the affected device, or dismiss the threat. Real-time search You can use the Real-time Search page to search for information about a specific threat or alert in real time . You can obtain information about processes currently running on devices using real-time search queries. Real-time searches run queries directly on devices to obtain current data. Historical search You can use the Historical Search page to do a historical search in the cloud to get visibility of the information that was collected from the devices over the selected period including process execution, files creation, file archives creation, scripts written, admin/hacking tools executed, services changed, auto-run entries created or modified, scheduled task modified, DNS requests, user logon activities, and loaded DLLs. Performance metrics You can use the Performance Metrics dashboard to quickly get an overall status of all ongoing investigations. The trend graphs can help in assessing the allocation of resources and effort required in a Security Operations Center (SOC) to investigate and analyze potential threats. Track action history You can use the Action History page to view the details of all containment actions taken on a threat or device from the Monitoring and Investigating dashboards. | https://docs.mcafee.com/bundle/mvision-endpoint-detection-and-response-release-notes/page/GUID-688C63F6-E632-437D-A7E0-3028C1354B97.html | 2019-05-19T15:21:44 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mcafee.com |
Comparing custom images and formulas in DevTest Labs
Both custom images and formulas can be used as bases for created new VMs. However, the key distinction between custom images and formulas is that a custom image is simply an image based on a VHD, while a formula is an image based on a VHD in addition to preconfigured settings - such as VM Size, virtual network, subnet, and artifacts. These preconfigured settings are set up with default values that can be overridden at the time of VM creation. This article explains some of the advantages (pros) and disadvantages (cons) to using custom images versus using formulas.
Custom image pros and cons
Custom images provide a static, immutable way to create VMs from a desired environment.
Pros
- VM provisioning from a custom image is fast as nothing changes after the VM is spun up from the image. In other words, there are no settings to apply as the custom image is just an image without settings.
- VMs created from a single custom image are identical.
Cons
- If you need to update some aspect of the custom image, the image must be recreated.
Formula pros and cons
Formulas provide a dynamic way to create VMs from the desired configuration/settings.
Pros
- Changes in the environment can be captured on the fly via artifacts. For example, if you want a VM installed with the latest bits from your release pipeline or enlist the latest code from your repository, you can simply specify an artifact that deploys the latest bits or enlists the latest code in the formula together with a target base image. Whenever this formula is used to create VMs, the latest bits/code are deployed/enlisted to the VM.
- Formulas can define default settings that custom images cannot provide - such as VM sizes and virtual network settings.
- The settings saved in a formula are shown as default values, but can be modified when the VM is created.
Cons
- Creating a VM from a formula can take more time than creating a VM from a custom image.
New to Azure? Create a free Azure account.
Already on Azure? Get started with your first lab in DevTest Labs.
Related blog posts
Next steps
Feedback
Send feedback about: | https://docs.microsoft.com/en-gb/azure/lab-services/devtest-lab-comparing-vm-base-image-types | 2019-05-19T14:33:15 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.microsoft.com |
Only specific SELECT statements—simple accesses of a single table—allow you to update or delete rows as you step through them.
To create an updatable result set, you must use a JDBC peer client connection to execute a query that specifies the FOR UPDATE clause. For more information, see SELECT and FOR UPDATE Clause.
See also SQL Language Limitations for information about limitations with updatable result sets. | http://gemfirexd.docs.pivotal.io/docs/1.0.0/userguide/developers_guide/topics/resultsets/cdevconcepts30811.html | 2019-05-19T15:49:29 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
Creates a backup of operational disk stores for all members running in the distributed system. Each member with persistent data creates a backup of its own configuration and disk stores.
Use the mcast-port and -mcast-address, or the -locators options, on the command line to connect to the GemFire XD cluster.
gfxd backup [-baseline=<baseline directory>] <target directory> [-J-D<vmprop>=<prop-value>] [-mcast-port=<port>] [-mcast-address=<address>] [-locators=<addresses>] [-bind-address=<address>] [-<prop-name>=<prop-value>]*
Alternatively, you can specify these and other distributed system properties in a gemfirexd.properties file that is available in the directory where you run the gfxd command.
The table describes options for gfxd backup.
An online backup saves the following:
gfxd backup ./gfxdBackupLocation -locators=warsaw.pivotal.com[26340]
See also Backing Up and Restoring Disk Stores.
gfxd backup -baseline=./gfxdBackupLocation/2012-10-01-12-30 ./gfxdBackupLocation -locators=warsaw.pivotal.com[26340]
When you run gfxd backup, it reports on the outcome of the operation.
The backup may be incomplete. The following disk stores are not online: DiskStore at hostc.pivotal.com /home/dsmith/dir3A complete backup can still be performed if all table data is available in the running members.
The tool reports on the success of the operation. If the operation is successful, you see a message like this:
Connecting to distributed system: locators=warsaw.pivotal.com26340 The following disk stores were backed up: DiskStore at hosta.pivotal.com /home/dsmith/dir1 DiskStore at hostb.pivotal.com /home/dsmith/dir2 Backup successful.
If the operation does not succeed at backing up all known members, you see a message like this:
Connecting to distributed system: locators=warsaw.pivotal.com26357 The following disk stores were backed up: DiskStore at hosta.pivotal.com /home/dsmith/dir1 DiskStore at hostb.pivotal.com /home/dsmith/dir2 The backup may be incomplete. The following disk stores are not online: DiskStore at hostc.pivotal.com /home/dsmith/dir3
A member that fails to complete its backup is noted in this ending status message and leaves the file INCOMPLETE_BACKUP in its highest level backup directory.
Below is the structure of files and directories backed up in a distributed system:
2011-05-02-18-10 /: pc15_8933_v10_10761_54522 2011-05-02-18-10/pc15_8933_v10_10761_54522: config diskstores README.txt restore.sh 2011-05-02-18-10/pc15_8933_v10_10761_54522/config: gemfirexd.properties 2011-05-02-18-10/pc15_8933_v10_10761_54522/diskstores: GFXD_DD_DISKSTORE 2011-05-02-18-10/pc15_8933_v10_10761_54522/diskstores/GFXD_DD_DISKSTORE: dir0 2011-05-02-18-10/pc15_8933_v10_10761_54522/diskstores/GFXD_DD_DISKSTORE/dir0: BACKUPGFXD-DD-DISKSTORE_1.crf BACKUPGFXD-DD-DISKSTORE_1.drf BACKUPGFXD-DD-DISKSTORE_2.crf BACKUPGFXD-DD-DISKSTORE_2.drf BACKUPGFXD-DD-DISKSTORE.if
The restore script (restore.sh or restore.bat) copies files back to their original locations. You can do this manually if you wish:
The restore copies these back to their original location. | http://gemfirexd.docs.pivotal.io/docs/1.4.0/userguide/reference/gfxd_commands/gfxd-backup.html | 2019-05-19T15:46:44 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
Upgrade Guide¶
Kubernetes Cilium Upgrade¶
Cilium should be upgraded using Kubernetes rolling upgrade functionality in order to minimize network disruptions for running workloads.
The safest way to upgrade Cilium to version “v1.0” is by updating the
RBAC rules and the DaemonSet file provided, which makes sure the ConfigMap,
initially set up by
cilium.yaml, already stored in the cluster will not be
affected by the upgrade.
Both files are dedicated to “v1.0” for each Kubernetes version.
$
You can also substitute the desired Cilium version number for vX.Y.Z in the command below, but be aware that copy of the spec file stored in Kubernetes might run out-of-sync with the CLI flags, or options, specified by each Cilium version.
kubectl set image daemonset/cilium -n kube-system cilium-agent=cilium/cilium:vX.Y.Z
To monitor the rollout and confirm it is complete, run:
kubectl rollout status daemonset/cilium -n kube-system
To undo the rollout via rollback, run:
kubectl rollout undo daemonset/cilium -n kube-system
Cilium will continue to forward traffic at L3/L4 during the roll-out, and all endpoints and their configuration will be preserved across the upgrade rollout. However, because the L7 proxies implementing HTTP, gRPC, and Kafka-aware filtering currently reside in the same Pod as Cilium, they are removed and re-installed as part of the rollout. As a result, any proxied connections will be lost and clients must reconnect.
Downgrade¶
Occasionally, when encountering issues with a particular version of Cilium, it may be useful to alternatively downgrade an instance or deployment. The above instructions may be used, replacing the “v1.0” version with the desired version.
Particular versions of Cilium may introduce new features, however, so if Cilium is configured with the newer feature, and a downgrade is performed, then the downgrade may leave Cilium in a bad state. Below is a table of features which have been introduced in later versions of Cilium. If you are using a feature in the below table, then a downgrade cannot be safely implemented unless you also disable the usage of the feature. | https://cilium.readthedocs.io/en/v1.0/install/upgrade/ | 2019-05-19T15:01:40 | CC-MAIN-2019-22 | 1558232254889.43 | [] | cilium.readthedocs.io |
Ghost comes with automatic RSS feeds for your content, but you can also create a custom feed using the flexible dynamic routing layer to support specific content types, like videos and podcasts.
Adding
/rss/ to most URLs in Ghost produces an automatically generated RSS feed for your content. If you're publishing a podcast on your Ghost site then you'll probably want to create a custom RSS feed to distribute your podcast episodes to places like iTunes.
This tutorial walks you through how to create a custom RSS route using dynamic routing, as well as a Handlebars template for your RSS feed that is fully optimised for a podcast and iTunes.
Add a new route for your RSS feed
The first thing to do is add a new route where your RSS feed will exist in the using the dynamic routing layer in Ghost. Download the most up to date version of your
routes.yaml file from Ghost Admin settings menu and open it in your code editor of choice.
For the purposes of this example, we're adding this to our podcast collection - here's what it looks like:
routes: /podcast/rss/: template: podcast/rss content_type: text/xml
Note that this assumes we already have a collection in place for the podcast content, which would appear under collections in the
routes.yaml file like so:
collections: /blog/: permalink: /blog/{slug}/ filter: tag:blog+tag:-podcast /podcast/: permalink: /podcast/{slug}/ filter: tag:podcast+tag:-blog
If you haven't yet setup your podcast content in Ghost, then you can use this tutorial on content collections as a guide.
Create a new template for an iTunes RSS feed
Now you've updated the
routes.yaml, you'll need to create a new Handlebars template in your theme. This requires a little bit of coding, but you can use the example provided in this tutorial as a starting point.
Define your podcast channel for iTunes
In order for iTunes to understand your RSS feed, there a few basic requirements that you need to include at the start of your new template, which defines the type of RSS feed and some basic information about your podcast. Below is an example of the required information for an iTunes podcast, which you can copy into your own template and replace with necessary information:
<?xml version="1.0" encoding="UTF-8"?> <rss version="2.0" xmlns: <channel> <title>{{@blog.title}}</title> <link>{{@blog.url}}</link> <description>{{@blog.description}}</description> <language>{{lang}}</language> <copyright>{{@blog.title}} Copyright {{date <lastBuildDate>{{date <itunes:category</itunes:category> {{#get "posts" filter="tag:podcast" include="tags,authors" as |episode|}} {{#foreach episode}} <item> <title>{{title}}</title> <link>{{url{{id}}</guid> <category><![CDATA[ {{primary_tag}} ]]></category> <description>{{custom_excerpt}}</description> <content:encoded><![CDATA[ {{content}} ]]></content:encoded> <enclosure url="{{og_description}}" length="0" type="audio/mpeg"/> <itunes:subtitle>{{custom_excerpt}}</itunes:subtitle> <itunes:summary><![CDATA[ {{content}} ]]></itunes:summary> </item> {{/foreach}} {{/get}} </channel> </rss>
You can copy and paste this exact implementation for your site, then go ahead and customise it to suit your needs! There are a couple of static variables which need to be adjusted, like your name, email, and the iTunes category you want to appear in.
There's also one very small hack/workaround that makes all of this work: The feed requires that you specify the podcast mp3/audio file URL for each episode. Because Ghost doesn't have custom fields, we can repurpose the
Facebook Description field for each post to store the link to the audio file.
So wherever you upload your audio to, just paste the provided URL into the Facebook Description and you should be all set.
Update
routes.yaml and your active theme
Once you're happy with your work, upload a new version of
routes.yaml and update your active theme in Ghost Admin to enable your new RSS feed. Once you have done this, you should be able to visit the feed at
/podcast/rss/ to ensure it's working as desired and submit it to iTunes.
Summary
Success! You should have implemented your very own custom iTunes RSS feed for podcast content on your Ghost site using dynamic routing and a Handlebars template. Don't forget you can get more in-depth information about the Handlebars theme layer in Ghost in the docs. | https://docs.ghost.org/tutorials/custom-rss-feed/ | 2019-05-19T14:43:08 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.ghost.org |
Action disabled: source
Video Tutorials and Audio Podcasts
This page has links to various videos and podcasts.
Some videos are made using a system called “Loom” The links here will directly take you to getloom.com In their player, you can take the mouse to the bottom right corner and increase the speed of the video; if you think it is going too slow
-
Press F1 inside the application to read context-sensitive help directly in the application itself
← ∈
Last modified: le 2019/04/21 19:29 | http://docs.teamtad.com/doku.php/video_tutorials?do=edit | 2019-05-19T15:32:39 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.teamtad.com |
You use JDBC connection properties, connection boot properties, and Java system properties to configure GemFire XD members and connections.
The names of GemFire XD system properties always include the "gemfirexd." connection and boot property names when you define those properties as Java system properties. The Prefix row in each property table lists a prefix value ("gemfirexd." or "gemfire.") when one is required. Do not use an indicated prefix when you specify the property in a connection string or with the FabricServer API.
If no prefix is specified, use only the indicated property name in all circumstances. For example, use "host-data" whether you define this property in gemfirexd.properties, as a Java system property, or as a property definition for FabricServer.
s1=hello there s2=\u3053\u3093\u306b\u3061\u306fFor example, in gemfirexd.
The following table lists all of the GemFire XD configuration properties names and provides links to each property reference page. | http://gemfirexd.docs.pivotal.io/docs/1.3.1/userguide/reference/configuration/ConnectionAttributes.html | 2019-05-19T15:50:48 | CC-MAIN-2019-22 | 1558232254889.43 | [] | gemfirexd.docs.pivotal.io |
Debugging LoginDebugging Login
Admin RouterAdmin Router
Admin Router hands over login requests to the IAM. Confirm that the request is received by Admin Router on any of the master nodes first using the following command.
sudo journalctl -u dcos-adminrouter.service
Identity and Access ManagerIdentity and Access Manager
The IAM is the only entity emitting DC/OS authentication tokens. To debug login problems, check the IAM (Bouncer) on the masters using the following command.
sudo journalctl -u dcos-bouncer.service | https://docs.mesosphere.com/1.13/security/oss/login/troubleshooting/ | 2019-05-19T14:22:38 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mesosphere.com |
GoogleSiteMap
Last edited by YJ Tso on Apr 13, 2016.
What is GoogleSiteMap?
GoogleSiteMap is a snippet that will display a Google-customized SiteMap for your site.
Requirements
- MODx Revolution 2.2.x or later
- PHP5.4 or later
History and Info
In 2016 GoogleSiteMap was completely re-written by YJ Tso (@sepiariver) based on blazing-fast sitemap code by Garry Nutting (@garryn), after it was found that the legacy Snippet would time-out when more than several thousand nodes needed to be generated.
The trade-off was that some of the legacy features could not be supported. An attempt was made to maintain backwards compatibility, by calling the legacy Snippet if a legacy feature is required.
The legacy GoogleSiteMap Snippet was originally written by Shaun McCormick (splittingred) as a Snippet to display a Google SiteMap, and first released on June 23rd, 2009.
You can view the roadmap here.
Download
It can be downloaded from within the MODx Revolution manager via [Package Management], or from the MODx Extras Repository, available here:
Development and Bug Reporting
GoogleSiteMap source code is managed in GitHub, and can be found here:
Usage
GoogleSiteMap can be called via the Snippet tags.
Snippets
GoogleSiteMap comes with two snippets:
Examples
Display a Google SiteMap for tens of thousands of Resources.
[[!GoogleSiteMap]]
Display a Google SiteMap for a more modest number of Resources, using a custom item template Chunk.
[[!GoogleSiteMap? &itemTpl=`myCustomTpl`]]
See Also
- GoogleSiteMap.GoogleSiteMap
- GoogleSiteMapVersion1
- GoogleSiteMap.Roadmap
Suggest an edit to this page on GitHub (Requires GitHub account. Opens a new window/tab) or become an editor of the MODX Documentation. | https://docs.modx.com/extras/revo/googlesitemap | 2019-05-19T15:26:56 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.modx.com |
Zoom Down
What?
Zoom Down is the same as the Zoom Window feature found in many CAD systems – it simply zooms so that the rectangle that you have dragged out on the screen becomes the next view that you would see.
This means that you see lesser of your of your project but now you see the portion in view bigger
How?
Click on the 1st button on the right vertical view bar to switch the mouse into Zoom down mode | http://docs.teamtad.com/doku.php/actzoomdn | 2019-05-19T15:28:55 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.teamtad.com |
Workflows¶
In order to run the example workflows, make sure that you have datmo properly installed with the latest stable or development version. You can install the latest stable version with the following command:
$ pip install datmo
Listed below are actions you might want to take with Datmo. For each we have listed if there are any example for each type of flow. You can navigate to the specific flow folder to find the exact instructions for each example.
Creating a Snapshot¶
Note: All of the following flows involve using the CLI to some extent, even in conjunction with the python SDK. The standalone CLI version, while the most manual method, is compatible with any language and files, even those not listed here.
You can view the latest examples on the master branch on Github | https://datmo.readthedocs.io/en/latest/workflows.html | 2019-05-19T14:18:08 | CC-MAIN-2019-22 | 1558232254889.43 | [] | datmo.readthedocs.io |
. Release build - McAfee® Drive Encryption 7.2.3.29 This release was developed for use with: McAfee® ePolicy Orchestrator® (McAfee® ePO™) 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.9.0, 5.9.1 McAfee® ePO Deep Command 2.2.0, 2.3.0, 2.4.0 Purpose The Drive Encryption 7.2.3 release fixes problems that were reported in previous versions. Rating High Priority – McAfee rates this release as a high priority for all environments to avoid a potential business impact. Apply this update as soon as possible. For more information about patch ratings, see KB51560. | https://docs.mcafee.com/bundle/drive-encryption-7.2.3-release-notes-epolicy-orchestrator/page/GUID-CBDA9699-A34E-46F0-AAEE-4F1B74FE95EB.html | 2019-05-19T14:27:31 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mcafee.com |
Using 1:1 NAT on a WAN IP Address¶
Yes, 1:1 NAT may be used from the WAN IP address to an internal IP address. But be aware that this maps every port and services on the firewall will no longer be reachable from the outside. To reach the firewall from the outside, port forward entries must be added to negate the 1:1 NAT for the specific ports on the firewall to be reached.
If there is only one WAN IP, and a Linksys (or other SOHO router) style “DMZ” [1] configuration is being attempted, consider only forward the ranges of ports and protocols that are absolutely necessary, and use appropriate firewall rules to protect access to these ports.
Footnotes | https://docs.netgate.com/pfsense/en/latest/nat/using-1-1-nat-on-a-wan-ip-address.html | 2019-05-19T15:00:12 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netgate.com |
Authentication
Once you register; an email would be sent to you for confirmation. You must confirm the account.
The confirmed credentials are verified at the time of installing TAD and also at the time of using.
When TAD starts the very first time; it would detect that your credentials have not yet been verified. You would be asked to provide credentials to verify that you are indeed a registered user. Filling the registration form also gives you an install code which is needed for the installation.
Please ensure that you do a proper Registration first! Also, do not skip the verification email sent to your email address. Your email MUST be confirmed in order to use TAD.
Once you have registered, this is what you should do when you start TAD in the dialog that is shown to you:
If you are registered at our own TAD Community Chat
Provide the email address and the password that you used for your account at the TAD Community chat.
The following is not yet active. Testing is going on
If you are registered at the externally hosted Zulip system for TAD
Provide the email address you used at Zulip. And , MOST importantly, in the field which asks for the password, provide the API Key that for your Zulip account. Do NOT give the password you used at Zulip. The API Key is available from your account settings at Zulip.
Note
The same process of verifying your credentials would happen once every few weeks (approx once a month)
Press F1 inside the application to read context-sensitive help directly in the application itself
← ∈ | http://docs.teamtad.com/doku.php/authentication | 2019-05-19T15:28:22 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.teamtad.com |
Configure the Legacy LDAP Implementation
If you prefer to use the legacy Acrolinx LDAP implementation instead of the JAAS LDAP login module, you must update your overlay of the coreserver.properties file with two sets of LDAP configuration properties:
- Properties that define the connection to the LDAP server.
- Properties that define how users are authenticated.
Configuring the Legacy LDAP Server Connection Properties
The LDAP server connection properties configure the legacy LDAP authentication module that is implemented by Acrolinx. The preferred way to configure LDAP is with JAAS . However, you can still use these legacy properties if JAAS does not suit your requirements.
To configure the LDAP server connection properties, follow these steps:
- Open your overlay of the core server properties file.
You find the overlay for the core server properties file in the following location:
%ACROLINX_CONFIGURATION_ROOT%\server\bin\coreserver.properties
Add the following properties:
TABLE 1. LDAP SERVER CONNECTION PROPERTIES
The following example contains sample values for all of the properties described in the previous table:
authentication.useExternal=true authentication.external=ldap authentication.external.ldap.protocol=ldap,ldaps authentication.external.ldap.ldapUrl=ldap://ldapserver.company.com authentication.external.ldap.userName=acrolinx authentication.external.ldap.password=test authentication.external.ldap.base=directoryname=maindir,companydivision=company.com authentication.external.ldap.secureUrl=ldaps://ldapserver.company.com
- Save your changes and restart the core server.
Configuring Distinguished Name Detection
Most LDAP servers require a distinguished name (DN) to authenticate a user. The DN is the unique identifier for each an entry in the directory.
Example of a DN for a user:
CN=Alex Lee,CN=User,DC=company,DC=local
Users can enter their distinguished name as their Acrolinx user ID, or you can configure the Acrolinx Server to resolve the DN for each user based on another identifier that a user enters as their Acrolinx user ID.
If your LDAP server does not require a DN for authentication, or if you prefer your users to enter their distinguished names as their user ID, you must disable distinguished name detection.
To configure DN detection, follow these steps:
- Open your overlay of the core server properties file.
You find the overlay for the core server properties file in the following location:
%ACROLINX_CONFIGURATION_ROOT%\server\bin\coreserver.properties
- Add the following properties:
authentication.external.ldap.distinguishedNameEntrySearchKey=<FIELD_NAME>
This property defines the type of information that users must enter as their user ID when logging on to Acrolinx. The information entered for the Acrolinx user ID is then used to find the correct LDAP entry for the user.
For example, if your users enter their e-mail address as the Acrolinx user ID, add the line:
authentication.external.ldap.distinguishedNameEntrySearchKey=e-mailaddress
In this example, when a user logs on to Acrolinx with the user ID
[email protected], the Acrolinx server searches for LDAP entries where the field e-mailaddress has the value
[email protected] finds the entry for "Jane Smith".
authentication.external.ldap.distinguishedNamePattern
This property defines the pattern of the DN used to log on to your LDAP server. This pattern consists of fields in the LDAP user entry. The variables in the pattern take the following form, separated by a comma:
<FIELD_NAME>=%<FIELD_NAME>%
During authentication, the variables are replaced with the value of field found in the user entry. For DN detection to work correctly, every field in the pattern must only occur once within each user entry.
For example, if your login DN consists of the user identifier, country, and employee code, add the line:
authentication.external.ldap.distinguishedNamePattern=uid=%uid%,country=%country%,employeecode=%employeecode%
In a previous example, the e-mail address of the user "Jane Smith" was used to detect her LDAP entry. When detecting the login DN for Jane Smith, the Acrolinx Server looks in the entry for her
uid,country,and
employeecodeand generates the login DN
uid=jsmith,country=US,employeecode=JS153672.
By default, the server escapes any special characters in the resolved field values. If a field value contains special characters that must be treated literally, you must prefix the field name with an equals sign.
For example, suppose you have a user entry that contains the following field and value pair
seeAlso=ou=system
The field contains a reference to the organizational unit field "ou". This reference uses the equals sign to indicate that "system" is the value for the "ou" field. However, the equals sign is also part of the value for the seeAlso field and would be escaped by default.
To ensure that the equals sign is correctly interpreted as being part of the reference, you must add the seeAlso field using the following syntax.
seeAlso=%=seeAlso%
The equals sign before the field name variable ensures that all special characters in the resolved field values are treated literally.
- Save your changes and restart the core server.
Disabling Distinguished Name Detection
You might want to disable automatic distinguished name (DN) detection if your LDAP server does not require a DN for authentication, or if you prefer your users to enter their DN when logging in.
To disable distinguished name detection, follow these steps:
- Open your overlay of the core server properties file.
You find the overlay for the core server properties file in the following location:
%ACROLINX_CONFIGURATION_ROOT%\server\bin\coreserver.properties
Add the following property:
authentication.external.ldap.useUserNameMapping=false
- Save your changes and restart the core server.
Configuring Group Membership
You configure LDAP user authentication to restrict Acrolinx Server access to members of specific LDAP groups.
To configure authentication based on group membership, follow these steps:
- Open your overlay of the core server properties file.
You find the overlay for the core server properties file in the following location:
%ACROLINX_CONFIGURATION_ROOT%\server\bin\coreserver.properties
- Add the following properties:
authentication.external.ldap.requireGroupMembership=true
to activate group membership authentication.
authentication.external.ldap.groupAttributes
to enter a semicolon-separated list of LDAP entry attributes which contain information about which groups a user belongs to:
Example:
authentication.external.ldap.groupAttributes=maingroups;secondarygroups
authentication.external.ldap.permittedGroups
to enter a semicolon-separated list of the distinguished names for each permitted group. The user must be a member of at least one of the listed groups. Example:
authentication.external.ldap.permittedGroups=commonName=TechDoc,orgUnit=usergroup,org=company.com;commonName=Marketing,orgUnit=usergroup,org=company.com .
- Save your changes and restart the core server.
- After you've configured the legacy LDAP settings, configure the general authentication settings. | https://docs.acrolinx.com/coreplatform/latest/en/acrolinx-on-premise-only/server-security/configure-the-legacy-ldap-implementation | 2019-05-19T15:03:39 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.acrolinx.com |
3. Working toolbar¶
Working toolbar
The Working toolbar allows for displaying the Input image, creating temporary ROIs and classification previews.
The functions are described in detail in the following paragraphs, using these conventions:
[I] = Input text
= Configuration stored in the active project of QGIS
= Configuration stored in QGIS registry
3.1. Image control¶
: show the Main Interface Window;
: zoom the map to the extent of Input image;
RGB=
: use the button to show/hide the Input image in the map; from the list select a Color Composite that is applied to the Input image; new color composites can be entered typing the band numbers separated by
-or
;or
,(e.g. RGB = 4-3-2 or RGB = 4;3;2 or RGB = 4,3,2);
: display the input image stretching the minimum and maximum values according to cumulative count of current map extent;
: display the input image stretching the minimum and maximum values according to standard deviation of current map extent;
3.2. Temporary ROI¶
A temporary ROI is a temporary polygon displayed in the map, which can be saved permanently in the Training input. A temporary ROI can be drawn manually or using a Region Growing Algorithm.
: zoom the map to the extent of temporary ROI;
ROI: use the button to show/hide the temporary ROI and the Training input in the map;
: activate the pointer to create a temporary ROI by drawing a polygon in the map; left click on the map to define the ROI vertices and right click to define the last vertex closing the polygon; press the keyboard button
CTRLto add a multipart polygon; press the keyboard buttons
CTRL + Zfor removing the last multipart polygon;
: activate the pointer to create a temporary ROI using the region growing algorithm; left click on the map for creating the ROI; right click on the map for displaying the spectral signature of a pixel of the Input image in the Spectral Signature Plot; press the keyboard button
CTRLto add a multipart polygon (new parts are not created if overlapping to other parts); press the keyboard buttons
CTRL + Zfor removing the last multipart polygon;
: create a temporary ROI using the region growing algorithm at the same seed pixel as the previous one; it is useful after changing the region growing parameters;
- Region growing parameters: the following parameters are required for the ROI creation using a region growing algorithm on the Input image:
- Dist
: set the interval which defines the maximum spectral distance between the seed pixel and the surrounding pixels (in radiometry unit);
- Min
: set the minimum area of a ROI (in pixel unit); this setting overrides the
Range radiusuntil the minimum ROI size is reached; if
Rapid ROI on bandis checked, then ROI will have at least the size defined
Min ROI size; if
Rapid ROI on bandis unchecked, then ROI could have a size smaller than
Min ROI size;
- Max
: set the maximum width of a ROI (i.e. the side length of a square, centred at the seed pixel, which inscribes the ROI) in pixel unit;
3.3. Classification preview¶
Classification preview allows for displaying temporary classifications (i.e. classification previews). Classification previews are useful for testing the algorithm in a small area of the Input image, before classifying the entire image which can be time consuming (see Classification output).
Classification preview is performed according to the parameters defined in Classification algorithm.
In addition to the classification raster, an Algorithm raster can be displayed, which is useful for assessing the distance of a pixel classified as
class X from the corresponding spectral signature X.
In Classification previews, black pixels are distant from the corresponding spectral signature (i.e. probably a new ROI, or spectral signature, should be collected in that area) and white pixels are closer to the corresponding spectral signature (i.e. probably the spectral signature identifies correctly those pixels).
After the creation of a new preview, old previews are placed in QGIS Layers inside a layer group named
Class_temp_group (custom name can be defined in Temporary group name) and are deleted when the QGIS session is closed.
WARNING: Classification previews are automatically deleted from disk when the QGIS session is closed; a QGIS message (that can be ignored) could ask for the path of missing layers when opening a previously saved project.
: zoom the map to the extent of the last Classification preview;
Preview: use the button to show/hide the last Classification preview in the map;
: activate the pointer for the creation of a Classification preview; left click the map to start the classification process and display the classification preview; right click to start the classification process and show the Algorithm raster of the preview;
: create a new Classification preview centred at the same pixel as the previous one;
- T
: change dynamically the classification preview transparency, which is useful for comparing the classification to other layers;
- S
: size of the preview in pixel unit (i.e. the side length of a square, centred at the clicked pixel);
: remove from QGIS the classification previews that are archived in the Class_temp_group; | https://semiautomaticclassificationmanual-v5.readthedocs.io/en/latest/working_toolbar.html | 2019-05-19T14:56:21 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['_images/toolbar.jpg', '_images/toolbar.jpg'], dtype=object)] | semiautomaticclassificationmanual-v5.readthedocs.io |
Developer Operations (DevOps) is the part of the development cycle related to building and deploying application artifact changes from a build machine to a test system and thence to production.
LANSA has implemented the part of the DevOps cycle that takes a built object and deploys it into a fully working test or production system. This has been implemented as the simple action 'Deploy' on a Repository List.
This feature leverages the LANSA Deployment Tool technology for packaging table and Web Page changes and installing them on a target system. The transport mechanism used is a git repository.
The diagram below shows the use of git for deploying from the IDE to the target system. All the repositories are git clones. The IDE Deploy action first packages up the tables and web pages using the Deployment Tool. The current changes are then committed to git and pushed to GitHub. All those tasks are performed by the Deploy action.
GitHub then fires off a web hook on the target system's Git Deploy Hub (Hub). The Hub pulls down the changes from GitHub, checks out the current branch and then runs the Package Install to deploy the tables and web pages. All the other LANSA object types like server modules and reusable parts are handled by git alone as simple file changes.
Only 64-bit Windows environments are supported.
Note that the git repository used here is for distributing the executable artifacts, not the source code. Git is used here as a highly efficient transport mechanism. Git also has a very high market share. There are few, if any, Windows developers who have not used a git repository, even if it's only on GitHub. The source code would be kept in a different repository. They are completely separate and indeed the source code repository could be in an entirely different type of VCS.. | https://docs.lansa.com/14/en/lansa022/content/lansa/vldtooldevops_0015.htm | 2019-05-19T14:52:54 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.lansa.com |
Connecting from Apple iOS Devices with OpenVPN¶. The most recent version of the OpenVPN Client Export Package on pfSense may be used to export an Inline Configuration, and then transfer the resulting .ovpn file to the target device. Then use iTunes to transfer the files into the app or e-mail it to the device. Another option is to use a cloud sync tool, dropbox for example.
Getting The Configuration Onto the Device¶
E-mail the VPN file to the device, open the Mail app and then open the attachment and click OpenVPN from there.
Using cloud sync application like dropbox, googledrive, box, etc. Same thing just click open with OpenVPN from that application on the .ovpn file.
Using iTunes to transfer the configuration to the iOS device is rather simple.
- Make sure the most recent version of the OpenVPN Client Export Package is loaded on pfSense
- Export the Inline Configuration file for the VPN
- Connect the iOS device to a computer and open iTunes
- Find and install the OpenVPN Connect app
- Inside of iTunes, click the device icon in the toolbar
- Click on the Apps tab to open the list of apps on the device
- Under File Sharing, click OpenVPN
- Drag and drop the .ovpn file into this area, or click Add and locate the file that way
Importing the Configuration¶
- Open the OpenVPN Connect app and it should offer to import the profile
- Click the + button, and the profile should import
Using the VPN¶
- Enter a username/password if authentication is enabled on the VPN. Toggle save if to store the password
- Slide the bottom slider to On and it will connect
- If needed, the status indicator (Disconnect/Connect) will show the logs when tapped
- Slide to Off to disconnect
Other Notes¶
This OpenVPN client does not fully support IPv6 in our current configuration, but may in the future. It has some other limitations, such as not supporting tap mode or the tls-remote option, but ultimately it works well for what most people need.
If manually building a configuration file for this client, it requires either an inline configuration style or separate CA, Cert, Cert Key, and if used, TLS key files. It does not appear to accept .p12 files containing the CA and client certificate/keys.
If attempting to import a previously exported inline style config, first edit the file and remove any lines containing “[inline]” and also “tls-remote”. It should then be possible to import the configuration file. | https://docs.netgate.com/pfsense/en/latest/vpn/openvpn/connecting-from-apple-ios-devices-with-openvpn.html | 2019-05-19T14:38:38 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netgate.com |
Use fields to search
You cannot take advantage of the advanced search features in Splunk Enterprise without understanding what fields are and how to use them.
About fields
When you look at the Data Summary in the search view, you see tabs for the Hosts, Sources, and Sourcetypes that described the type of data you added to your Splunk index.
These are also default fields (
host,
source,
sourcetype) that Splunk Enterprise extracts from the data during indexing. They help to specify exactly which events you want to retrieve from the index.
What are fields?
Fields exist in machine data in many forms. Often, a field is a value (with a fixed, delimited position on the line) or a name and value pair, where there is a single value to each field name. A field can be multivalued, that is, it can appear more than once in an event and has a different value for each appearance.
Some examples of fields are
clientip for IP addresses accessing your Web server,
_time for the timestamp of an event, and
host for domain name of a server. One of the more common examples of multivalue fields is email address fields. While the
From field will contain only a single email address, the
To and
Cc fields have one or more email addresses associated with them.
In Splunk Enterprise, fields are searchable name and value pairings that distinguish one event from another because not all events will have the same fields and field values. Fields let you write more tailored searches to retrieve the specific events that you want.
See "About fields" in the Knowledge Manager Manual.
Extracted fields
Splunk extracts fields from event data at index-time and at search-time. See "Index time versus search time" in Managing Indexers and Clusters of Indexers.
Default and other indexed fields are extracted for each event that is processed when that data is indexed. Default fields include
host,
source, and
sourcetype. For a list of the default fields, see "Use default fields" in the Knowledge Manager Manual.
Splunk Enterprise extracts different sets of fields, when you run a search. See "When Splunk Enterprise extracts fields" in the Knowledge Manager Manual.
You can also use the field extractor to create custom fields dynamically on your local Splunk instance. The field extractor lets you define any pattern for recognizing one or more fields in your events. See "Build field extractions with the field extractor" in the Knowledge Manager Manual.
Find and select fields
1. Go to the Search dashboard and type the following into the search bar:
sourcetype="access_*"
fieldname="fieldvalue" . Field names are case sensitive, but field values are not. You can use wildcards in field values. Quotes are required when the field values include spaces.
This search indicates that you want to retrieve only events from your web access logs and nothing else.
This search uses the wildcard
access_* to match any Apache web access
sourcetype, which can be access_common, access_combined, or access_combined_wcookie.
2. In the Events tab, scroll through the list of events.
If you are familiar with the access_combined format of Apache logs, you, such as Arcade, Simulation, productId, categoryId, purchase, addtocart, and so on.
To the left of the events list is the Fields sidebar. As Splunk Enterprise retrieves the events that match your search, the Fields sidebar updates with Selected fields and Interesting fields. These are the fields that Splunk Enterprise extracted from your data.
Selected Fields are the fields that appear in your search results. The default fields host, source, and sourcetype are selected. These fields appear in all the events. The numbers next to the selected fields represent the number of different values for those fields that appear in the events returned from your search.
You can hide and show the fields sidebar by clicking Hide Fields and Show Fields.
3. Click All Fields.
In the Select Fields dialog box, you can select the fields to show in the events list.
You see more default fields, which includes fields based on each event's
timestamp (everything beginning with
date_*), punctuation (
punct), and location (
index).
Other field names apply to the web access logs. For example,
clientip, method, and
status. These are not default fields. They are extracted at search time.
Other extracted fields are related to the Buttercup Games online store. For example,
action,
categoryId, and
productId.
4. Select
action,
categoryId, and
productId and close the Select Fields dialog box.
The three fields appear under Selected Fields in the sidebar. The selected fields appear under the events in your search results if they exist in that particular event. Every event might not have the same fields.
The fields sidebar displays the number of values that exist for each field. These are the values that Splunk Enterprise indentifies from the results of your search.
5. Under Selected Fields, click the
action field.
This opens the field summary for the action field.
In this set of search results, Splunk Enterprise found five values for
action, and that the
action field appears in 49.9% of your search results.
6. Close this window and look at the other two fields you selected,
categoryId (what types of products the shop sells) and
productId (specific catalog number for products).
7. Scroll through the events list.
If you click on the arrow next to an event, it opens up the list of all fields in that event.
Use this panel to view all the fields in a particular event and select or deselect individual fields for an individual event.
Run more targeted searches
The following are search examples using fields.
Example1: Search for successful purchases from the Buttercup Games store.
sourcetype=access_* status=200 action=purchase
This search uses the HTTP status field,
status, to specify successful requests and the
action field to search only for purchase events.
You can search for failed purchases in a similar manner using
status!=200, which looks for all events where the HTTP status code is not equal to 200.
sourcetype=access_* status!=200 action=purchase
Example 2: Search for general errors.
(error OR fail* OR severe) OR (status=404 OR status=500 OR status=503)
This doesn't specify a source type. The search retrieves events in both the secure and web access logs.
Example 3: Search for how many simulation games were bought yesterday.
Select the Preset time range, Yesterday, from the time range picker and run:
sourcetype=access_* status=200 action=purchase categoryId=simulation
The count of events returned are the number of simulation games purchased.
To find the number of purchases for each type of product sold at the shop, run this search for each unique categoryId. For the number of purchases made each day of the previous week, run the search again for each time range.
Next steps
Fields also let you take advantage of the search language, create charts, and build charts. Continue to "Use the search language" to learn how to use the search! | https://docs.splunk.com/Documentation/Splunk/6.3.11/SearchTutorial/Usefieldstosearch | 2019-05-19T14:53:26 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Client/Server Configuration
Several applications allow for Client/Server connections. In order to remotely connect to your cluster on the Rescale platform using the software application client, you will need to set up and run the desired job. Following the job set up you will either need to begin a Rescale Desktop session your own workstation, that will directly connect to the server without SSH. Finally, you will use the desired software GUI to connect to this server. Using this feature, you can connect to a running job submitted through Rescale platform. carry out your workflow, and disconnect/shut-down when finished.
Note that cluster connection in this fashion is available only with certain softwares, and the procedure for connecting with these program varies slightly.
The following links will take you to pages that provide comprehensive details regarding Client/Server configuration for the available applications : | https://docs.rescale.com/articles/ssh-client-server-configuration/ | 2019-10-14T08:36:37 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.rescale.com |
Tag¶
Tags are not mandatory but make it easier to structure news records.
Add a tag¶
- Switch to any sysfolder in the backend of your TYPO3 installation
- Click on the + icon.
- Select “Tag” which can be found in the section News system.
Tip
A new tag can also be directly created within a news record. TODO link | https://docs.typo3.org/p/georgringer/news/master/en-us/UsersManual/Records/Tag/Index.html | 2019-10-14T09:30:22 | CC-MAIN-2019-43 | 1570986649841.6 | [array(['../../../_images/record-tag.png', 'img-record-tag'], dtype=object)] | docs.typo3.org |
WM_CAP_DRIVER_CONNECT message
The WM_CAP_DRIVER_CONNECT message connects a capture window to a capture driver. You can send this message explicitly or by using the capDriverConnect macro.
WM_CAP_DRIVER_CONNECT wParam = (WPARAM) (iIndex); lParam = 0L;
Parameters
iIndex
Index of the capture driver. The index can range from 0 through 9.
Return Value
Returns TRUE if successful or FALSE if the specified capture driver cannot be connected to the capture window.
Remarks
Connecting a capture driver to a capture window automatically disconnects any previously connected capture driver. | https://docs.microsoft.com/en-us/windows/win32/multimedia/wm-cap-driver-connect | 2019-10-14T09:50:08 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.microsoft.com |
The kwxsync command synchronizes issue status updates and comments, along with the ID of the user who made the changes, among projects that you specify. All of the updates are merged, so that identical issues in multiple projects have an identical history.
This is useful when projects share source code, as in the case of branches. Developers need to cite detected issues only once for each source file; kwxsync can then apply their changes to other projects that contain the same source file. You can also use the --storage option to run kwxsync incrementally; for more information, see the example below.
The projects can be in different projects_root directories, as well as on different servers. A server is defined as a distinct combination of a machine name and a port number.
When you attempt to perform the synchronization, kwxsync proceeds only if the user has the permissions to do so on each of the specified machines. If the user does not have the proper permissions, the system will log a list of each of the machines on which it did not pass along with the user name that did not have the proper permissions, and the kwxsync process will exit.
When connecting to a machine that does not have an access control method in place, and where there is no entry for the target machine in the ltoken file, the name of the user that is logged into.
Between synchronizations, it is possible for multiple users to set different statuses on a single defect. This is more common in organizations that synchronize many projects and have a large number of users, but it can occur in any organization. Klocwork enables organizations to elegantly resolve status conflicts during synchronization by referencing a status priority file. Normally, during synchronization, the system uses the timestamps of each comment and status change to create a shared chronological history among the projects. The status with the most recent timestamp is the status applied across all projects. When you use the status priority option, if an issue has had multiple statuses set since the last synchronization, the system compares each of the statuses with the priorities specified in the file and if needed, applies the highest priority to the issue. Here's how it works:
Within the status priority file, the lowest priority issue status has the lowest number and the highest priority issue status has the highest number. The status priority file must be a UTF-8 text file that
For example, if you create a file that contains the following syntax, the system will apply Fix as the final status when resolving a conflict with any other status:
FIX=8 FIX_IN_LATER_RELEASE=7 FIX_IN_NEXT_RELEASE=6 DEFER=5 FILTER=4 IGNORE=3 NOT_A_PROBLEM=2 ANALYZE=1
The following image shows the status history of an issue that was updated from "Defer" to "Fix" by kwxsync. Each time the system changes an issue status, it adds an entry to the history that includes "<kwxsync>" so that it is clear that the system changed the status during synchronization.
kwxsync [<options>] <project_name_1>|<project_URL_1> <project_name_2>|<project_URL_2> [...]
where <options> are any of the options from the table below
Specify a space-separated list of projects that you want to synchronize. All of the updates in all of the projects you specify will be merged, so that identical issues in all projects will have an identical history.
You can specify project names or project URLs. The project URL has the following format:
Use https:// if a secure Klocwork Server connection has been configured.
kwxsync Project1 Project2 --last-sync "03-04-2015 18:31:00"
In this example, only two projects are specified. The projects share a Klocwork Server that is running on the default host and port (localhost:8080), so we only need to specify the project names. kwxsync will find all issue status updates and comments applied since April 3, 2015 at 6:31 p.m. in Project1, and copy them to Project2; it will also do the same from Project 2 to Project 1.
kwxsync
In this example, we want to synchronize three projects, located on two Klocwork Servers. The easiest way to specify these projects is with the project URL, which clearly identifies the host, port and project name for each. kwxsync synchronizes the three projects, so that all identical issues in the projects share the same history. To perform the synchronization when authentication is enabled, the user running the command must be authenticated on all affected servers and must have the 'Perform cross-project synchronization' permission. The person executing the kwxsync command must have a local ltoken file which contains the authentication data for each affected server. To gain proper authentication, the user must run kwauth for each relevant server; kwauth then stores a token in the user's home directory.
kwxsync --storage my_storage_file
In this example, you specify a file in which to persistently store synchronization information as well as synchronizing three projects. The first time you run the command, the system will synchronize the history of all issue status updates and comments, along with the ID of the user who made the changes. When you run the command subsequently, only the most recent set of status updates and comments are synchronized, which saves a substantial amount of time. By using the --storage option, you can run the tool frequently and ensure that each developer sees an up-to-date list of issues.
kwxsync --status-priority status-priority-file
In this example, you point to a status priority file that enables the system to resolve status conflicts during synchronization by referring to a ranked list of statuses.
When you run the kwxsync command, the system generates an exit code. In release 11.0, the exit codes changed. See the table below for more information. | https://docs.roguewave.com/en/klocwork/current/kwxsync | 2019-10-14T08:22:05 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.roguewave.com |
How to White Label the WordPress Admin
This tutorial explains some ways in which you can modify the WordPress admin to add your logo or custom links using WordPress Core hooks. These can be added to a child theme or extension intended for a single site install –
Caviats: Removing Layers logos, links, text etc in child themes or extensions intended for public distribution is a violation of the GPL license. We do not recommend adding WordPress white-label functions to commercial themes or extensions as it may cause them to fail the submission process. See our tutorials for adding a custom welcome box to the Dashboard and how to add your own onboarding pages instead.
If you design or setup websites for clients, you know how important it is to offer a smooth and branded, yet client-proof administration experience. White-labeling is a term that describes re-branding or removing 3rd party branding from your website’s front and back-end. Doing so increases brand recognition for your design company or service. While it is not possible to completely re-brand a WordPress site without extensive customization, you can white-label or brand the most essential and visible aspects of the WordPress Admin interface without having to resort to custom Admin themes or other coding hacks.
You can take white-labeling one step further and also customize certain aspects of the WordPress Admin through menu tools and role editors to simplfy its use and keep your clients from dangerous options such as updates or code editors. Here we’ll touch on all the ways you can white-label your client site through core hooks. If you are implementing this via a plugin for your site rather than a child theme, take care to use the correct file path constants.
For a look at how to do all of these things using free plugins such as AG Custom Admin and Admin Menu Editor, see our blog post on Easy White-labeling WordPress for your clients.
Disabling Layers Footer Icon
Layers will add a small Layers badge to the footer of your site by default which you can disable from the Customizer under the Footer Styling panel. Of course, you get extra-awesome points for leaving it on and helping spread the word about the Layers framework.
Adding Your Own Credit & Copyright Text
The text footer of Layers contains a default copyright that you may customize from the Customizer under the Footer Styling panel. This field also accepts basic HTML, so adding an icon or logo of your own is as simple as grabbing the file url to your image in the Media Library, and linking it from a standard <img> tag. See How to Add Images to the Footer Copyright area for a detailed tutorial on adding images in the Layers footer.
Replacing WordPress Login and Admin Logos
If your clients or staff are logging in through the traditional WordPress admin page, they will be greeted by a WordPress logo that links to the site main page. This can be confusing for less-savvy users, especially if they click that logo and are whisked away from the login without knowing quite what happened.
If you have created a custom child theme for your client site, the admin logo can be changed with a custom function added to your functions.php that hooks login_head and login_headerurl
And if you want to remove the link:
Change WordPress Admin Bar
Once logged in, your client is going to see the WordPress Dashboard with the default Howdy! greeting and a WordPress branded admin bar.
You can use the same custom function + hook technique as changing the login logo to change the admin icon from your child theme via admin_head.
Change Admin Colors
To do this on your own with a child theme, you would need to write a custom admin.css file, then enqueue it normally from your Child Theme functions:
You can also write the CSS directly to the function and have it output in the admin head:
For a detailed walkthrough on building admin CSS, see Customizing the WordPress Admin by our friends at Tuts+
Change the WordPress Admin Footer
At the bottom of every page in the WordPress admin you will find a Thank You for Creating with WordPress! message and a version number. If your clients don’t know or care that you are using WordPress for the site, or if you just want more control over outgoing links and branding, you can change or remove this easily.
This time hook a custom function to admin_footer_text and output a simple string. Simple HTML can be included to display an icon or link a bit of text.
Customizing the Admin Dashboard
Now that you have customized the visual aspects of the WordPress Admin, you can start thinking about the content you want your clients and site users to see. The WordPress Dashboard is the first content a logged in user sees, and while anyone can customize their own view via the Screen Options menu, you may need to remove existing metaboxes, or add some new ones to deliver the best user experience for your client.
This snippet removes the Quick Press, At a Glance, Blog Feed, Layers AddOns, Layers StoreKit and WooCommerce Status boxes from the Dashboard, leaving only Activity, Comments, Incoming Links and whatever other meta boxes your plugin selection may add. To remove other boxes. you need to grab the widget’s ID using your browser inspector, used as the first argument in the remove_meta_box() function.
Custom Dashboard Widgets
In this snippet, line 2 has you using wp_add_dashboard_widget() to add a widget named Theme Details with an ID of my_dashboard_widget containing the content output by the layers_child_theme_info() custom function you define next.
Note: To remove the Welcome Panel specifically, simply remove the welcome_panel hook:
You can completely customize the Welcome panel rather than remove it by filtering welcome_panel to add your own HTML. See How to Replace the WordPress Welcome Panel in the Admin Dashboard
Customizing the Admin Menu & Client Access
In most cases where you are maintaining your client websites, it is a good idea to restrict access to certain admin areas, both for the safety of the site overall, and simply to reduce confusion for clients unfamiliar with the WordPress Admin. Examples might be hiding links to plugin settings (such as AG Custom Admin), the Appearance > Editor screen or Layers Marketplace. Customizing the Admin menu is a task truly best left to a plugin so you don’t have to maintain code updates along with the frequently morphing WordPress security standards. Both AG Custom Admin and specialized plugins like Admin Menu Editor are actively developed and updated, and can get the job done for free.
If using AG Custom Admin, you may edit all admin menu labels and access rights from the Admin Menu tab of the settings. Checking a menu item will remove it from access for Editors, so this plugin depends on you ensuring your clients User Role is set to Editor and not Administrator. If your client is the type that likes to see the Admin role on their account, but is too dangerous to be left with full rights, you can use another very handy free plugin called Members, written by WordPress Core developer Justin Tadlock. This is the best approach if you choose to move forward with using the Admin Menu Editor plugin instead, and here’s why.
Creating a Super User and Editing Client Privileges
- With the Members plugin active, go to Users > Roles.
- Click Add New to add a new role called “Super-Admin.”
- Check the boxes on all the capabilities and Save
- Now back out to the main Users list and edit yourself. Select the new Super Admin role from the Role drop-down and Save.
- Return to the Users > Roles page and edit Administrator. Here is where you can remove some capabilities from this role. Most of these are self-explanatory and organized within the admin areas they refer to. Common capabilities you want to Deny would be:
- General: edit_roles: deny ability to change your custom roles
- Appearance: switch_themes: deny ability to switch themes from your set active theme.
- Plugins: activate_plugins: deny ability to install new plugins that may potentially break the site.
The important step here is that your clients are assigned the Admin role while you have Super Admin with all capabilities. When using Admin Menu Editor for example, you can now turn off specific Admin Menu items or sub-links by setting them to the Super Admin role. This is useful for hiding menu links such as Marketplace, Tools, Appearance > Editor and other plugin items you might have such as Performance or SEO. You can also hide the entire Layers menu from your clients if you wish – they can still access the New Page and Edit Layout options from Pages or Appearance > Customize.
With both AG Custom Admin and Admin Menu Editor, you may also customize menu icons and labels and rearrange them with drag and drop if needed.
Adding Links To the Admin Menu
Easily add new links or entire Admin Menus from the Admin Menu tab of AG Custom Admin (bottom of page) or using the Add Menu button in the Admin Menu Editor options. This allows you to build helpful menus that link to most-used admin pages or offsite resources.
If you have a Child Theme, you can do this directly without a plugin. See Paul Lunds wonderful article on Custom Admin Menus
Creating Custom Onboarding Pages
If you have developed a Layers Child Theme for your client project or are thinking about using one instead of plugins, you can add a custom onboarding function that displays the first time your client logs in with your child theme active, or which they can revisit from a custom menu option. We wrote a full developer article on how to do this in a Layers Child Theme as part of the Child Theme Author Guide: How to Add Help Pages & Onboarding to Layers Themes or Extensions
Changing Text in Layers
In extreme cases where you need to remove Layers branding from things like Widgets, you must use a translation tool to remove each instance of “Layers.” See our guide on translating Layers for plugin suggestions and tips. | http://docs.layerswp.com/how-to-white-label-the-wordpress-admin/ | 2019-10-14T08:40:55 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.layerswp.com |
PDFs
The following Cumulus NetQ user documentation is available in PDF for offline viewing or printing:
NetQ 2.3.0
- Cumulus NetQ Deployment Guide PDF
- Cumulus NetQ Integration Guide PDF
- Cumulus NetQ UI User Guide PDF
- Cumulus NetQ CLI User Guide PDF
Many command line examples have very wide output which can compromise readability in the above documents. | https://docs.cumulusnetworks.com/cumulus-netq/More-Documents/PDFs/ | 2019-10-14T08:00:03 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.cumulusnetworks.com |
Whenever you work with video feeds you may eventually want to save your image processing result in a form of a new video file. For simple video outputs you can use the OpenCV built-in VideoWriter class, designed for this.
As a simple demonstration I’ll just extract one of the BGR color channels of an input video file into a new video. You can control the flow of the application from its console line arguments:
For example, a valid command line would look like:
video-write.exe video/Megamind.avi R Y
You may also find the source code and these video file in the samples/cpp/tutorial_code/highgui/video-write/ folder of the OpenCV source library or download it from here.
For start, you should have an idea of just how a video file looks. Every video file in itself is a container. The type of the container is expressed in the files extension (for example avi, mov or mkv). This contains multiple elements like: video feeds, audio feeds or other tracks (like for example subtitles). How these feeds are stored is determined by the codec used for each one of them. In case of the audio tracks commonly used codecs are mp3 or aac. For the video files the list is somehow longer and includes names such as XVID, DIVX, H264 or LAGS (Lagarith Lossless Codec). The full list of codecs you may use on a system depends on just what one you have installed.
As you can see things can get really complicated with videos. However, OpenCV is mainly a computer vision library, not a video stream, codec and write one. Therefore, the developers tried to keep this part as simple as possible. Due to this OpenCV for video containers supports only the avi extension, its first version. A direct limitation of this is that you cannot save a video file larger than 2 GB. Furthermore you can only create and expand a single video track inside the container. No audio or other track editing support here. Nevertheless, any video codec present on your system might work. If you encounter some of these limitations you will need to look into more specialized video writing libraries such as FFMpeg or codecs as HuffYUV, CorePNG and LCL. As an alternative, create the video track with OpenCV and expand it with sound tracks or convert it to other formats by using video manipulation programs such as VirtualDub or AviSynth. open function. Either way, the parameters are the same: 1. The name of the output that contains the container type in its extension. At the moment only avi is supported. We construct this from the input file, add to this the name of the channel to use, and finish it off with the container extension.
const string source = argv[1]; // the source file name string::size_type pAt = source.find_last_of('.'); // Find extension point const string NAME = source.substr(0, pAt) + argv[2][0] + ".avi"; // Form the new name with container
The codec to use for the video track. Now all the video codecs have a unique short name of maximum four characters. Hence, the XVID, DIVX or H264 names. This is called a four character code. You may also ask this from an input video by using its get function. Because the get function is a general function it always returns double values. A double value is stored on 64 bits. Four characters are four bytes, meaning 32 bits. These four characters are coded in the lower 32 bits of the double. A simple way to throw away the upper 32 bits would be to just convert this value to int:
VideoCapture inputVideo(source); // Open input int ex = static_cast<int>(inputVideo.get(CV_CAP_PROP_FOURCC)); // Get Codec Type- Int form
OpenCV internally works with this integer type and expect this as its second parameter. Now to convert from the integer form to string we may use two methods: a bitwise operator and a union method. The first one extracting from an int the characters looks like (an “and” operation, some shifting and adding a 0 at the end to close the string): advantage of this is that the conversion is done automatically after assigning, while for the bitwise operator you need to do the operations whenever you change the codec type. In case you know the codecs four character code beforehand, you can use the CV_FOURCC macro to build the integer:
If you pass for this argument minus one than a window will pop up at runtime that contains all the codec installed on your system and ask you to select the one to use:;
Extracting a color channel from an BGR image means to set to zero the BGR values of the other channels. You can either do this with image scanning operations or by using the split and merge operations. You first split the channels up into different images, set the other channels to zero images of the same size and type and finally merge them. | https://docs.opencv.org/2.4.13/doc/tutorials/highgui/video-write/video-write.html | 2019-10-14T08:41:31 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.opencv.org |
Introduction¶
The idea behind the ASpecD framework: feasible ways for reproducible and reliable data processing and analysis in spectroscopy.
Reproducible data analysis¶
Reproducibility of data acquisition and processing is at the heart of proper science. To achieve this goal, documenting all parameters (of an experiment) is one necessary prerequisite. In large companies working in the field of chemistry, pharmacy and medicine, a strict quality management has led to developing large-scale, commercial, and highly integrated systems taking a maximum effort to ensure a gap-less record and documentation of each processing step. In fundamental research in academia, though, those systems are rare at best, if not simply inexisting. In contrast, documenting the research directly correlates with the motivation of the individual scientist to cope with aspects of reproducibility as well as with her or his discipline to follow appropriate rules and workflows. In reality, experiments performed are usually insufficiently documented, let alone the following processing of the acquired data, even though missing information can often be inferred retrospectively thanks to experience and “informed guessing” of those directly involved in data acquisition.
Personal freedom vs. reproducible science¶
At the same time, individuality, independence and personal responsibility of scientists are emphasised in the academic context for good reason. Generally, scientists should not be limited by too many and unnecessary rules. Even more, developing an individual concept for documenting experimental parameters necessary for a sensible and adequate analysis (and reproducibility) of the acquired data is often seen as an integral part of scientific education in universities.
Particularly in spectroscopy, home-built setups and lab-written, individual acquisition software are widespread. Therefore, it is of outstanding importance in this very context to document all experimental parameters, especially given that the acquisition software will usually not take care of this by itself. As this means extra effort for the experimenter, this documentation should be as easy and comfortable as possible. Ideally, it will be written in a format easily processable by computers, thus being accessible by software for data processing and analysis.
Scientists are not software engineers¶
Another important aspect: Usually, programs for data processing and analysis get written by the individual scientist who is at the same time their main user. However, scientists normally are not familiar with nor know about key aspects of software engineering developed over the last decades. These aspects – ranging from tips for code formatting and naming to patterns of application architecture – are essential to create robust and reliable software of the necessary complexity that can be used by others as well and is sufficiently future-proof.
There are good reasons why developing and taking care of (more) complex software is usually in the realm of professional software engineers that can focus solely on this task. However, for most research groups in the academic context, it is not possible to hire professionals for this purpose. Besides that, the individual scientists should not use “black boxes” whose inner workings they neither know about nor understand. One strategy to overcome this problems may be to make scientists familiar with both, proven rules (“best practices”) for programming as well as general concepts for data handling that have been developed by scientists and have been proven useful in day-to-day work. This would free scientists to focus on their actual science and would be beneficial in the long run.
Long-term storage and availability¶
Usually, most data nowadays are acquired and processed digitally using computers. This raises questions about long-term storage and availability. Whereas an increasing number of funding agencies in science require applicants to provide concepts for these aspects, developing appropriate solutions and actually implementing them in a working manner is far from easy. Given that often not only raw data are of interest, but detailed analyses as well, this becomes even more difficult.
The goal: concepts for reproducibility¶
All the aspects mentioned above have led to the development of the ASpecD framework (and its predecessors). The goal of the ASpecD framework is not to solve once and for all the problems mentioned. Rather, the idea is to provide concepts and show ways to tackle many of the individual aspects. The ultimate goal remains to have a system for data processing and analysis ensuring complete reproducibility (and replicability wherever possible) starting from the raw data and ending with final representations (figures, tables).
The different components and underlying principles of the ASpecD framework are the result of more than a decade of practical day-to-day work in experimental sciences (mostly spectroscopy), combined with deliberating on the requirements for its appropriate documentation. It started out with a set of individual short routines for analysing time-resolved EPR data. Nowadays, the concepts developed are but much more general and can be most certainly applied beyond spectroscopy as well. | https://docs.aspecd.de/introduction.html | 2019-10-14T08:54:50 | CC-MAIN-2019-43 | 1570986649841.6 | [] | docs.aspecd.de |
All content with label as5+cache+configuration+gridfs+infinispan+listener+test.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, intro, archetype, pojo_cache, lock_striping, jbossas, nexus,
guide, schema, s3, amazon, grid, jcache, api, xsd, ehcache, maven, documentation, wcm, youtube, userguide, write_behind, 缓存, ec2, s, hibernate, aws, getting, interface, clustering, setup, eviction, fine_grained, concurrency, out_of_memory, jboss_cache, examples, import, index, events, hash_function, batch, buddy_replication, loader, pojo, cloud, mvcc, tutorial, notification, presentation, xml, read_committed, jbosscache3x, distribution, started, cachestore, data_grid, cacheloader, hibernate_search, resteasy, integration, cluster, development, adaptor, transaction, async, interactive,, store, whitepaper, jta, faq, 2lcache, spring, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - as5, - cache, - configuration, - gridfs, - infinispan, - listener, - test )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+cache+configuration+gridfs+infinispan+listener+test | 2019-05-19T17:13:03 | CC-MAIN-2019-22 | 1558232255071.27 | [] | docs.jboss.org |
All content with label client+distribution+eviction+gridfs+infinispan+jta+query.
Related Labels:
expiration, publish, datagrid, interceptor, server, rehash, replication, recovery, transactionmanager,, concurrency, out_of_memory, jboss_cache, import, index, configuration, hash_function, batch, buddy_replication, loader, colocation, xa, write_through, cloud, remoting, mvcc, tutorial, notification, murmurhash2, xml, read_committed, jbosscache3x, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, async, transaction, interactive, xaresource, build, hinting, searchable, demo, installation, command-line, non-blocking, migration, rebalance, filesystem, jpa, tx, gui_demo, eventing, shell, client_server, testng, murmurhash, infinispan_user_guide, standalone, snapshot, webdav, hotrod, docs, batching, consistent_hash, store, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, hot_rod
more »
( - client, - distribution, - eviction, - gridfs, - infinispan, - jta, - query )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/client+distribution+eviction+gridfs+infinispan+jta+query | 2019-05-19T17:09:54 | CC-MAIN-2019-22 | 1558232255071.27 | [] | docs.jboss.org |
Report Template Layout
This section explains the general report template layout.
The SpeechMiner report template has a maximum of three parameter rows. Each row deals with different functions in the report.
In addition, most report templates have a Data Set filter section on the left side of the screen.
This section explains the report template layout using the Agent Comparison template as an example. The Agent Comparison report represents the most common template layout.
First Row
The first row contains:
- Controls for working with report results (see Create a New Report)
- Template field, in which you can select the type of report
- Report Name field, used to name the report.
Second Row
The second row contains fields for defining the report title and an optional report description. These items are displayed at the top of the report results. By default, the name of the report template is used as the report title and you can modify it as necessary. Some templates also have a Version parameter in this row. The Version parameter can be used to select the size or format of the report output.
Third Row
The third row contains the Items on Report parameters. That is, the fields that determine which items will appear graphically on the report. In some reports, one or more of the parameters may also have statistical functions.
Data Seat Filters
The left side of numerous report templates contains Data Set filters. These filters specify which data will be included in the report's analyses. For additional information about the filters and how to configure them, see Report Parameters. The current filter settings are displayed on the right side of the template below the first row.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PSAAS/latest/Recording/reporttemplateslayout | 2019-05-19T16:17:55 | CC-MAIN-2019-22 | 1558232255071.27 | [] | docs.genesys.com |
Configuring Advanced LDAP Authentication
The default configuration computes the bind Distinguished Name . | https://docs.hortonworks.com/HDPDocuments/HDP3/HDP-3.1.0/configuring-proxy-knox/content/configuring_advanced_ldap_authentication.html | 2019-05-19T17:32:44 | CC-MAIN-2019-22 | 1558232255071.27 | [] | docs.hortonworks.com |
Build navigation menus
This article is part of the Storefront UI Development Guide.
- Previous Task: Build a product detail page
- Next Task: Add a way to add an item to a cart
If you are building your storefront in the recommended order, at this point you have a product listing page and a product detail page. It's probably about time to add some navigation UI, a home page, and perhaps some additional pages.
You're free to add any additional pages you want using whatever method your router prescribes. A home page may be a specific view of a product list, multiple product lists, static content, or whatever your storefront design spec requires.
After you have created several types of pages, you're ready to add links to them in a navigation component. For a very simple storefront with few navigation links, you may want to design this as a static component. This may be easier in the short term, but remember that it will require a code change and redeployment every time navigation changes are needed.
Most storefronts require more complex and dynamic navigation menus. For this purpose, the Reaction operator UI allows those with proper permissions to build navigation menus and then publish them to one or more storefronts.
On the storefront UI side, you only need to query for the navigation menu that you want when initially loading the UI. Then use that data to dynamically build whatever menu design you need.
You can get the default navigation tree for a shop when you query the shop, which you'll likely want to do on initial UI load anyway, in order to get other shop details for display.
fragment NavigationItemFields on NavigationItemData { contentForLanguage classNames url isUrlRelative shouldOpenInNewWindow } query shop($id: ID!, $language: String! = "en") { shop(id: $id) { defaultNavigationTree(language: $language) { items { navigationItem { data { ...NavigationItemFields } } items { navigationItem { data { ...NavigationItemFields } } items { navigationItem { data { ...NavigationItemFields } } } } } } description name } }
As you can see, the response is a tree with up to three levels. Your component should support rendering all three levels of navigation unless you have made a business decision that you will only have a certain number of levels.
Each navigation item has the following information:
contentForLanguage: Display this as the content the shopper sees, e.g., the name of the page the navigation item links to. It will be in whatever language you requested with the
languagevariable to your query.
classNames: Optionally set the
classNameproperty of the navigation item element to this. Your organization may choose not to implement this. It is available as a convenience if you need it.
url: Use this as the navigation item link URL
isUrlRelative: This will be
trueor
false. Use this to build your navigation item component's click handling logic. Relative URLs may need to be handled internally by your router while absolute URLs could be handled in the normal browser way.
shouldOpenInNewWindow: This will be
trueor
false. Use this to build your navigation item component's click handling logic. If this is
true, typically you would add
target="_blank"attribute or do the equivalent in code.
Next Task: Add a way to add an item to a cart | https://docs.reactioncommerce.com/docs/storefront-nav-menus | 2019-05-19T16:39:30 | CC-MAIN-2019-22 | 1558232255071.27 | [] | docs.reactioncommerce.com |
Creating a Subnet Group:
Copy to clipboard
aws elasticache create-cache-subnet-group \ --cache-subnet-group-name
mysubnetgroup\ --cache-subnet-group-description
"Testing"\ --subnet-ids
subnet-53df9c3a
For Windows:
Copy to clipboard
aws elasticache create-cache-subnet-group ^ --cache-subnet-group-name
mysubnetgroup^ --cache-subnet-group-description
"Testing"^ --subnet-ids
subnet-53df9c3a
This command should produce output similar to the following:
Copy to clipboard
SUBNETGROUP mysubnetgroup Testing vpc-5a2e4c35 SUBNET subnet-53df9c3a us-west-2b
Copy to clipboard ?Action=CreateCacheSubnetGroup &CacheSubnetGroupDescription=Testing &CacheSubnetGroupName=mysubnetgroup &SignatureMethod=HmacSHA256 &SignatureVersion=4 &SubnetIds.member.1=subnet-53df9c3a | http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/SubnetGroups.Creating.html | 2017-03-23T00:21:56 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.aws.amazon.com |
⚡️ Webhooks¶
You can push data to Tingbot using webhooks.
Here is an example that displays SMS messages using If This Then That. See our tutorial video to see how to set up IFTTT with webhooks.
import tingbot from tingbot import * screen.fill(color='black') screen.text('Waiting...') @webhook('demo_sms') def on_webhook(data): screen.fill(color='black') screen.text(data, color='green') tingbot.run()
@
webhook(webhook_name…)¶
This decorator calls the marked function when a HTTP POST request is made to the URL. To avoid choosing the same name as somebody else, you can add a random string of characters to the end.
The POST data of the URL is available to the marked function as the
dataparameter. The data is limited to 1kb, and the last value that was POSTed is remembered by the server, so you can feed in relatively slow data sources.
You can use webhooks to push data to Tingbot, or to notify Tingbot of an update that happened elsewhere on the internet. | http://tingbot-python.readthedocs.io/en/latest/webhooks.html | 2017-03-23T00:10:40 | CC-MAIN-2017-13 | 1490218186530.52 | [] | tingbot-python.readthedocs.io |
public class HeatmapSurface extends Object
The Heatmap surface is computed as a grid (raster) of values representing the surface. For stability, the compute grid is expanded by the kernel radius on all four sides. This avoids "edge effects" from distorting the surface within the requested envelope.
The values in the output surface are normalized to lie in the range [0, 1].
clone, equals, finalize, getClass, hashCode, notify, notifyAll, toString, wait, wait, wait
public HeatmapSurface(int kernelRadius, Envelope srcEnv, int xSize, int ySize)
kernelRadius- the kernel radius, in grid units
srcEnv- the envelope defining the data space
xSize- the width of the output grid
ySize- the height of the output grid
public void addPoint(double x, double y, double value)
x- the X ordinate of the point
y- the Y ordinate of the point
value- the data value of the point
public float[][] computeSurface() | http://docs.geotools.org/stable/javadocs/org/geotools/process/vector/HeatmapSurface.html | 2017-03-23T00:18:08 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.geotools.org |
Selecting Regions and Availability Zones
AWS cloud computing resources are housed in highly available data center facilities. To provide additional scalability and reliability, these data center facilities are located in (for example, creating clusters) runs only in your current default region.
To create or work with a cluster in a specific region, use the corresponding regional service endpoint. For service endpoints, see Supported Regions & Endpoints.
Regions and Availability Zones
Locating Your Redis Read Replicas and Memcached Nodes
Amazon ElastiCache supports locating all of a cluster's members in a single or multiple Availability Zones (AZs). Further, if you elect to locate a cluster's members in multiple AZs (recommended), ElastiCache enables you to either select the AZ for each member, or allow ElastiCache to select them for you.
By locating the clusters or nodes in different Availability Zones, you eliminate the chance that a failure, such as a power outage, in one Availability Zone will cause your entire system to fail. Testing has demonstrated that there is no significant latency difference between locating all nodes in one Availability Zone or spreading them across multiple Availability Zones.
To specify an Availability Zone for your Memcached nodes, create a Memcached cluster as you normally do. On the Cluster Details page of the Launch Cluster wizard, use the Preferred Zone list to specify an Availability Zone for this node.
To specify an Availability Zone for your Redis read replica, you first create a replication group and then add from one to five read replicas to the replication group. You can specify a different Availability Zone for each read replica. For more information on creating a Redis read replica in an Availability Zone different from the primary Redis cache cluster, see Creating a Redis Cluster with Replicas and Adding a Read Replica to a Redis Cluster.
Supported Regions & Endpoints
Amazon ElastiCache is available in multiple regions so that you can launch ElastiCache clusters in locations that meet your requirements, such as launching in the region closest to your customers or to meet certain legal requirements.
By default, the AWS SDKs, AWS CLI, ElastiCache API, and ElastiCache console reference the US-West (Oregon) region. As ElastiCache expands availability to new regions, new endpoints for these regions are also available to use in your HTTP requests, the AWS SDKs, AWS CLI, and the console.
Each region is designed to be completely isolated from the other regions. Within each region are multiple availability zones (AZ). By launching your nodes in different AZs you are able to achieve the greatest possible fault tolerance. For more information on regions and availability zones, go to Selecting Regions and Availability Zones at the top of this topic.
Regions where ElastiCache is supported
For a table of AWS products and services by region, see Products and Services by Region. | http://docs.aws.amazon.com/AmazonElastiCache/latest/UserGuide/RegionsAndAZs.html | 2017-03-23T00:21:12 | CC-MAIN-2017-13 | 1490218186530.52 | [array(['images/ElastiCache-RegionsAndAZs.png',
'Image: Regions and Availability Zones'], dtype=object)] | docs.aws.amazon.com |
Install and Configure the Amazon Redshift ODBC Driver on Microsoft Windows Operating Systems
System Requirements
You install the Amazon Redshift ODBC driver on client computers accessing an Amazon Redshift data warehouse. Each computer where you install the driver must meet the following minimum system requirements:
Microsoft Windows Vista operating system or later
55 MB of available disk space
Administrator privileges on the client computer
An Amazon Redshift master user or user account to connect to the database
Installing the Amazon Redshift Driver on Windows Operating Systems
Use the steps in this section to download the Amazon Redshift ODBC drivers for Microsoft Windows operating systems. You should only use a driver other than these if you are running a third-party application that is certified for use with Amazon Redshift and that requires a specific driver for that application.
To install the ODBC driver
Download one of the following, depending on the system architecture of your SQL client tool or application:
32-bit:
The name for this driver is Amazon Redshift (x86).
64-bit:
The name for this driver is Amazon Redshift (x64).
Note
Download the MSI package that corresponds to the system architecture of your SQL client tool or application. For example, if your SQL client tool is 64-bit, install the 64-bit driver.
Then download and review the Amazon Redshift ODBC Driver License Agreement. If you need to distribute these drivers to your customers or other third parties, please email [email protected] to arrange an appropriate license.
Double-click the .msi file, and then follow the steps in the wizard to install the driver.
Previous ODBC Driver Versions
Download a previous version of the Amazon Redshift ODBC driver only if your tool requires a specific version of the driver. For information about the functionality supported in previous versions of the drivers, go to the Amazon Redshift ODBC Driver Release Notes.
The following are the previous 32-bit drivers:
The following are the previous 64-bit drivers:
Creating a System DSN Entry for an ODBC Connection on Microsoft Windows
After you download and install the ODBC driver, you need to add a data source name (DSN) entry to the client machine or Amazon EC2 instance. SQL client tools use this data source to connect to the Amazon Redshift database.
To create a system DSN entry
In the Start menu, in your list of programs, locate the driver folder or folders.
Note
If you installed the 32-bit driver, the folder is named Amazon Redshift ODBC Driver (32-bit). If you installed the 64-bit driver, the folder is named Amazon Redshift ODBC Driver (64-bit). If you installed both drivers, you'll have a folder for each driver.
Click ODBC Administrator, and then type your administrator credentials if you are prompted to do so.
Select the System DSN tab if you want to configure the driver for all users on the computer, or the User DSN tab if you want to configure the driver for your user account only.
Click Add. The Create New Data Source window opens.
Select the Amazon Redshift ODBC driver, and then click Finish. The Amazon Redshift ODBC Driver DSN Setup window opens.
Under Connection Settings, enter the following information:
Data Source Name
Type a name for the data source. You can use any name that you want to identify the data source later when you create the connection to the cluster. For example, if you followed the Amazon Redshift Getting Started, you might type
exampleclusterdsnto make it easy to remember the cluster that you will associate with this DSN.
Server
Specify the endpoint for your Amazon Redshift cluster. You can find this information in the Amazon Redshift console on the cluster’s details page. For more information, see Configuring Connections in Amazon Redshift.
Port
Type the port number that the database uses. By default, Amazon Redshift uses 5439, but you should use the port that the cluster was configured to use when it was launched.
Database
Type the name of the Amazon Redshift database. If you launched your cluster without specifying a database name, type
dev; otherwise, use the name that you chose during the launch process. If you followed the Amazon Redshift Getting Started, type
dev.
Under Credentials, enter the following information:
User
Type the user name for the database user account that you want to use to access the database. If you followed the Amazon Redshift Getting Started, type
masteruser.
Type the password that corresponds to the database user account.
Under SSL Settings, specify a value for the following:
SSL Authentication
Select a mode for handling Secure Sockets Layer (SSL). In a test environment, you might use
prefer, but for production environments and when secure data exchange is required, use
verify-caor
verify-full. For more information about using SSL, see Connect Using SSL.
Under Additional Options, select one of the following options to specify how to return query results to your SQL client tool or application:
Single Row Mode. Select this option if you want query results to be returned one row at a time to the SQL client tool or application. Use this option if you plan to query for large result sets and don't want the entire result in memory. Disabling this option improves performance, but it can increase the number of out-of-memory errors.
Use Declare/Fetch. Select this option if you want query results to be returned to the SQL client tool or application in a specified number of rows at a time. Specify the number of rows in Cache Size.
Use Multiple Statements. Select this option to return results based on multiple SQL statements in a query.
Retrieve Entire Result Into Memory. Select this option if you want query results to be returned all at once to the SQL client tool or application. The default is enabled.
In Logging Options, specify values for the following:
Log Level. Select an option to specify whether to enable logging and the level of detail that you want captured in the logs.
Important
You should only enable logging when you need to capture information about an issue. Logging decreases performance, and it can consume a large amount of disk space.
Log Path. Specify the full path to the folder where you want to save log files.
In Data Type Options, specify values for the following:
Use Unicode. Select this option to enable support for Unicode characters. The default is enabled.
Show Boolean Column As String. Select this option if you want Boolean values to be displayed as string values instead of bit values. If you enable this, "1" and "0" display instead of 1 and 0. The default is enabled.
Text as LongVarChar. Select this option to enable showing text as LongVarChar. The default is enabled.
Max Varchar. Specify the maximum value for the Varchar data type. A Varchar field with a value larger than the maximum specified will be promoted to LongVarchar. The default value is 255.
Max LongVarChar. Specify the maximum value for the LongVarChar data type. A LongVarChar field value that is larger than the maximum specified will be truncated. The default value is 8190.
Max Bytea. Specify the maximum value for the Bytea data type. A Bytea field value that is larger than the maximum specified will be truncated. The default value is 255.
Note
The Bytea data type is only used by Amazon Redshift system tables and views, and otherwise is not supported.
Then click OK.
Click Test. If the client computer can connect to the Amazon Redshift database, you will see the following message: Connection successful.
If the client computer fails to connect to the database, you can troubleshoot possible issues. For more information, see Troubleshooting Connection Issues in Amazon Redshift. | http://docs.aws.amazon.com/redshift/latest/mgmt/install-odbc-driver-windows.html | 2017-03-23T00:26:06 | CC-MAIN-2017-13 | 1490218186530.52 | [] | docs.aws.amazon.com |
First steps with bayesloop¶
bayesloop models feature a two-level hierarchical structure: the low-level, observation model filters out measurement noise and provides the parameters, that one is interested in (volatility of stock prices, diffusion coefficient of particles, directional persistence of migrating cancer cells, rate of randomly occurring events, ...). The observation model is, in most cases, given by a simple and well-known stochastic process: price changes are Gaussian-distributed, turning angles of moving cells follow a von-Mises distribution and the number of rare events within a given interval of time is Poisson-distributed. The aim of the observation model is to describe the measured data on a short time scale, while the parameters may change on longer time scales. The high-level, transition model describes how the parameters of the observation model change over time, i.e. whether there are abrupt parameter jumps or gradual variations. The transition model may itself depend on so-called hyper-parameters, for example the likelihood of parameter jumps, the magnitude of gradual parameter variations or the slope of a deterministic linear trend. The following tutorials show how to use the bayesloop module to infer both time-varying parameter values of the observation model as well as the hyper-parameter values of the transition model and compare different hypotheses about the parameter dynamics by approximating the model evidence, i.e. the probability of the measured data, given the observation model and transition model.
The first section of the tutorial introduces the main class of the
module,
Study, which enables fits of time-varying parameter models
with fixed hyper-parameter values and the optimization of such
hyper-parameters based on the model evidence. We provide a detailed
description of how to import data, set the observation model and
transition model, and perform the model fit. Finally, a plotting
function to display the results is discussed briefly. This tutorial
therefore provides the basis for later tutorials that discuss the
extended classes
HyperStudy,
ChangepointStudy and
OnlineStudy.
Study class¶
To start a new data study/analysis, create a new instance of the
Study class:
In [1]:
%matplotlib inline import matplotlib.pyplot as plt # plotting import seaborn as sns # nicer plots sns.set_style('whitegrid') # plot styling import bayesloop as bl S = bl.Study()
+ Created new study.
This object is central to an analysis conducted with bayesloop. It stores the data and further provides the methods to perform probabilistic inference on the models defined within the class, as described below.
Data import¶
In this first study, we use a simple, yet instructive example of heterogeneous time series, the annual number of coal mining accidents in the UK from 1851 to 1962. The data is imported as a NumPy array, together with corresponding timestamps. Note that setting timestamps is optional (if none are provided, timestamps are set to an integer sequence: 0, 1, 2,...).
In [2]:
import numpy as np data = np.array(, 3, 3, 0, 0, 0, 1, 4, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0]) S.loadData(data, timestamps=np.arange(1852, 1962))
+ Successfully imported array.
Note that this particular data set is also hard-coded into the
Study
class, for convenient testing:
In [3]:
S.loadExampleData()
+ Successfully imported example data.
In case you have multiple observations for each time step, you may also
provide the data in the form
np.array([[x1,y1,z1], [x2,y2,z2], ..., [xn,yn,zn]]). Missing data
points should be included as
np.nan.
Observation model¶
The first step to create a probabilistic model to explain the data is to define the observation model, or likelihood. The observation model states the probability (density) of a data point at time \(t\), given the parameter values at time \(t\) and possibly past data points. It therefore resembles the low-level model, in contrast to the transition model which describes how the parameters of the observation model change over time.
As coal mining disasters fortunately are rare events, we may model the number of accidents per year by a Poisson distribution. In bayesloop, this is done as follows:
In [4]:
L = bl.observationModels.Poisson('accident rate', bl.oint(0, 6, 1000)) S.setObservationModel(L)
+ Observation model: Poisson. Parameter(s): ['accident rate']
We first define the observation model and provide two arguments: A name
for the only parameter of the model, the
'accident rate'. We further
have to provide discrete values for this parameter, as bayesloop
computes all probability distributions on grids. As the Poisson
distribution expects its parameter to be greater than zero, we choose an
open interval between 0 and 6 with 1000 equally spaced values in
between, by using the function
bl.oint(). For closed intervals, one
can also use
bl.cint(), which acts exactly like the function
linspace from NumPy. To avoid singularities in the probability
values of the observation model, it is however recommended to use
bl.oint() in most cases. Finally, we assign the defined observation
model to our study instance with the method
setObservationModel().
As the parameter boundaries depend on the data at hand, bayesloop will estimate appropriate parameter values, if one does not provide them:
In [5]:
L = bl.observationModels.Poisson('accident rate') S.setObservationModel(L)
+ Estimated parameter interval for "accident rate": [0.00749250749251, 7.49250749251] (1000 values). + Observation model: Poisson. Parameter(s): ['accident rate']
Note that you can also use the following short form to define
observation models:
L = bl.om.Poisson(). All currently implemented
observation models can be looked up in the API Docs or
directly in
observationModels.py. bayesloop further supports all
probability distributions that are included in the
scipy.stats
as well as the
sympy.stats module.
See this tutorial for instructions on
how to build custom observation models from arbitrary distributions.
In this example, the observation model only features a single parameter.
If we wanted to model the annual number of accidents with a Gaussian
distribution instead, we have to supply two parameter names (
mean
and
std) and corresponding values:
L = bl.om.Gaussian('mean', bl.cint(0, 6, 200), 'std', bl.oint(0, 2, 200)) S.setObservationModel(L)
Again, if we are not sure about parameter boundaries, we may assign
None to one or all parameters, and bayesloop will estimate them:
L = bl.om.Gaussian('mean', None, 'std', bl.oint(0, 2, 200)) S.setObservationModel(L)
The order has to remain
Name, Value, Name, Value, ..., which is why
we cannot simply omit the values and have to write
None instead.
Transition model¶
As the dynamics of many real-world systems are the result of a multitude of underlying processes that act on different spatial and time scales, common statistical models with static parameters often miss important aspects of the systems’ dynamics (see e.g. this article). bayesloop therefore calls for a second model, the transition model, which describes the temporal changes of the model parameters.
In this example, we assume that the accident rate itself may change
gradually over time and choose a Gaussian random walk with the standard
deviation \(\sigma=0.2\) as transition model. As for the observation
model, we supply a unique name for hyper-parameter \(\sigma\) (named
sigma) that describes the standard deviation of the parameter
fluctuations and therefore the magnitude of changes. Again, we have to
assign values for
sigma, but only choose a single fixed value of
0.2, instead of a whole set of values. This single value can be
optimized, by maximizing the model evidence, see
here. To analyze and compare a set
of different values, one may use an instance of a
HyperStudy that is
described in detail here. in this first example,
we simply take the value of 0.2 as given. As the observation model may
contain several parameters, we further have specify the parameter
accident rate as the target of this transition model.
In [6]:
T = bl.transitionModels.GaussianRandomWalk('sigma', 0.2, target='accident rate') S.setTransitionModel(T)
+ Transition model: Gaussian random walk. Hyper-Parameter(s): ['sigma']
Note that you can also use the following short form to define transition
models:
M = bl.tm.GaussianRandomWalk(). All currently implemented
transition models can be looked up in the API Docs or
directly in
transitionModels.py.
Model fit¶
At this point, the hierarchical time series model for the coal mining
data set is properly defined and we may continue to perform the model
fit. bayesloop employs a forward-backward algorithm that is based on
Hidden Markov models. It
basically breaks down the high-dimensional inference problem of all time
steps into many low-dimensional ones for each individual time step. The
inference algorithm is implemented by the
fit method:
In [7]:
S.fit()
+ Started new fit: + Formatted data. + Set prior (function): jeffreys. Values have been re-normalized. + Finished forward pass. + Log10-evidence: -74.63801 + Finished backward pass. + Computed mean parameter values.
By default,
fit computes the so-called smoothing distribution of
the model parameters for each time step. This distribution states the
probability (density) of the parameter value at a time step \(t\),
given all past and future data points. All distributions have the same
shape as the parameter grid, and are stored in
S.posteriorSequence
for further analysis. Additionally, the mean values of each distribution
are stored in
S.posteriorMeanValues, as point estimates. Finally,
the (natural) logarithmic value of the model evidence, the probability
of the data given the chosen model, is stored in
S.logEvidence (more
details on evidence values follow).
To simulate an on-line analysis, where at each step in time \(t\),
only past data points are available, one may provide the
keyword-argument
forwardOnly=True. In this case, only the
forward-part of the algorithm in run. The resulting parameter
distributions are called filtering distributions.
Plotting¶
To display the temporal evolution of the model parameters, the
Study
class provides the plot method
plotParameterEvolution that displays
the mean values together with the marginal distributions for one
parameter of the model. The parameter to be plotted can be chosen by
providing its name.
Here, we plot the original data (in red) together with the inferred
disaster rate (mean value in black). The marginal parameter distribution
is displayed as a blue overlay, by default with a gamma correction of
\(\gamma=0.5\) to enhance relative differences in the width of the
distribution (this behavior can be changed by the keyword argument
gamma):
In [8]:
plt.figure(figsize=(8, 4)) # plot of raw data plt.bar(S.rawTimestamps, S.rawData, align='center', facecolor='r', alpha=.5) # parameter plot S.plotParameterEvolution('accident rate') plt.xlim([1851, 1961]) plt.xlabel('year');
From this first analysis, we may conclude that before 1880, an average
of \(\approx 3\) accidents per year were recorded. This changes
significantly between 1880 and 1900, when the accident-rate drops to
\(\approx 1\) per year. We can also directly inspect the
distribution of the accident rate at specific points in time, using the
getParameterDistribution() method:
In [9]:
plt.figure(figsize=(8, 4)) r1, p1 = S.getParameterDistribution(1880, 'accident rate', plot=True, facecolor='r', alpha=0.5, label='1880') r2, p2 = S.getParameterDistribution(1900, 'accident rate', plot=True, facecolor='b', alpha=0.5, label='1900') plt.legend() plt.xlim([0, 5]);
Without the
plot=True argument, this method only returns the
parameter values (
r1,
r2, as specified when setting the
observation model) as well as the corresponding probability values
p1 and
p2. Note that the returned probability values are always
normalized to 1, so that we may easily evaluate the probability of
certain conditions, like the probability of an accident rate < 1 in the
year 1900:
In [10]:
np.sum(p2[r2 < 1.])
Out[10]:
0.4219825605704649
Here, we first created a mask for all parameter values smaller than one
(
r2 < 1.), then extracted the corresponding probability values
(
p2[r2 < 1.]) and finally evaluated the sum to compute p(accident
rate < 1) = 42%.
Saving studies¶
As the
Study class instance (above denoted by
S) of a conducted
analysis contains all information about the inferred parameter values,
it may be convenient to store the entire instance
S to file. This
way, it can be loaded again later, for example to refine the study,
create different plots or perform further analyses based on the obtained
results. bayesloop provides two functions,
bl.save() and
bl.load() to store and retrieve existing studies:
bl.save('file.bl', S) ... S = bl.load('file.bl) | http://docs.bayesloop.com/en/stable/tutorials/firststeps.html | 2017-03-23T00:20:56 | CC-MAIN-2017-13 | 1490218186530.52 | [array(['../_images/tutorials_firststeps_15_0.png',
'../_images/tutorials_firststeps_15_0.png'], dtype=object)
array(['../_images/tutorials_firststeps_17_0.png',
'../_images/tutorials_firststeps_17_0.png'], dtype=object)] | docs.bayesloop.com |
Managing static files¶
Django developers mostly concern themselves with the dynamic parts of web applications – the views and templates that render anew for each request. But web applications have other parts: the static files (images, CSS, Javascript, etc.) that are needed to render a complete web page.
For small projects, this isn’t a big deal, because you can just.
Note.
Using django.contrib.staticfiles¶
Basic usage¶" />
See Referring to static files in templates for more details, including an alternate method using a template tag.
Deploying static files in a nutshell¶..
Referring to static files in templates¶
At some point, you'll probably need to link to static files in your templates. You could, of course, simply hardcode the path to you assets in the templates:
<img src="" />!
staticfiles includes two built-in ways of getting at this setting in your templates: a context processor and a template tag.
With a context processor¶.
With a template tag¶
The second option is the get_static_prefix template tag. You can use this if you're not using RequestContext, or if you need more control over exactly where and how STATIC_URL is injected into the template. Here's an example:
{% load static %} <img src="{% get_static_prefix %}images/hi.jpg" />
There's also a second form you can use to avoid extra processing if you need the value multiple times:
{% load static %} {% get_static_prefix as STATIC_PREFIX %} <img src="{{ STATIC_PREFIX }}images/hi.jpg" /> <img src="{{ STATIC_PREFIX }}images/hi2.jpg" />
Serving static files in development.
Serving other directories¶.).
Serving static files in production¶.
Serving the app and your static files from the same server¶
If you want to serve your static files from the same server that's already serving your site, the basic outline gets modified to look something like:
- Push your code up to the deployment server.
- On the server, run collectstatic to copy all the static files into STATIC_ROOT.
- Point your web server at STATIC_ROOT. For example, here's how to do this under Apache and mod_wsgi.')
Serving static files from a dedicated server¶ locally.
- Push your local STATIC_ROOT up to the static file server into the directory that's being served. rsync is a good choice for this step since it only needs to transfer the bits of static files that have changed.ysnc_project( remote_dir = env.remote_static_root, local_dir = env.local_static_root, delete = True )).
Upgrading from django-staticfiles¶
django.contrib.staticfiles began its life as django-staticfiles. If you're upgrading from django-staticfiles older than 1.0 (e.g. 0.3.4) to django.contrib.staticfiles, you'll need to make a few changes:
- Application files should now live in a static directory in each app (django-staticfiles used the name media, which was slightly confusing).
- The management commands build_static and resolve_static are now called collectstatic and findstatic.
- The settings STATICFILES_PREPEND_LABEL_APPS, STATICFILES_MEDIA_DIRNAMES and STATICFILES_EXCLUDED_APPS were removed.
- The setting STATICFILES_RESOLVERS was removed, and replaced by the new STATICFILES_FINDERS.
- The default for STATICFILES_STORAGE was renamed from staticfiles.storage.StaticFileStorage to staticfiles.storage.StaticFilesStorage
- If using runserver for local development (and the DEBUG setting is True), you no longer need to add anything to your URLconf for serving static files in development.
Learn more¶
This document has covered the basics and some common usage patterns. For complete details on all the settings, commands, template tags, and other pieces include in django.contrib.staticfiles, see the staticfiles. | https://docs.djangoproject.com/en/1.3/howto/static-files/ | 2014-04-16T10:12:15 | CC-MAIN-2014-15 | 1397609523265.25 | [] | docs.djangoproject.com |
Try it now and let us know what you think. Switch to the new look >>
You can return to the original look by selecting English in the language selector above.
Creating and Managing a Private Marketplace
To create and manage a private marketplace you must have the IAM permissions found
in the
AWSPrivateMarketplaceAdminFullAccess IAM policy. For information on applying
this policy to IAM users, groups, and roles, see Creating a Private Marketplace IT Administrator.
Creating Your Private Marketplace
To create your private marketplace, navigate to Private Marketplace and choose Create a Private Marketplace.
Adding Products to Your Private Marketplace
To add products to your organization’s private marketplace:
From the Private Marketplace administrator page, on the Products tab, choose All AWS Marketplace products. You can search by product name or vendor name to find a product to add to your private marketplace.
Select the check box next to each product that you want to add to your private marketplace and then choose Add to Private Marketplace.
To verify that a product is in your private marketplace, from the Private Marketplace search page, search for the product that you added and choose it. You are redirected to the product detail page for that product. If the product isn't in your private marketplace, a red banner appears at the top of the page. To add the product to your private marketplace, choose Add in the red banner.
You can return to the Private Marketplace administrator page at any time to add or remove other products.
Customizing Your Private Marketplace
Once you have added products to your private marketplace, from the Private Marketplace administrator page, you can choose the Profile tab to configure your organization’s private marketplace profile.
You can add a logo, create a custom welcome message, and customize to use your organization’s color scheme. Instructions to customize your private marketplace are available on the profile page.
Configuring Your Private Marketplace
After you are satisfied with your product list and your look and feel, enable your private marketplace. To enable or disable your private marketplace, from the AWS Private Marketplace administrators page, under Private Marketplace status, you can toggle the private marketplace status between live and Not live. When your private marketplace is live, members of your organization can subscribe only to the products that you added to your private marketplace. When your private marketplace is disabled, you retain the list of products. However, disabling removes the procurement restriction control from users in your AWS Organizations organization. | https://docs.aws.amazon.com/marketplace/latest/buyerguide/private-catalog-admisitration.html | 2019-09-15T11:02:54 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
Case page.
The values in those fields, however, do not necessarily match the service codes and
categories returned by the
DescribeServices request. Always use the service
codes and categories obtained programmatically. This practice ensures that you always
have the most recent set of service and category codes.
Namespace: Amazon.AWSSupport
Assembly: AWSSDK.dll
Version: (assembly version)
.NET Framework:
Supported in: 4.5, 4.0, 3.5 | https://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/MAWSSupportIAWSSupportDescribeServicesNET35.html | 2019-09-15T10:19:25 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.aws.amazon.com |
Synchronous and asynchronous query execution
Queries can be executed against the database synchronously or asynchronously.
Queries can be executed against the database synchronously or asynchronously. The correct execution paradigm to use depends on the application.
Synchronous execution
Synchronous query execution is blocking, meaning nothing else in the application proceeds until the result from the query is returned. The application blocks for the entire round trip, from when the query is first sent to the database until the results are retrieved and returned to the application.
The advantage of synchronous queries is that it is simple to tell when a query completes, so the execution logic of the application is easy to follow. However, synchronous queries cause poor application throughput.
Asynchronous execution
Asynchronous query execution is more complex. An asynchronous query execute call does not block for results. Instead, a future is immediately returned from the asynchronous execute call. A future is a placeholder object that stands in for the result until the result is returned from the database. Depending on the driver and feature set of the language, this future can facilitate asynchronous processing of results. This typically allows high throughput.
| https://docs.datastax.com/en/devapp/doc/devapp/driversSyncAsync.html | 2019-09-15T10:12:23 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/driversAsyncExecution.png',
'Asynchronous execution example'], dtype=object)] | docs.datastax.com |
State
Managed
State Collection. IState Manager. Load View State(Object) Managed
State Collection. IState Manager. Load View State(Object) Managed
Method
Collection. IState Manager. Load View State(Object)
Definition
Restores the previously saved view state of the StateManagedCollection collection and the IStateManager items it contains.
virtual void System.Web.UI.IStateManager.LoadViewState(System::Object ^ savedState) = System::Web::UI::IStateManager::LoadViewState;
void IStateManager.LoadViewState (object savedState);
Sub LoadViewState (savedState As Object) Implements IStateManager.LoadViewState
Parameters
An object that represents the collection and collection elements' state to restore.
Implements
Remarks
This method restores view-state information that was saved by the IStateManager.SaveViewState method. NIB: ASP.NET Web Server Controls.
This method is used primarily by control developers. You can override this method to specify how a custom server control restores its view state. For more information, see ASP.NET State Management Overview. | https://docs.microsoft.com/en-us/dotnet/api/system.web.ui.statemanagedcollection.system-web-ui-istatemanager-loadviewstate?view=netframework-4.7.2 | 2019-09-15T10:09:11 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Configure an Application Pool to Recycle after Reaching Maximum Used Memory (IIS 7)
Applies To: Windows 7, Windows Server 2008, Windows Server 2008 R2, Windows Vista.
Prerequisites
For information about the levels at which you can perform this procedure, and the modules, handlers, and permissions that are required to perform this procedure, see Application Pools Feature Requirements (IIS 7).
Exceptions to Feature Requirements
- None
To configure an application pool to recycle after reaching maximum used memory,.
Command Line
To configure an application pool to recycle after it uses a specified amount of private memory, use the following syntax:
**appcmd set config /section:applicationPools /[name='string'].recycling.periodicRestart.privateMemory:**uint
The variable string is the name of the application pool that you want to configure. The variable uint is an unsigned integer, which specifies the amount of private memory (in kilobytes) after which you want the application pool to recycle. For example, to configure an application pool named Marketing to recycle after it uses two thousand kilobytes of private memory, type the following at the command prompt, and then press Enter:
appcmd set config /section:applicationPools /[name='Marketing'].recycling.periodicRestart.privateMemory:2000
For more information about Appcmd.exe, see Appcmd.exe (IIS 7).
Configuration
The procedure in this topic affects the following configuration elements:
privateMemory attribute of the <periodicRestart> element under <recycling>
For more information about IIS 7 configuration, see IIS 7.0: IIS Settings Schema on MSDN.
WMI
Use the following WMI classes, methods, or properties to perform this procedure:
- ApplicationPool.Recycling.PeriodicRestart.PrivateMemory Recycling Settings for an Application Pool (IIS 7)
Managing Application Pools in IIS 7 | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc725749%28v%3Dws.10%29 | 2019-09-15T10:16:00 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Glossary
A | B | C | D | E | F | G | H | I | J | K | L | M |
N | O | P | Q | R | S | T | U | V | W | X | Y | Z
A
About box
A dialog box providing general program information, such as version identification, copyright, licensing agreements, and ways to access technical support.
above the fold."
access key.
active monitor
The monitor where the active program is running.
address bar
A navigational element, usually appearing at the top of a window, that displays, and allows users to change, their current location. See also: breadcrumb bar.
affordance
Visual properties of an object that indicate how it can be used or acted on. For example, command buttons have the visual appearance of real-world buttons, suggesting pushing or clicking.
application
A program used to perform a related set of user tasks; often relatively complex and sophisticated. See also: program.
application key
A keyboard key with a context menu graphic on it. This key is used to display the context menu for the selected item.
application menu
A control that presents a menu of commands that involve doing something to or with a document or workspace, such as file-related commands.
Displays the context menu for the current selection (the same as pressing Shift+F10).
application theming
Using related visual techniques, such as customized controls, to create a unique look or branding for an application.
aspect ratio
An expression of the relation between the width of an object and its height. For example, high definition television uses a 16:9 aspect ratio.
auto-complete
A type of list used in text boxes and editable drop-down lists in which the likely input can be extrapolated and populated automatically from having been entered previously. Users type a minimal amount of text to populate the auto-complete list.
auto-exit
A text box in which the input focus automatically moves to the next related text box as soon as a user types the last character.
B
balloon
A common Windows control that informs users of a non-critical problem or special condition.
breadcrumb bar.
C
check box
A common Windows control that allows users to decide between clearly differing choices, such as toggling an option on or off.
chevron
A small control or button that indicates there are more items than can be displayed in the allotted space. Users click the chevron to see the additional items.
child window
A window, such as a control or pane, that is contained completely within another window, referred to as the parent window. See also: parent window, owned window.
cleared
In a check box, indicates that the option is not set. See also: selected, mixed state.
combo box
A common Windows control that combines the characteristics of a drop-down list or standard list box, and an editable text box. See also: list box, drop-down list.
command area
The area in a window where the commit buttons are located. Typically, dialog boxes and wizards have a command area. See also: commit button.
command button
A common Windows control that allows users to initiate an action immediately.
command link
A control used to make a choice among a set of mutually exclusive, related choices. In their normal state, command links have a lightweight appearance similar to hyperlinks, but their behavior is more similar to command buttons.
commit button
A command button used to commit to a task, proceed to the next step in a multi-step task, or cancel a task. See also: command area.
commit page
A type of wizard page in which users commit to performing the task. After doing so, the task cannot be undone by clicking Back or Cancel buttons.
completion page
A wizard page used to indicate the end of a wizard. Sometimes used instead of Congratulations pages. See also: congratulations page.
congratulations page
A wizard page used to indicate the end of a wizard. These pages are no longer recommended. Wizards conclude more efficiently with a commit page or, if necessary, a follow-up or completion page. See also: commit page, completion page, follow-up page.
Consent UI
A dialog box used by User Account Control (UAC) that allows protected administrators to elevate their privileges temporarily.
constraint
In controls that involve user input, such as text boxes, input constraints are a valuable way to prevent errors. For example, if the only valid input for a particular control is numeric, the control can use appropriate value constraints to enforce this requirement.
content area
The portion of UI surfaces, such as dialog boxes, control panel items, and wizards, devoted to presenting options, providing information, and describing controls. Distinguished from the command area, task pane, and navigation area.
contextual tab
A tab containing a collection of commands that are relevant only when the user has selected a particular object type. See also: Ribbon.
Control Panel
A Windows program that collects and displays for users the system-level features of the computer, including hardware and software setup and configuration. From Control Panel, users can click individual items to configure system-level features and perform related tasks. See also: control panel item.
control panel item
An individual feature available from Control Panel. For example, Programs and Ease of Access are two control panel items.
Credential UI
A dialog box used by User Account Control (UAC) that allows standard users to request temporary elevation of their privileges.
critical
The highest degree of severity. For example, in error and warning messages, critical circumstances might involve data loss, loss of privacy, or loss of system integrity.
custom icon
A pictorial representation unique to a program (as opposed to a Windows system icon).
custom visuals
Graphics, animations, icons, and other visual elements specially developed for a program.
D
default command button or link
The command button or link that is invoked when users press the Enter key. The default command button or link is assigned by the developer, but any command button or link becomes the default when users tab to it.
default monitor
The monitor with the Start menu, taskbar, and notification area.
delayed commit model
The commit model used by control panel item spoke pages where changes aren't made until explicitly committed by users clicking a commit button. Thus, users can abandon a task, navigating away using the Back button, Close, or the Address bar. See also: immediate commit model.
desktop
The onscreen work area provided by Windows, analogous to a physical desktop. See also: work area.
destructive command
An action that has a widespread effect and cannot be undone easily, or is not immediately noticeable.
details pane
The pane at the bottom of a Windows Explorer window that displays details (if any) about the selected items; otherwise, it displays details about the folder. For example, Windows Photo Gallery displays the picture name, file type, date taken, tags, rating, dimensions, and file size. See also: preview pane.
dialog box
A secondary window that allows users to perform a command, asks users a question, or provides users with information or progress feedback.
dialog box launcher
In a ribbon, a button at the bottom of some groups that opens a dialog box with features related to the group. See also: Ribbon.
dialog unit
A dialog unit (DLU) is the device-independent measure to use for layout based on the current system font.
direct manipulation
Direct interaction between the user and the objects in the UI (such as icons, controls, and navigational elements). The mouse and touch are common methods of direct manipulation.
docked window
A window that appears at a fixed location on the edge of its owner window. See also: floating window.
drop-down arrow
The arrow associated with drop-down lists, combo boxes, split buttons, and menu buttons, indicating that users can view the associated list by clicking the arrow.
drop-down list
A common Windows control that allows users to select among a list of mutually exclusive values. Unlike a list box, this list of available choices is normally hidden.
E
effective resolution.
elevated administrator
In User Account Control, elevated administrators have their administrator privileges. Without elevating, administrators run in their least-privileged state. The Consent UI dialog is used to elevate administrators to elevated status only when necessary. See also: protected administrator, standard user.
enhanced tooltip.
error
A state in which a problem has occurred. See also: warning.
expandable headings
A progressive disclosure chevron pattern where a heading can be expanded or collapsed to reveal or hide a group of items. See also: progressive disclosure.
extended selection
In list views and list boxes, a multiple selection mode where selection of a single item can be extended by dragging or with Shift+click or Ctrl+click to select groups of contiguous or non-adjacent values, respectively. See also: multiple selection.
F
flick
A quick, straight stroke of a finger or pen on a screen. A flick is recognized as a gesture, and interpreted as a navigation or an editing command.
floating window
A window that can appear anywhere on the screen the user wants. See also: docked window.
flyout
A popup window that temporarily shows more information. On the Windows desktop, flyouts are displayed by clicking on a gadget, and dismissed by clicking anywhere outside the flyout. You can use flyouts in both the docked and floating states.
follow-up page
A wizard page used to present related tasks that users are likely to do as follow-up. Sometimes used instead of congratulations pages.
font
A set of attributes for text characters.
full screen
A maximized window that does not have a frame.
G
gadget
A simple mini-application hosted on the user's desktop. See also: Sidebar.
gallery
A list of commands or options presented graphically. A results-based gallery illustrates the effect of the commands or options instead of the commands themselves. May be labeled or grouped. For example, formatting options can be presented in a thumbnail gallery.
gesture
A quick movement of a finger or pen on a screen that the computer interprets as a command, rather than as a mouse movement, writing, or drawing.
getting started page
An optional wizard page that outlines prerequisites for running the wizard successfully or explains the purpose of the wizard.
glass
A window frame option characterized by translucence, helping users focus on content and functionality rather than the interface surrounding it.
glyph
A generic term used to refer to any graph or symbolic image. Arrows, chevrons, and bullets are glyphs commonly used in Windows.
group box
A common Windows control that shows relationships among a set of related controls.
H
handwriting recognition.
high-contrast mode
A special display setting that provides extreme contrast for foreground and background visual elements (either black on white or white on black). Particularly helpful for accessibility.
hub page
In control panel items, a hub page presents high-level choices, such as the most commonly used tasks (as with task-based hub pages) or the available objects (as with object-based hub pages). Users can navigate to spoke pages to perform specific tasks. See also: spoke page.
hybrid hub page
In control panel items, a hybrid hub page is a hub page that also has some properties or commands directly on it. Hybrid hub pages are strongly recommended when users are most likely to use the control panel item to access those properties and commands.
I
immediate commit model
The commit model used by hybrid hub pages where changes take effect as soon as users make them. Commit buttons aren't used in this model. See also: delayed commit model.
in-place message
A message that appears in the context of the current UI surface, instead of a separate window. Unlike separate windows, in-place messages require either available screen space or dynamic layout.
indirect dialog box
A dialog box displayed out of context, either as an indirect result of a task or as the result of a problem with a system or background process.
inductive user interface
A UI that breaks a complex task down into simple, easily explained, clearly stated steps with a clear purpose.
infotip.
ink
The raw output for a pen. This digital ink can be kept just as written, or it can be converted to text using handwriting recognition software.
inline
Placement of links or messages directly in the context of its related UI. For example, an inline link occurs within other text instead of separately.
input focus
The location where the user is currently directing input. Note that just because a location in the UI is highlighted does not necessarily mean this location has input focus.
instance
A program session. For example, Windows Internet Explorer allows users to run multiple instances of the program because users can have several independent sessions running at a time. Settings can be saved across program sessions. See also: persistence.
J
K
keytip
In a ribbon, the mechanism used to display access keys. The access keys appear in the form of a small tip over each command or group, as opposed to the underlined letters typically used to display access keys. See also: access key.
L
landscape mode
A presentation option that orients an object to be wider than it is tall. See also: portrait mode.
least-privilege user account
A user account that normally runs with minimal privileges. See also: User Account Control.
list box
A common Windows control that allows users to select from a set of values presented in a list, which, unlike a drop-down list, is always visible. Supports single or multiple selections.
list view
A common Windows control that allows users to view and interact with a collection of data objects, using either single selection or multiple selection.
live preview
A preview technique that shows the effect of a command immediately on selection or hover without the user commiting the action. For example, formatting options such as themes, fonts, and colors benefit from live previews by showing users the effect with minimal effort.
localization
The process of adapting software for different countries, languages, cultures, or markets.
log file
A file-based repository for information of various kinds about activity on a computer system. Administrators often consult log files; ordinary users generally do not.
M
main instruction
Prominently displayed text that concisely explains what to do in the window or page. The instruction should be a specific statement, imperative direction, or question. Good main instructions communicate the user's objective rather than focusing just on manipulating the UI.
managed environment
A networked computer environment managed by an IT department or third-party provider, instead of by individual users. Administrators may optimize performance and apply operating system and application updates, among other tasks.
manipulation
A type of touch interaction in which input corresponds directly to how the object being touched would react naturally to the action in the real world.
maximize
To display a window at its largest size. See also: minimize, restored window.
A list of commands or options available to users in the current context.
message box
A secondary window that is displayed to inform a user about a particular condition.
mini-toolbar
A contextual toolbar displayed on hover.
minimize
To hide a window. See also: maximize, restored window.
mixed state
For check boxes that apply to a group of items, a mixed state indicates that some of the items are selected and others are cleared.
modal
Restrictive or limited interaction due to operating in a mode. Modal often describes a secondary window that restricts a user's interaction with the owner window. See also: modeless.
modeless
Non-restrictive or non-limited interaction. Modeless often describes a secondary window that does not restrict a user's interaction with the owner window. See also: modal.
multiple selection
The ability for users to choose more than one object in a list or tree.
N
non-critical system event
A type of system event that does not require immediate attention, often pertaining to system status. See also: critical.
notification
Information of a non-critical nature that is displayed briefly to the user; a notification takes the form of a balloon from an icon in the notification area of the taskbar.
O
opt in
The ability for users to select optional features explicitly. Less intrusive to users than opt-out, especially for privacy and marketing related features, because there is no presumption of users' wishes. See also: opt out, options.
opt out
The ability for users to remove features they don't want by clearing their selection. More intrusive to users than opt-in, especially for privacy and marketing related features, because there is an assumption of users' wishes. See also: opt in, options.
options
Choices available to users for customizing a program. For example, an Options dialog box allows users to view and change program options. See also: properties.
out-of-context UI
Any UI displayed in a pop-up window that isn't directly related to the user's current activity. For example, notifications and the Consent UI for User Access Control are out-of-context UI.
owned window
A secondary window used to perform an auxiliary task. It is not a top-level window (so it isn't displayed on the taskbar); rather, it is "owned" by its owner window. For example, most dialog boxes are owned windows. See also: child window, owner window.
owner control
The source of a tip, balloon, or flyout. For example, a text box that has input constraints might display a balloon to let the user know of these limitations. In this case, the text box is considered the owner control.
owner window
A window from which an owned window originates. Appears beneath the owned window in Z order. See also: owned window, parent window, Z order.
P
page
A basic unit of navigation for task-based UI, such as wizards, property sheets, control panel items, and Web sites. Users perform tasks by navigating from page to page within a single host window. See also: page flow, window.
page flow
A collection of pages in which users perform a task. See also: page, task, wizard, Control Panel.
page space control
Allows users to view and interact with a hierarchically arranged collection of objects. Page space controls are like tree controls, but they have a slightly different visual appearance. They are used primarily by Windows Explorer.
palette window
A modeless secondary window that displays a toolbar or other choices, such as colors, patterns, fonts, or font attributes.
pan.
pane.
parent window
The container of child windows (such as controls or panes). See also: owner window.
pen
A stylus used for pointing, gestures, simple text entry, and free-form handwriting. Pens have a fine, smooth tip that supports precise pointing, writing, or drawing in ink. They may also have an optional pen button (used to perform right-clicks) and eraser (used to erase ink).
persistence
The principle that the state or properties of an object is automatically preserved.
personalization
Customizing a core experience that is crucial to the user's personal identification with a program. By contrast, ordinary options and properties aren't crucial to the user's personal identification with a program.
personas
Detailed descriptions of imaginary people. Personas are constructed out of well-understood, highly specified data about real people.
physical resolution
The horizontal and vertical pixels that can be displayed by a computer monitor's hardware.
pop-up group button
In a ribbon, a menu button that consolidates all the commands and options within a group. Used to display ribbons in small spaces.
portrait mode
A presentation option that orients an object to be taller than it is wide. See also: landscape mode.
preferences
Don't use. Use options or properties instead.
preview
A representation of what users will see when they select an option. Previews can be displayed statically as part of the option, or upon request with a Preview or Apply button.
preview pane
A window pane used to show previews and other data about selected objects.
primary command
A central action that fulfills the primary purpose of a window. For example, Print is a primary command for a Print dialog box. See also: secondary command.
primary toolbar
A collection of commands designed to be comprehensive enough to preclude the use of a menu bar. See also: supplemental toolbar.
primary window
A primary window has no owner window and is displayed on the taskbar. Main program windows are always primary windows. See also: secondary window.
program
A sequence of instructions that can be executed by a computer. Common types of programs include productivity applications, consumer applications, games, kiosks, and utilities.
progress bar
A common Windows control that displays the progress of a particular operation as a graphical bar.
progressive disclosure
A technique of allowing users to display less commonly used information (typically, data, options, or commands) as needed. For example, if more options are sometimes needed, users can expose them in context by clicking a chevron button.
progressive escalation.
prompt
A label or short instruction placed inside a text box or editable drop-down list as its default value. Unlike static text, prompts disappear once users type something into the control or it gets input focus.
properties.
protected administrator
In User Account Control, an administrator running in their least-privileged state. See also: elevated administrator, standard user.
Q
Quick Access Toolbar
A small, customizable toolbar that displays frequently used commands.
Quick Launch bar
A direct access point on the Windows desktop, located next to the Start button, populated with icons for programs of the user's choosing. Removed in Windows 7.
R
radio button
A common Windows control that allow users to select from among a set of mutually exclusive, related choices.
relative pixels
A device-independent metric that is the same as a physical pixel at 96 dpi (dots per inch), but proportionately scaled in other dpis. See also: effective resolution.
restored window
A visible, partial-screen window, neither maximized nor minimized. See also: maximize, minimize.
ribbon
A tabbed container of commands and options, located at the top of a window or work area and having a fixed location and height. Ribbons usually have an Application menu and Quick Access Toolbar. See also: menu, toolbar.
risky action
A quality of user action that can have negative consequences and can't be easily undone. Risky actions include actions that can harm the security of a computer, affect access to a computer, or result in unintended loss of data.
S
scan path
The route users are likely to take as they scan to locate things in a window. Particularly important if users are not engaged in immersive reading of text.
screen reader
An assistive technology that enables users with visual impairments to interpret and navigate a user interface by transforming visuals to audio. Thus, text, controls, menus, toolbars, graphics, and other screen elements are spoken by the computerized voice of the screen reader.
scroll bar
A control that allows users to scroll the content of a window, either vertically or horizontally.
secondary command
A peripheral action that, while helpful, isn't essential to the purpose of the window. For example, Find Printer or Install Printer are secondary commands for a Print dialog box. See also: primary command.
secondary window
A window that has an owner window and consequently is not displayed on the taskbar. See also: primary window.
secure desktop
A protected environment that is isolated from programs running on the system, used to increase the security of highly secure tasks such as log on, password changes, and UAC Elevation UI. See also: User Account Control.
security shield
A shield icon used for security branding.
selected
Chosen by the user in order to perform an operation; highlighted.
sentence-style capitalization
For sentence-style capitalization:
- Always capitalize the first word of a new sentence.
- Don't capitalize the word following a colon unless the word is a proper noun, or the text following the colon is a complete sentence.
- Don't capitalize the word following an em-dash unless it is a proper noun, even if the text following the dash is a complete sentence.
- Always capitalize the first word of a new sentence following any end punctuation. Rewrite sentences that start with a case-sensitive lowercase word.
settings
Specific values that have been chosen (either by the user or by default) to configure a program or object.
shortcut key.
Sidebar
A region on the side of the user's desktop used to display gadgets in Windows Vista. See also: gadget.
single-point error
A user input error relating to a single control. For example, entering an incorrect credit card number is a single-point error, whereas an incorrect logon is a double-point error, because either the user name or password could be the problem.
slider
A common Windows control that displays and sets a value from a continuous range of possible values, such as brightness or volume.
special experience
In programs, special experiences relate to the primary function of the program, something unique about the program, or otherwise make an emotional connection to users. For example, playing an audio or video is a special experience for a media player.
spin box
The combination of a text box and its associated spin control. Users click the up or down arrow of a spin box to increase or decrease a numeric value. Unlike slider controls, which are used for relative quantities, spin boxes are used only for exact, known numeric values.
spin control
A control that users click to change values. Spin controls use up and down arrows to increase or decrease the value.
splash screen
Transitional screen image that appears as a program is in the process of launching.
split button
A bipartite command button that includes a small button with a downward pointing triangle on the rightmost portion of the main button. Users click the triangle to display variations of a command in a drop-down menu. See also: command button.
spoke page.
standard user
In User Account Control, standard users have the least privileges on the computer, and must request permission from an administrator on the computer in order to perform administrative tasks. In contrast with protected administrators, standard users can't elevate themselves. See also: elevated administrator, protected administrator.
static text
User interface text that is not part of an interactive control. Includes labels, main instructions, supplemental instructions, and supplemental explanations.
supplemental instructions
An optional form of user interface text that adds information, detail, or context to the main instruction. See also: main instruction.
supplemental toolbar
A collection of commands designed to work in conjunction with a menu bar. See also: primary toolbar.
system color.
system menu
A collection of basic window commands, such as move, size, maximize, minimize, and close, available from the program icon on the title bar, or by right-clicking a taskbar button.
T
tabbed dialog
A dialog box that contains related information on separate labeled pages (tabs). Unlike property sheets, which also often contain tabs, tabbed dialog boxes are not used to display an object's properties. See also: properties.
task
A unit of user activity, often represented by a single UI surface (such as a dialog box), or a sequence of pages (such as a wizard).
task dialog
A dialog box implemented using the task dialog API. Requires Windows Vista or later.
task flow
A sequence of pages that helps users perform a task, either in a wizard, explorer, or browser.
task link
A link used to initiate a task, in contrast to links that navigate to other pages or windows, choose options, or display Help.
task pane.
taskbar
The access point for running programs that have a desktop presence. Users interact with controls called taskbar buttons to show, hide, and minimize program windows.
text box
A control specifically designed for textual input; allows users to view, enter, or edit text or numbers.
theme.
title-style capitalization
For title-style capitalization:
- Capitalize all nouns, verbs (including is and other forms of to be), adverbs (including than and when), adjectives (including this and that), and pronouns (including its).
- Capitalize the first and last words, regardless of their parts of speech (for example, The Text to Look For).
- Capitalize prepositions that are part of a verb phrase (for example, Backing Up Your Disk).
- Don't capitalize articles (a, an, the), unless the article is the first word in the title.
- Don't capitalize coordinate conjunctions (and, but, for, nor, or), unless the conjunction is the first word in the title.
- Don't capitalize prepositions of four or fewer letters, unless the preposition is the first word in the title.
- Don't capitalize to in an infinitive phrase (for example, How to Format Your Hard Disk), unless the phrase is the first word in the title.
- Capitalize the second word in compound words if it is a noun or proper adjective, an "e-word," or the words have equal weight (for example, E-Commerce, Cross-Reference, Pre-Microsoft Software, Read/Write Access, Run-Time). Do not capitalize the second word if it is another part of speech, such as a preposition or other minor word (for example, Add-in, How-to, Take-off).
- Capitalize user interface and application programming interface terms that you would not ordinarily capitalize, unless they are case-sensitive (for example, The fdisk Command). Follow the traditional capitalization of keywords and other special terms in programming languages (for example, The printf Function, Using the EVEN and ALIGN Directives).
- Capitalize only the first word of each column heading.
toolbar
A graphical presentation of commands optimized for efficient access.
tooltip
A small pop-up window that labels the unlabeled control being pointed to, such as unlabeled toolbar controls or command buttons.
touch
Direct interaction with a computer display using a finger.
U
usability study.
User Account Control.
User Account Control shield
A shield icon used to indicate that a command or option needs elevation for User Account Control.
user input problem
An error resulting from user input. User input problems are usually non-critical because they must be corrected before proceeding.
user scenario
A description of a user goal, problem, or task in a specific set of circumstances.
V
W
warning.
welcome page
The first page of a wizard, used to explain the purpose of the wizard. Welcome pages are no longer recommended. Users have a more efficient experience without such pages.
window
A rectangular area on a computer screen in which programs and content appear. A window can be moved, resized, minimized, or closed; it can overlap other windows. Docking a child window converts it to a pane. See also: pane.
Windows logo key
A modifier key with the Windows logo on it. This key is used for a number of Windows shortcuts, and is reserved for Windows use. For example, pressing the Windows logo key displays or hides the Windows Start menu.
wireframes
A UI mockup that shows a window's functionality and layout, but not its finished appearance. A wireframe uses only line segments, controls, and text, without color, complex graphics, or the use of themes.
wizard
A sequence of pages that guides users through a multi-step, infrequently performed task. Effective wizards reduce the knowledge required to perform the task compared to alternative UIs.
work area
The onscreen area where users can perform their work, as well as store programs, documents, and their shortcuts. See also: desktop.
X
Y
Z
Z order
The layered relationship of windows on the display. | https://docs.microsoft.com/en-us/windows/win32/uxguide/glossary?redirectedfrom=MSDN | 2019-09-15T10:23:44 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
dis — Disassembler for Python bytecode¶.
Changed RETURN_VALUE
(The “2” is a line number)., asynchronous generator, coroutine,().
Changed in version 3.7: This can now handle coroutine and asynchronous generator objects.
Example:
>>> bytecode = dis.Bytecode(myfunc) >>> for instr in bytecode: ... print(instr.opname) ... LOAD_GLOBAL LOAD_FAST CALL_FUNCTION RETURN_VALUE, depth=None)¶
Disassemble the x object. x can denote either a module, a class, a method, a function, a generator, an asynchronous generator, a. See Objects/lnotab_notes.txt for the
co_lnotabformat and how to decode it.
Changed in version 3.6: Line numbers can be decreasing. Before, they were always increasing.
dis.
findlabels(code)¶
Detect all offsets in the code object code which are jump targets, and return a list of these offsets.
dis.
stack_effect(opcode, oparg=None, *, jump=None)¶
Compute the stack effect of opcode with argument oparg.
If the code has a jump target and jump is
True,
stack_effect()will return the stack effect of jumping. If jump is
False, it will return the stack effect of not jumping. And if jump is
None(default), it will return the maximal stack effect of both cases.
New in version 3.4.
Changed in version 3.8: Added jump parameter.
ROT_FOUR¶
Lifts second, third and forth stack items one position up, moves top down to position four.
New in version 3.8.
DUP_TOP_TWO¶
Duplicates the two references on top of the stack, leaving them in the same order.
New in version 3.2.__.
New in version 3.5.
GET_AITER¶
Implements
TOS = TOS.__aiter__().
New in version 3.5.
Changed in version 3.7: Returning awaitable objects from
__aiter__is no longer supported.
GET_ANEXT¶
Implements
PUSH(get_awaitable(TOS.__anext__())). See
GET_AWAITABLEfor details about
get_awaitable
New in version 3.5.
END_ASYNC_FOR¶
Terminates an
async forloop. Handles an exception raised when awaiting a next item. If TOS is
StopAsyncIterationpop 7 values from the stack and restore the exception state using the second three of them. Otherwise re-raise the exception using the three values from the stack. An exception handler block is removed from the block stack.
New in version 3.8.
BEFORE_ASYNC_WITH¶
Resolves
__aenter__and
__aexit__from the object on top of the stack. Pushes
__aexit__and result of
__aenter__()to the stack.
New in version 3.5.
Miscellaneous opcodes
PRINT_EXPR¶
Implements the expression statement for the interactive mode. TOS is removed from the stack and printed. In non-interactive mode, an expression statement is terminated with
POP_TOP.
MAP_ADD(i)¶
Calls
dict.__setitem__(TOS1[-i], TOS1, TOS). Used to implement dict comprehensions.
New in version 3.1.
Changed in version 3.8: Map value is TOS and map key is TOS1. Before, those were reversed.
For all of the
SET_ADD,
LIST_APPEND and
MAP_ADD
instructions, while the added value or key/value pair is popped off, the
container object remains on the stack so that it is available for further
iterations of the loop.
SETUP_ANNOTATIONS¶
Checks whether
__annotations__is defined in
locals(), if not it is set up to an empty
dict. This opcode is only emitted if a class or module body contains variable annotations statically.
New in version 3.6.
trystatements,.
POP_FINALLY(preserve_tos)¶
Cleans up the value stack and the block stack. If preserve_tos is not
0TOS first is popped from the stack and pushed on the stack after performing other stack operations:
If TOS is
NULLor an integer (pushed by
BEGIN_FINALLYor
CALL_FINALLY) it is popped from the stack.
If TOS is an exception type (pushed when an exception has been raised) 6 values are popped from the stack, the last three popped values are used to restore the exception state. An exception handler block is removed from the block stack.
It is similar to
END_FINALLY, but doesn’t change the bytecode counter nor raise an exception. Used for implementing
break,
continueand
returnin the
finallyblock.
New in version 3.8.
BEGIN_FINALLY¶
Pushes
NULLonto the stack for using it in
END_FINALLY,
POP_FINALLY,
WITH_CLEANUP_STARTand
WITH_CLEANUP_FINISH. Starts the
finallyblock.
New in version 3.8.
END_FINALLY¶
Terminates a
finallyclause. The interpreter recalls whether the exception has to be re-raised or execution has to be continued depending on the value of TOS.
If TOS is
NULL(pushed by
BEGIN_FINALLY) continue from the next instruction. TOS is popped.
If TOS is an integer (pushed by
CALL_FINALLY), sets the bytecode counter to TOS. TOS is popped.
If TOS is an exception type (pushed when an exception has been raised) 6 values are popped from the stack, the first three popped values are used to re-raise the exception and the last three popped values are used to restore the exception state. An exception handler block is removed from the block stack._START.).
New in version 3.2.
WITH_CLEANUP_START¶
Starts cleaning up the stack when a
withstatement block exits.
At the top of the stack are either
NULL(pushed by
BEGIN_FINALLY) or 6 values pushed if an exception has been raised in the with block. Below is the context manager’s
__exit__()or
__aexit__()bound method.
If TOS is
NULL, calls
SECOND(None, None, None), removes the function from the stack, leaving TOS, and pushes
Noneto the stack. Otherwise calls
SEVENTH(TOP, SECOND, THIRD), shifts the bottom 3 values of the stack down, replaces the empty spot with
NULLand pushes TOS. Finally pushes the result of the call.
WITH_CLEANUP_FINISH¶
Finishes cleaning up the stack when a
withstatement block exits.
TOS is result of
__exit__()or
__aexit__()function call pushed by
WITH_CLEANUP_START. SECOND is
Noneor an exception type (pushed when an exception has been raised).
Pops two values from the stack. If SECOND is not None and TOS is true unwinds the EXCEPT_HANDLER block which was created when the exception was caught and pushes
NULLto the stack.
All of the following opcodes use their arguments. of.
POP_JUMP_IF_TRUE(target)¶
If TOS is true, sets the bytecode counter to target. TOS is popped.
New in version 3.1.
POP_JUMP_IF_FALSE(target)¶
If TOS is false, sets the bytecode counter to target. TOS is popped.
New in version 3.1.
JUMP_IF_TRUE_OR_POP(target)¶
If TOS is true, sets the bytecode counter to target and leaves TOS on the stack. Otherwise (TOS is false), TOS is popped.
New in version 3.1.
JUMP_IF_FALSE_OR_POP(target)¶
If TOS is false, sets the bytecode counter to target and leaves TOS on the stack. Otherwise (TOS is true), TOS is popped.
New in version 3.1._FINALLY(delta)¶
Pushes a try block from a try-finally or try-except clause onto the block stack. delta points to the finally block or the first except block.
CALL_FINALLY(delta)¶
Pushes the address of the next instruction onto the stack and increments bytecode counter by delta. Used for calling the finally block as a “subroutine”.
New in version 3.8..
New in version 3.4.
DELETE_DEREF(i)¶
Empties the cell contained in slot i of the cell and free variable storage. Used by the
delstatement.
New in version 3.2.
RAISE_VARARGS(argc)¶
Raises an exception using one of the 3 forms of the
raisestatement, depending on the value of argc:
0:
raise(re-raise previous exception)
1:
raise TOS(raise exception instance or type at
TOS)
2:
raise TOS1 from TOS(raise exception instance or type at
TOS1with
__cause__set to
TOS)
CALL_FUNCTION(argc)¶
Calls a callable object with positional arguments. argc indicates the number of positional arguments. The top of the stack contains positional arguments, with the right-most argument on top. Below the arguments is a callable object to call.
CALL_FUNCTIONpops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object.
Changed in version 3.6: This opcode is used only for calls with positional arguments.
CALL_FUNCTION_KW(argc)¶
Calls a callable object with positional (if any) and keyword arguments. argc indicates the total number of positional and keyword arguments. The top element on the stack contains a tuple of keyword argument names. Below that are keyword arguments in the order corresponding to the tuple. Below that are positional arguments, with the right-most parameter on top. Below the arguments is a callable object to call.
CALL_FUNCTION_KWpops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object.
Changed in version 3.6: Keyword arguments are packed in a tuple instead of a dictionary, argc indicates the total number of arguments.
CALL_FUNCTION_EX(flags)¶
Calls a callable object with variable set of positional and keyword arguments. If the lowest bit of flags is set, the top of the stack contains a mapping object containing additional keyword arguments. Below that is an iterable object containing positional arguments and a callable object to call.
BUILD_MAP_UNPACK_WITH_CALLand
BUILD_TUPLE_UNPACK_WITH_CALLcan be used for merging multiple mapping objects and iterables containing arguments. Before the callable is called, the mapping object and iterable object are each “unpacked” and their contents passed in as keyword and positional arguments respectively.
CALL_FUNCTION_EXpops all arguments and the callable object off the stack, calls the callable object with those arguments, and pushes the return value returned by the callable object. values for positional-only and positional-or-keyword parametersS)
BUILD_SLICE(argc)¶.
EXTENDED_ARG(ext)¶
Prefixes any opcode which has an argument too big to fit into the default one byte. ext holds an additional byte which act as higher bits in the argument. For each opcode, at most three prefixal
EXTENDED_ARGare allowed, forming an argument from two-byte to four-byte.
FORMAT_VALUE(flags)¶
Used for implementing formatted literal strings (f-strings). Pops an optional fmt_spec from the stack, then a required value. flags is interpreted as follows:
(flags & 0x03) == 0x00: value is formatted as-is.
(flags & 0x03) == 0x01: call
str()on value before formatting it.
(flags & 0x03) == 0x02: call
repr()on value before formatting it.
(flags & 0x03) == 0x03: call
ascii()on value before formatting it.
(flags & 0x04) == 0x04: pop fmt_spec from the stack and use it, else use an empty fmt_spec.
Formatting is performed using
PyObject_Format(). The result is pushed on the stack.
New in version 3.6.). | https://docs.python.org/3.8/library/dis.html | 2019-09-15T10:16:07 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.python.org |
Modify the interface language
The Bitnami Dolibarr Stack uses the English (US) language pack by default. However, translations also exist for other languages. To change the interface language, follow these steps:
- Log in to the application as an administrator.
- Select the “Setup -> Display” menu item.
- Click the “Modify” button.
Select a new default language for the “Default language to use” field.
Click “Save” to save your changes. | https://docs.bitnami.com/oracle/apps/dolibarr/configuration/change-language/ | 2019-09-15T10:42:26 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.bitnami.com |
Reference Implementation
The reference implementation uses MongoDB as the storage for the UserDetails. It uses Morphia to abstract the UserDetailsDAO that is stored.
Security Properties file
Create a
security.properties file at the default location
<XAP root>/config/security/security.properties
It should include a SecurityManager implementation class and other properties configurable by the implementation.
com.gs.security.security-manager.class = com.mycompany.MyCustomSecurityManager com.mycompany.mongodb.host = localhost com.mycompany.mongodb.port = 27017 com.mycompany.mongodb.databsebName = myDB
Implementing SecurityManager
The implementation utilizes the properties file to configure the connection to MongoDB.
Calls to
authenticate are triggered once per user login operation. The authentication
is done against the user details stored in the DB.
public static class MyCustomSecurityManager implements SecurityManager { private MongoClient mongo; private UserDetailsDAO userDetailsDAO; public void init(Properties properties) throws SecurityException { // Access a property form the properties file to configure this implementation String host = properties.getProperty("com.mycompany.mongodb.host"); String port = properties.getProperty("com.mycompany.mongodb.port"); String dbName = properties.getProperty("com.mycompany.mongodb.databsebName"); // Connect to the db, use org.mongodb.morphia.Morphia for mapping MongoClient mongo = new MongoClient(host, Integer.parseInt(port)); userDetailsDAO = new UserDetailsDAO(morphia, mongoClient, dbName); ... } public Authentication authenticate(UserDetails userDetails) throws AuthenticationException { UserDetailsDAO user = userDetailsDAO.findOne( userDetailsDAO.createIdQuery(userDetails.getUsername())); if (user == null) { throw new AuthenticationException("user not found"); } if (user.getPassword().equals(userDetails.getPassword())) { // Populate the returned details with the authorities extracted // from the DB (user.getAuthorities()) and return an Authentication // object with these details. UserDetails populatedDetails = new User(user.getUsername(), user.getPassword(), user.getAuthorities()); return new Authentication(populatedDetails); } else { throw new AuthenticationException("access denied"); } } public DirectoryManager createDirectoryManager(UserDetails userDetails) throws AuthenticationException, AccessDeniedException { return null; //ignore or throw DirectoryAccessDeniedException } public void close() { mongo.close(); } }
Packaging and Classpath
The most common scenario is for all services to share the same custom security. This is easily accomplished by placing the custom implementation classes in the
lib/optional/security directory.
You can use a different directory by configuring the
com.gigaspaces.lib.opt.security system property.
<XAP root>/lib/optional/security/my-custom-security.jar
Processing units may share a custom security implementation. In this case, the custom security jar can be placed under
pu-common.
<XAP root>/lib/optional/pu-common/my-pu-custom-security.jar
If each processing unit has its own custom security implementation, the custom security jar can be part of the processing unit distribution.
<XAP root>/deploy/hello-processor/lib/my-processor-custom-security.jar | https://docs.gigaspaces.com/xap/12.0/security/security-ref-impl.html | 2019-09-15T09:50:13 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.gigaspaces.com |
save Method (DOMDocument)
Saves an XML document to the specified location.
oXMLDOMDocument.save(destination);
Parameters
destination
An object. The object can represent a file name, an ASP
Response object, a
DOMDocument object, or a custom object that supports persistence. See Remarks for more information.
oXMLDOMDocument.save (destination)
Example
If your
DOMDocument contains unformatted XML, you might want to format it before saving. There is no flag in
DOMDocument that can be set to indent your XML. In this situation, you might want to use the SAX writer to produce the resulting XML.
HRESULT save( VARIANT destination);
Parameters
destination[in]
The type of object to save. This object can represent a file name, an ASP
Response object, an XML document object, or a custom object that supports persistence. See Remarks for more information.
C/C++ Return Values
S_OK
The value returned if successful.
XML_BAD_ENCODING
The value returned if the document contains a character that does not belong in the specified encoding. The character must use a numeric entity reference. For example, the Japanese Unicode character 20013 does not fit into the encoding Windows-1250 (the Central European alphabet) and therefore must be represented in markup as the numeric entity reference 中 or 中 This version of
save does not automatically convert characters to the numeric entity references.
E_INVALIDARG
The value returned if a string was provided, but it is not a valid file name.
E_ACCESSDENIED
The value returned if a save operation is not permitted.
E_OUTOFMEMORY
The value returned if the save operation must allocate buffers.
(Other values)
Any other file system error can be returned in the
save(string) case.
Example
BOOL DOMDocSaveLocation() { BOOL bResult = FALSE; IXMLDOMDocument *pIXMLDOMDocument = NULL; HRESULT hr; try { _variant_t varString = _T("D:\\sample.xml"); // Initialize pIXMLDOMDocument (create a DOMDocument). // Load document. hr = pIXMLDOMDocument->save(varString); if(SUCCEEDED(hr)) bResult = TRUE; } catch(...) { DisplayErrorToUser(); // Release the IXMLDOMDocument interface. } // Release the IXMLDOMDocument interface when finished with it. return bResult; }
Remarks
The behavior differs based on the object specified by the
objTarget parameter.
External entity references in
<DOCTYPE>,
<ENTITY>,
<NOTATION>, and XML namespace declarations are not changed; they point to the original document. A saved XML document might not load if the URLs are not accessible from the location in which you saved the document.
Character encoding is based on the encoding attribute in the XML declaration, such as
<?xml version="1.0" encoding="windows-1252"?>. When no encoding attribute is specified, the default setting is UTF-8.
Validation is not performed during
save, which can result in an invalid document that does not load again because of a specified document type definition (DTD).
This member is an extension of the Worldwide Web Consortium (W3C) Document Object Model (DOM).
Versioning
Implemented in: MSXML 3.0 and MSXML 6.0
See Also
Persistence and the DOM
IXMLDOMDocument-DOMDocument | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/ms753769(v=vs.85)?redirectedfrom=MSDN | 2019-09-15T09:46:58 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Battery Battery Battery Battery Battery Class
Definition
Provides information about a battery controller that is currently connected to the device. For more info, see Get battery information.
public : sealed class Battery
struct winrt::Windows::Devices::Power::Battery
public sealed class Battery
Public NotInheritable Class Battery
// This class does not provide a public constructor.
- Attributes
-
Windows 10 requirements
Remarks
In this context, device refers to the hardware that your app is running on. Battery controller refers to the electronics that interface between the physical battery and the operating system. A battery controller appears in Device Manager as a "Battery" under the Batteries node.
Depending on the device, it may be possible to remove the physical battery while the device remains running. For example, a laptop that's plugged into A/C power. In that case, if the battery controller were part of the laptop enclosure, you could potentially create a Battery object when no battery is connected to the device. However, if the battery controller resided on the physical battery, it would no longer be visible to the operating system and therefore you could not create a corresponding Battery object for an individual battery.
Properties
Methods
Events
See also
Feedback | https://docs.microsoft.com/en-us/uwp/api/Windows.Devices.Power.Battery?redirectedfrom=MSDN | 2019-09-15T10:25:29 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
ContainerShapes
This article will walk you through the functionality and the main features of the RadDiagramContainerShape.
Overview
RadDiagramContainerShape allows you to place multiple shapes into one container shape. RadDiagramContainerShapes are much like groups , a way to logically combine other shapes but add to this the capability to have a visual wrapper including a header. You can drag shapes in and out of a container shape at run-time and take advantage of its built-in auto-sizing logic that can extend the size of a container to wrap a shape. RadDiagram provides a visual feedback when a shape is being dragged over a RadDiagramContainerShape and even if part of the shape is outside the bounds of the container, the framework internally handles the drop and expands the size of the container to place the shape inside the content area of the container.
Fig.1 Illustration of the ContainerShape auto-sizing capabilities
A container can be connected and handled like other shapes.
RadDiagramContainerShape derives from the RadDiagramShapeBase class and that is why it exposes similar properties to those of the RadDiagramShape . To get familiar with the RadDiagramShape features and properties, please refer to the Shapes article.
Setting a header
RadDiagramContainerShape header is controlled via the Content property:
RadDiagramContainerShape container = new RadDiagramContainerShape(); container.Content = "Container header"; this.radDiagram1.Items.Add(container);
Dim container As New RadDiagramContainerShape() container.Content = "Container header" Me.RadDiagram1.Items.Add(container)
Fig.2 ContainerShape's content
Edit Mode
By default you can edit the header of the RadDiagramContainerShape out-of-the-box by double-clicking on the container or by hitting F2. If you'd like to disable the editing functionality, you can set the IsEditable property to false.
You can manually put the RadDiagramContainerShape in edit mode by setting its IsInEditMode property to true. This is the property that gets and sets the edit mode of the container.
Populating with data
The main purpose of the RadDiagramContainerShape is to allow you to drop shapes on it thus grouping them in one container. This is why dragging and dropping shapes onto the container is the main approach for populating its Items collection.
You can also populate it manually in code-behind:
RadDiagramShape shape = new RadDiagramShape() { Text = "Shape1", Shape = new RoundRectShape(4), BackColor = Color.LimeGreen }; shape.Position = new Telerik.Windows.Diagrams.Core.Point(100, 100); RadDiagramContainerShape containerShape = new RadDiagramContainerShape(); containerShape.Content = "Container header"; containerShape.Location = new Point(10,10); containerShape.DrawBorder = true; this.radDiagram1.Items.Add(containerShape); containerShape.Items.Add(shape);
Dim shape As New RadDiagramShape() With { _ .Text = "Shape1", _ .Shape = New RoundRectShape(4), _ .BackColor = Color.LimeGreen _ } shape.Position = New Telerik.Windows.Diagrams.Core.Point(100, 100) Dim containerShape As New RadDiagramContainerShape() containerShape.Content = "Container header" containerShape.Location = New Point(10, 10) containerShape.DrawBorder = True Me.RadDiagram1.Items.Add(containerShape) containerShape.Items.Add(shape)
Fig.3 RadDiagramContainerShape.Items
Container Bounds
You can get the bounds of the RadDiagramContainerShape through the Bounds property, which is of type Telerik.Windows.Diagrams.Core.Rect and it gets the width, height and location of the container’s bounds.
Collapsible ContainerShapes
By default, RadDiagramContainerShape is collapsible. It is controlled by the IsCollapsible property.
Fig.4 RadDiagramContainerShape.Items
Below you can find a list of all RadDiagramContainerShape members that are related to the collapsible feature of the shape:
IsCollapsible - a property of type bool that controls the collapsible state of a RadDiagramContainerShape.
IsCollapsed - a property of type bool that controls whether a collapsible RadDiagramContainerShape is currently collapsed.
IsCollapsedChanged - an event that is raised by a RadDiagramContainerShape to inform that the collapsed state of the container is changed.
Interaction
Below you can find a list of the interactions supported by the RadDiagramContainerShape:
Translation - you can translate the RadDiagramContainerShape along with its children.
Resizing - you can resize only the RadDiagramContainerShape without affecting its children size.
Cut and Copy - these clipboard operations work only on the RadDiagramContainerShape. The shapes inside the container won't be cut or copied. You can find more information about the clipboard operations supported in the RadDiagram in the Clipboard operations.
If you do wish to resize and cut or copy both the container and its children simultaneously, you can do so by dragging a selection rectangle around the container (instead of just clicking-selecting the container). This selection will contain both the container and the children thus allowing you to perform the aforementioned actions on all items at the same time.
Customize the ContainerShape Appearance
You can easily customize the visual appearance of the RadDiagramContainerShape by using the following properties:
BackColor - specifies the RadDiagramContainerShape background color.
BorderBrush - gets or sets the brush that specifies the RadDiagramContainerShape border color.
BorderThickness - gets or sets the width of the RadDiagramContainerShape outline.
RadDiagramContainerShape's appearance
container.BackColor = Color.Yellow; container.BorderThickness = new Padding(3); container.BorderBrush = new System.Drawing.SolidBrush(Color.Fuchsia);
container.BackColor = Color.Yellow container.BorderThickness = New System.Windows.Forms.Padding(3) container.BorderBrush = New System.Drawing.SolidBrush(Color.Fuchsia)
Fig.5 RadDiagramContainerShape's appearance
| https://docs.telerik.com/devtools/winforms/controls/diagram/diagram-items/containershapes | 2019-09-15T09:43:54 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['images/diagram-diagram-items-container-shapes001.gif',
'diagram-diagram-items-container-shapes 001'], dtype=object)
array(['images/diagram-diagram-items-container-shapes002.png',
'diagram-diagram-items-container-shapes 002'], dtype=object)
array(['images/diagram-diagram-items-container-shapes003.png',
'diagram-diagram-items-container-shapes 003'], dtype=object)
array(['images/diagram-diagram-items-container-shapes004.gif',
'diagram-diagram-items-container-shapes 004'], dtype=object)
array(['images/diagram-diagram-items-container-shapes005.png',
'diagram-diagram-items-container-shapes 005'], dtype=object)] | docs.telerik.com |
How can I change the number of products shown on an archive page?
There are two different ways to accomplish this.
Change It Everywhere
If you'd like to make a global setting that affects ALL archive pages then you can use the "show at most" setting under WordPress' Settings → Reading page.
This affects all archives on your entire site, including EDD Product archives, Blog archives, and any custom post archive you may have.
Change It In One Place
EDD provides a shortcode called [downloads] which renders a product archive, and accepts a number for how many you'd like to show. Here's a simple example:
[downloads number="15" columns="3"]
In-depth documentation on [downloads] is available here. | https://docs.easydigitaldownloads.com/article/990-how-can-i-change-the-number-of-products-shown-on-an-archive-page | 2019-09-15T09:45:23 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/55bf53c9e4b089486cad7fa0/file-oR95OUywGL.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
2.2.2.7.2 AMap (Allocation Map) Page
An AMap page contains an array of 496 bytes that is used to
track the space allocation within the data section that immediately follows the
AMap page. Each bit in the array maps to a block of 64 bytes in the data
section. Specifically, the first bit maps to the first 64 bytes of the data
section, the second bit maps to the next 64 bytes of data, and so on. AMap
pages map a data section that consists of 253,952 bytes (
496 * 8 * 64).
An AMap is allocated out of the data section and, therefore, it actually "maps itself". What this means is that the AMap actually occupies the first page of the data section and the first byte (that is, 8 bits) of the AMap is 0xFF, which indicates that the first 512 bytes are allocated for the AMap.
The first AMap of a PST file is located at absolute file offset 0x4400, and subsequent AMaps appear at intervals of 253,952 bytes thereafter. The following is the structural representation of an AMap page. | https://docs.microsoft.com/en-us/openspecs/office_file_formats/ms-pst/60466ef4-af15-49b6-8413-b3a72f0e9bdb?redirectedfrom=MSDN | 2019-09-15T10:29:56 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Elephant Setup Process
From CSLabsWiki
Revision as of 16:06, 5 November 2014 by Kimbalac (talk | contribs) (Literally just removed a link. Since I can login again, this page will be fixed soon!)
THIS PAGE IS A WIP
This is a guide for setting up Elephant, the CSLabs backup server.
Contents
Pre-Installation Process
Blanking HDD's
If the hard drives that you are installing into Elephant are not already blank, you will want to make sure to do so, and break down any remaining RAID arrays.
- Blank the first 4 megabytes of the HDD, to make the rest unreadable (replace * with appropriate drive letter)
dd if=/dev/zero of=/dev/sd* bs=1MB count=4
- Stop pre-existing software arrays using mdadm utility
mdadm -S /dev/md{*first array*..*last array*}
Installation
Setting Up the File System
- List all connected HDD's, and pipe the list to grep for output to terminal by searching for the key phrase 'sd-'
ls /dev | grep sd
- For each of these drives, use the fdisk formatting utility to initialize it.
fdisk /dev/sd*
- Create a new empty DOS partition table on sd*
o
- Add a new partition
n
- Designate the partition as primary
p
- Use the default values for number of partitions, first sector, last sector, and then write and quit.
- The above can be placed into a text file to more easily automate this process.
fdisk /dev/sd* o n p w
- The following command can be used to execute the text file:
fdisk /dev/sd* < *filename*
Setting Up Software RAID with MDADM
mdadm --create /dev/md* --level=raid* --raid-devices=* /dev/sd{*drive 1*..*drive 4*}1
- this is the array manager. Use man mdadm for more info. All *'s should be replaced with proper numbers, depending upon what raid level you want, how many devices you want in the array, etc.
mdadm --create /dev/md1 --level=raid1 --raid-devices=4 /dev/sd{c..f}1 mdadm --create /dev/md2 --level=raid1 --raid-devices=4 /dev/sd{g..j}1 mdadm --create /dev/md3 --level=raid1 --raid-devices=4 /dev/sd{k..n}1
- Your raid arrays (check with cat /proc/mdstat) may still show as resync=PENDING. Use this command to fix it: mdadm --readwrite /dev/md*
- for us, this leaves 2 spare drives outside of a RAID array. They will be setup as hot-swap drives.
- Now we'll set up the 3 RAID1 arrays into a single RAID0 array.
mdadm --create /dev/md4 --level=raid0 --raid-devices=3 /dev/md{1..3}
mdadm --examine --scan > /etc/mdadm/mdadm.conf //places all arrays in configuration file to start on boot
- Now we create the filesystem for the RAID0 array, and mount it.
mkfs.ext3 /dev/md4 //we'll use ext3 here because it's older and a bit more reliable.
- Time to mount the filesystem!
mkdir /backup mount /dev/md4 /backup
- and to check the filesystem...
df -h
- We'll want to automatically mount the backup array upon boot, so...
blkid /dev/md*
- add the UUID="???" to /etc/fstab in format:
UUID="***" /place ext* defaults 0 2
Committing Backups
BACKING UP ELEPHANT (to elephant)
-Create Script backup.sh in folder /user/local/bin/
#!/bin/bash # Directories to Backup (Seperate with spaces & don't put / at the start or end of the directory) directories="boot etc var home root usr/local/bin usr/local/sbin usr/local/awstats" # Backup Server Login Info. user="remote" #server="10.0.1.25" server="128.153.145.216" rdirectory="/backup/mirror" # MySQL Database Info. #mysqluser="root" #mysqlpassword="" DATE=`which date` HOSTNAME=`which hostname` TAR=`which tar` CUT=`which cut` #MYSQLDUMP=`which mysqldump` GZIP=`which gzip` SSH=`which ssh` CAT=`which cat` hostname=`${HOSTNAME} | ${CUT} -d"." -f1` date=`${DATE} +%F` nice -n 15 ${TAR} -cpzf - -C / ${directories} --exclude "var/run" --exclude "usr/local/awstats/httpd_logs/access_log" | ${SSH} -q ${user}@${server} "${CAT} > ${rdirectory}/${hostname}-backup-${date}.tar.gz" #${MYSQLDUMP} --all-databases --complete-insert --all -h localhost -u${mysqluser} -p${mysqlpassword} | ${GZIP} | ${SSH} -q ${user}@${server} "${CAT} > ${rdirectory}/${hostname}-databases-backup-${date}.sql.gz"
- edit controntab (crontab -e)
set to run script once per week ( 45 0 * * 1 /bin/nice -n 19 /usr/bin/ionice -c2 -n7 /usr/local/bin/backup.sh )
- add a folder for all servers intended for backup to /backup/
- create user remote
- change ownership of /backup/ to remote of user group remote (chown remote:remote /backup/ -R)
GENERAL BACKUPS
- change script to reflect server currently being edited for
'rdirectory="/backup/*server*"'
- create a key for SSH to elephant (if ~/.ssh/id_rsa.pub not exist)
ssh-keygen
- hand off ssh key to elephant
ssh-copy-id -i ~/.ssh/id_rsa.pub remote@elephant
- make directory on elephant for server
mkdir *folder*; chown remote:remote *folder* -R
- edit controntab (crontab -e)
set to run script once per week ( 45 0 * * 1 /bin/nice -n 19 /usr/bin/ionice -c2 -n7 /usr/local/bin/backup.sh ) | http://docs.cslabs.clarkson.edu/mediawiki/index.php?title=Elephant_Setup_Process&oldid=6545 | 2019-09-15T09:48:31 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.cslabs.clarkson.edu |
Setting Up Your First EDD Store. plugin you'll be taken to a Welcome screen, and there will be a new menu item in the left column of the WordPress menu called Downloads.
Now Easy Digital Downloads is installed and ready to be configured.
Settings and Configuration
All settings for Easy Digital Downloads are in the WordPress admin. In the left column look for Downloads, and under there, Settings.
General Settings
General Settings is the first page you'll see under Settings. The only thing you need to set here is Store Location, which is probably where YOU are.
Currency Settings
Currency is the kind of money you'd like your store to accept. On the General Settings page, at the top is a link to Currency Settings. If you're in the United States you don't need to change anything here. If you're outside the US, make the proper settings there.
Payment Gateways
These are the tools that let people pay money. EDD has built in PayPal Standard and Amazon. Both of these allow for paying with credit cards. Your customer wouldn't need a PayPal account, but they would need an Amazon account to use Amazon.
Simply choose which ones you want to use and choose a default. It's possible to have may different Payment Gateways installed, but be careful not to enable more than one or two, else the customer could become confused about which to use.
PayPal Standard
At the top of the Payment Gateways page is a link to the PayPal settings. The only thing required here is the email address associated with YOUR PayPal account.
Amazon
We have comprehensive documentation on setting up the Amazon Live Payments gateway.
Additional Payment Gateways
EDD has many more Payment Gateways available in the addons section of our web site.
EDD has the ability to automatically send a variety of emails each time a sale occurs, including alerts to the store owner and receipts to the customer.
Under the Emails tab, on the main screen for Emails nothing is actually required, but we recommend you upload a logo for your store.
Purchase Receipts
At the top of the Emails page is a link to Purchase Receipts. The only thing you really need to change here is the Name and Email that receipts will be sent from.
New Sale Notifications
This area only requires something from you if you wish to be emailed when a sale happens. If you do NOT wish to be emailed, skip this section.
If you DO wish to receive an email for every sale, simply put your email address in the Sale Notification Emails box at the bottom of this page.
Configuration Finished, maybe
Unless you need to configure taxes you're done at this point, your store should be fully functional.
We recommend you finish reading this document however, it's short and sweet.
Taxes
You need to know whether you're required to charge taxes. If you are not, skip this section. If you are required to charge taxes fill out the form on this page. link field to put in a price.
Next we need to upload our file (or add it from the media library if you've already uploaded). There's a box labelled Download Files. Leave the Product Type Options at Default for now, and click the Upload a File link. This will open the WordPress Media Manager. Upload your file or choose any existing file there.
>>IMAGE Post, look for the box that says Download Image, click the Set Download Image link and either upload to choose from your Media Library.
. | https://docs.easydigitaldownloads.com/article/1191-setting-up-your-first-edd-store | 2019-09-15T09:49:45 | CC-MAIN-2019-39 | 1568514571027.62 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c22a2fc6979142c5062305/file-PiDymyaDSh.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c22a63903360436857f145/file-K3Yx9CuKIN.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c22b64c6979142c5062309/file-zRcK0YVwvk.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c22e93903360436857f152/file-qlAi8JFriU.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56a52beb9033603f7da34214/file-ykW9uiqZrr.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56a52de1c697914361561eae/file-iVuOusHt8G.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56a52efbc697914361561eb1/file-cQCLJpRf1R.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56a52f78c697914361561eb2/file-Y03PVD6vtR.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c4c258c6979142c5062fc9/file-gWo1RUIeSA.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c4c33a903360436857fdcb/file-fU0QZ3GZwa.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c4c3ed903360436857fdd2/file-PM30HwZ0Jg.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5463d152e4b0f639418397ca/images/56c4c505c6979142c5062fdb/file-5eAnkFZZTv.png',
None], dtype=object) ] | docs.easydigitaldownloads.com |
You can back up the state of the native Marathon instance of your cluster, and later restore from that backup.
You may wish to back up your cluster before performing an upgrade or downgrade. You may need to restore your cluster to a known good state if something goes wrong during an upgrade or if you install a Universe package that does not perform as expected.
LimitationsLimitations
- As of DC/OS 1.10, backups include only the state of Marathon running on master nodes.
- You can perform backup and restore operations only from the DC/OS Enterprise backup and restore CLI and the backup and restore API. | https://docs-dev.mesosphere.com/1.12/administering-clusters/backup-and-restore/ | 2019-09-15T09:40:07 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs-dev.mesosphere.com |
Full Text Search
Couchbase Server Full Text Search (FTS) enables you to create, manage, and query full text indexes on JSON documents stored in a Couchbase bucket. You should use search instead of query when your application requires natural language processing when searching.
What is Full Text Search
Couchbase FTS is similar in purpose to other search software such as ElasticSearch or Solr.
Couchbase FTS is not intended as a replacement for third party search software if search is at the core of your application.
It is a simple and lightweight way to add search to your Couchbase data without deploying additional software and servers.
If you have many queries which look like
SELECT ... field1 LIKE %pattern% OR field2 LIKE %pattern, then full-text search may be right for you.
Executing Your First Search
Our first search query will be done against the
travel-sample bucket (install it, if you haven’t done so already).
Go to the Couchbase Web Console (for example,) and log in with your administrative username and password.
Select(at the top bars).
In the dropdown menu on the top left (
-- choose full text index or alias --) select
travel-search.
In the input box (on the right), type
cheeseand click Search.
You will see a list of document IDs that contain the given search term. You can click on any document ID to see the full document. You can also click on Advanced to see the raw query as it is sent over the REST API. If you click on the command-line curl example checkbox, you can simply copy/paste the output to your terminal and execute the search again.
Here’s a similar version of the search done using
curl on the command line:
curl -XPOST -H "Content-Type: application/json" \ \ -d '{"query":{"query":"cheese"}, "size": 1}' | json_pp
(Note that
json_pp is just a JSON formatter.
You can omit the pipe if you do not have it installed).
{ ... "hits" : [ { "index" : "travel-search_2c8e9e3d5a3638a8_b7ff6b68", "score" : 1.20476244729454, "locations" : { "name" : { "cheese" : [ { "end" : 12, "array_positions" : null, "start" : 6, "pos" : 2 } ] }, ... }, "id" : "landmark_27808" } ] }
(Most of the output has been redacted)
The same query using the Python SDK:
from couchbase.bucket import Bucket import couchbase.fulltext as FT cb = Bucket() results = cb.search('travel-search', FT.StringQuery('cheese'), limit=5) for hit in results: print('Found in document ID={}. Score={}'.format(hit['id'], hit['score']))
Found in document ID=landmark_27808. Score=1.20476244729 Found in document ID=landmark_8689. Score=0.961675281363 Found in document ID=landmark_1154. Score=0.883110932061 Found in document ID=landmark_1163. Score=0.846040674514 Found in document ID=landmark_15133. Score=0.81847489864
The result contains one or more hits, where each hit contains information about the location of the match within a document. This includes the relevance score as well as the location where the match was found.
Making Your Bucket Searchable
In order to execute a search query you must first define a search index. You can define a search index by using the Couchbase Web Console.
Go to the Couchbase Web Console (for example,) and log in with your administrative username and password.
Type the desired name of the search index in the Index Name field.
Select the bucket you would like to associate the search with in the Bucket field
The default settings are sufficient for most cases. You can edit the index later, specifying how certain fields may be analyzed and which fields to index.
Once you’ve created your index, your can query it by using the methods above, replacing
travel-search with the name you used to create the index.
To learn more about making your buckets searchable, see the Text Indexing section of the full text search documentation.
Query Types
The query executed in the previous section is called a string query.
This type of query searches for terms based on a special type of input string.
The query string
+description:cheese -country:france will match documents which contain cheese in their
description field, but are not located in France.
String queries are ideal for searchbox fields to allow users to provide more specialized query criteria.
You can read more about the Query String syntax in the full text search documentation.
There are many other query types available.
These query types differ primarily in how they interpret the search term: whether it is treated as a phrase, a word, an exact match, or a prefix: A Match query searches for the input text within documents and is the simplest of queries.
Match Phrase query searches for documents in which a specific phrase (i.e.
one or more terms, such as
"french cheese tasting") is present.
A Prefix query searches documents which contain terms beginning with the supplied prefix.
There are some other specialized queries, such as Wildcard and Regexp queries which allow you to use wildcards (
Couch?base') or regular expressions (
Couchbase (php|python) SDK).
Below are two code snippets showing how the query for
ch is treated differently when using a Prefix Query versus a Match Query.
results = cb.search('travel-search', FT.MatchQuery('ch'), limit=5) for r in results: print(' Result: ID', r['id']) for location, terms in r['locations'].items(): print(' ', location, terms.keys())
Result: ID airline_1442 iata dict_keys(['ch']) Result: ID landmark_35848 image_direct_url dict_keys(['ch'])
results = cb.search('travel-search', FT.PrefixQuery('ch'), limit=5) for r in results: print(' Result: ID', r['id']) for location, terms in r['locations'].items(): print(' ', location, terms.keys())
Result: ID hotel_15912 reviews.content dict_keys(['check', 'cheese', 'charge', 'checkout', 'chairs', 'chances', 'choice', 'checked', 'cheapcaribbean', 'cheeses', 'charged', "church's", 'cheaper', 'chicken', 'change']) reviews.author dict_keys(['christiansen']) Result: ID hotel_33886 reviews.content dict_keys(['check', 'chose', 'chips', 'choosing', 'chairs', 'channels', 'changed', 'choice', 'checked', 'chair', 'chocolate', 'chaise', 'checking', 'chicken', 'change', 'choose', 'charter', 'cheerful']) reviews.author dict_keys(['christy']) Result: ID hotel_16634 reviews.content dict_keys(['check', 'chocolate', 'chinese', 'chairs', 'children', 'chips', 'chilis', 'chilly', 'checked', 'choice', 'chicago', 'childrens', "church's", 'cheaper', 'chicken', 'choose', 'christina', 'choices']) Result: ID hotel_37318 reviews.content dict_keys(['check', 'choices', 'chairs', 'children', 'changed', 'choice', 'charge', 'challenging', 'chair', 'childrens', 'chicken', 'change', 'choose', 'chambermaid', 'chichen', 'child']) city dict_keys(['cheshire']) Result: ID hotel_21723 reviews.content dict_keys(['check', 'chairs', 'cheesy', 'changed', 'checked', 'chair', 'charging', 'chaotic', 'charge', 'chapel', 'change', 'choose', 'children', 'cheep', 'chef', 'child']) content dict_keys(['cheapie']) public_likes dict_keys(['christop'])
As can be seen in the above examples, the Term assumes the search input is an actual term to search for (
ch) and therefore rejects things such as
chose,
chairs and similar.
Compound Queries
You can compose queries made of other queries. You can use a Conjunction or Disjunction query which contains one or more queries that the document should match (a Disjunction query can be configured with the number of required subqueries that must be matched). You may also use a Boolean query that itself contains sub queries which should, must, and must not be matched.
Compound queries can be used to execute searches such as find any landmark containing "cheese" and also containing one of "wine" , "crackers" , or "old", but does not contain "lake" or "ocean":
results = cb.search('travel-search', FT.BooleanQuery( must=FT.TermQuery('cheese'), should=[FT.TermQuery('wine'), FT.TermQuery('crackers')], must_not=[FT.TermQuery('lake'), FT.TermQuery('ocean')]), limit=5) for r in results: print('ID', r['id']) for location, terms in r['locations'].items(): print('\t{}: {}'.format(location, terms.keys()))
ID landmark_25779 content: dict_keys(['cheese', 'crackers']) ID landmark_7063 content: dict_keys(['wine']) alt: dict_keys(['cheese']) name: dict_keys(['wine']) ID landmark_16693 content: dict_keys(['cheese', 'wine']) ID landmark_27793 content: dict_keys(['cheese', 'wine']) ID landmark_40690 content: dict_keys(['cheese', 'wine'])
When using compoound queries, you can modify any subquery’s
boost setting to increase its relevance and scoring over other subqueries, affecting the ordering.
Other Query Types
There are other query types you can use, such as Date Range and Numeric Range queries which match documents matching a certain time span or value range. There are also debugging queries such as Term and Phrase queries which perform exact queries (without any analysis).
For a quick overview of all the available query types, see the Types of Queries section of the full text search documentation.
Query Options
You can specify query options to modify how the search term is treated. This section will enumerate some common query options and how they affect query results.
field: This option restricts searches to a given field. By default searches will be executed against all fields.
fuzziness: Sets the leniency of the matching algorithm. A higher fuzziness value may result in less relevant matches being considered
analyzer: Sets the analyzer to be used for the search term.
limit: Limits the number of search results to be returned.
skip: Start returning after this many results. This may be used in conjunction with
limitto use pagination.
Search Results
After you have executed a search, you will be given a set of results, containing information about documents which match the query.
In the raw JSON payload, the server returns an object with a
hits property, which contains a search result.
The search result itself is a JSON object containing:
id: The document ID of the hit
score: How relevant the result is to the initial search query. Search results are always ordered by score, with highest-scored hits appearing first.
locations: A JSON object containing information about each match in the document. Its keys are document paths (in the N1QL sense) where matches may be found, and its values are arrays that contain the match location. The match location is a JSON object whose keys are the matched terms found, and whose values are locations:
start: The character offset at which the matched text begins
end: The character offset at which the matched text ends
pos: The word-position of the matched result. This indicates how far deep the match is, in respect to words. For example if the searched term was
schema, and the matched text was:
Ahout NoSQL schema organization, the
poswould be
3, or the third word in the field.
To learn more about the response format used by the FTS service, see the Response Object Schema section of the full text search documentation. Couchbase SDKs may abstract some of the fields or provide wrapper methods around them.
Aggregation and Statistics (Facets)
You may perform search result aggregation and statistics using facets. Facets allow you to specify aggregation parameters in your query. When the query results are received, aggregation results are returned alongside the actual query hits.
You can use a Term Facet to count the number of times a specific term appears in the results
results = cb.search('beer-search', FT.MatchQuery('hops'), limit=5, facets={'terms': FT.TermFacet('description', limit=5)}) for result in results: # handle results pass pprint(results.facets['terms'])
{'field': 'description', 'missing': 9, 'other': 30725, 'terms': [{'count': 782, 'term': 'hops'}, {'count': 432, 'term': 'beer'}, {'count': 365, 'term': 'ale'}, {'count': 327, 'term': 'malt'}, {'count': 130, 'term': 'hop'}], 'total': 32761}
You can likewise use a Date Range Facet to count the number of results by their age, and Numeric Range Facet to count results using an arbitrary numeric range.
To learn more about facets, see the Search Facets section of the full text search documentation.
Partial Search Results
The FTS service splits the indexing data between several pindexes. Because of that, you may encounter situations where only a subset of the pindexes could provide results (eg. if some pindex nodes are not online...).
What happens in this case is that FTS returns a list of partial results, and notifies you that one or several errors also happened. You can inspect the errors, which will each correspond to an failing pindex, via the SDK. Of course, the partial results (the result from healthy pindexes) are still available through the usual methods in the SDK result representation. | https://docs.couchbase.com/php-sdk/2.2/full-text-search-overview.html | 2019-09-15T09:36:47 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.couchbase.com |
How do I filter products by category, tag, price, or other fields?
Easy Digital Downloads does not by default include filtering options for categories, tags, prices, or other product fields. There is, however, a great plugin called FacetWP that allows you to easily add these kind of filters to the [downloads] shortcode.
See FacetWP's video walkthrough for an example of how this can be done. | https://docs.easydigitaldownloads.com/article/1410-how-do-i-filter-products-by-category-tag-price-or-other-fields | 2019-09-15T10:16:59 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.easydigitaldownloads.com |
Identity and Access Tool for Visual Studio 2012
This topic describes the new Identity and Access Tool for Visual Studio 11. You can download this tool from the following URL: or directly from within Visual Studio 11 by searching for "identity" directly in the Extensions Manager.
The Identity and Access Tool for Visual Studio 11 delivers a dramatically simplified development-time experience with the following highlights:
Using the new tool, you can develop using web applications project types and target IIS express.
Unlike with the blanket protection-only authentication, with the new tool, you can specify your local home realm discovery page/controller (or any other endpoint handling the authentication experience within your application) and WIF will configure all unauthenticated requests to go there, instead of redirecting them to the STS.
The tool includes a test Security Token Service (STS) which runs on your local machine when you launch a debug session. Therefore, you no longer need to create custom STS projects and tweak them in order to get the claims you need to test your applications. The claim types and values are fully customizable.
You can modify common settings directly via the tool’s UI, without the need to edit web.config.
You can establish federation with Active Directory Federation Services (AD FS) 2.0 (or other WS-Federation providers) in a single screen.
The tool leverages Windows Azure Access Control Service (ACS) capabilities with a simple list of checkboxes for all the identity providers that you want to use: Facebook, Google, Live ID, Yahoo!, any OpenID provider, and any WS-Federation provider. Select your identity providers, click OK, then F5, and both your application and ACS will be automatically configured and your test application will be ACS-aware.
See also
Feedback | https://docs.microsoft.com/en-us/dotnet/framework/security/identity-and-access-tool-for-vs | 2019-09-15T10:18:16 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.microsoft.com |
Contents IT Service Management Previous Topic Next Topic Create a receiving slip line Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Create a receiving slip line When assets arrive at a stockroom and you receive them, a receiving slip is created on the purchase order. You create a receiving slip line to identify the specific assets and quantities that were received. Before you beginRole required: procurement_admin or procurement_user About this task If the asset already exists, the asset record is updated when you save the receiving slip line. If the asset does not already exist, a new hardware or software asset record is created. The Model category and Configuration item fields are automatically filled in on the new asset record based on information in the request, purchase order, or receiving slip. If Asset Tag and Serial Number information exists, it is not overwritten. Procedure Navigate to Procurement > Receiving > Receiving Slips and open a receiving slip. In the Receiving Slip Lines related list, click New. The following fields are completed automatically. A Number is assigned. In Received, the current date and time are added. In Received by, the currently logged in user is added. In Purchase Order Line, click the reference lookup icon and select a purchase order line. The Purchase Order Line field is mandatory if the parent receiving slip has an associated purchase order. Only purchase order lines that are associated with the same purchase order linked to the parent receiving slip are available to select. In Quantity, enter the number of items received. For example, five items were ordered, but only two are being received. (Optional) Edit the Received by, Requested for, and Unit cost fields, as needed. Click Submit. After you create a receiving slip line, the Receiving stockroom field on the Receiving Slip record becomes read-only. Related tasksReceive an assetCreate a receiving slipRelated conceptsConsumable assets On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/kingston-it-service-management/page/product/procurement/task/t_CreateAReceivingSlipLine.html | 2019-09-15T10:35:51 | CC-MAIN-2019-39 | 1568514571027.62 | [] | docs.servicenow.com |
Server Side Configuration
CKFinder configuration in the ASP language is based on editing the config.asp file. To learn more go throughout the following sections:
- Quick Start
- Access Control
- Images
- Resource Types
- Security
JavaScript Configuration
CKFinder configuration in JavaScript is based on editing the
config.js file.
Unlike the ASP'; }; | http://docs.cksource.com/CKFinder_2.x/Developers_Guide/ASP/Configuration | 2017-06-22T18:18:29 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.cksource.com |
Setting up for a lesson
Step 1 - Install Microsoft Visual Studio
If you don't have a it on your system then grab a copy of Microsoft Visual Studio Community 2015. Its totally free and it has most of the features of the full version and supports everything we will need to extend Orchard.
Step 2 - Install Git Extensions
Every developer should have Git Extensions installed on their machine.
This is the easiest way to quickly clone the Orchard source code onto your dev machine. It's the easiest way to keep up to date when new code is released. As a bonus, if you haven't got to grips with Git and GitHub, this will gently introduce you to the whole process.
If you need detailed help installing Git Extensions and cloning the repository this process is described in the setting up a source enlistment tutorial in step-by-step detail.
Step 3 - Set up a braces management extension
This step is optional but recommended.
You will probably have noticed that Orchard code has the braces on the same line as the definition. It looks like this:
namespace Orchard.LearnOrchard.Example.Models { public class ExamplePart { } }
Instead of what you normally see with .NET code where the opening curly brace is on its own line, which looks like this:
namespace Orchard.LearnOrchard.Example.Models { public class ExamplePart { } }
The placement of the opening curly braces on the same line is a requirement listed in the code conventions document for Orchard CMS.
You can do it by editing the Visual Studio settings manually but this will apply it to all of your solutions whether they are Orchard-based or not.
A better solution is to let a Visual Studio extension manage this for you. Orchard supports two options out of the box:
ReSharper. (Paid) This is a powerful extension with many more features than simple brace management. It is recommended that you check this extension out if you haven't used it.
Rebracer. (Free) Orchard also supports the free Rebracer extension. This extension simply manages your brace configurations for you on a per-solution basis.
Install one of these two extensions.
Step 4 - Clone the repository to your machine
You should always work on a fresh copy of Orchard when you're following a tutorial, testing out new 3rd party modules and themes from the Gallery, or working on your own modules to keep things clean.
Database tables are going to be modified, you will make changes in the admin dashboard, you will make mistakes and change things a second time. When you install new modules or themes they can inject their own data into your database and adjust built in content types. Even if you deactivate them again these changes can get left behind.
To stop this detritus getting into your main site you should always use a fresh copy to test things out in.
With Git Extensions and Orchard's support for SqlCE databases you can have a fresh copy of Orchard up and running in just a minute or two.. Unless your lesson says differently, select
masterfor the latest stable branch.
The rest of the settings can be left as-is. Click
Clone.
Git Extensions will now pull down the files from the remote repository hosted on GitHub. This process will take a minute or two while the files are downloaded to your hard drive.
Tip: If you find yourself following this process often you can greatly speed this up by cloning from a local copy.
To do this create a fresh clone on your hard drive and keep that as your reference copy. Then, instead of using a URL in the
Repository to clone:field you can point at the location of your reference copy stored on your local hard drive.
The whole clone process should then take just a couple of seconds.
Step 5 - Complete the initial site setup process
Now you just need to follow these last few steps to create a default database and set up the admin user:
Open Visual Studio
Click
File,
Open,
Project/Solution...
Navigate to the folder you cloned the repo into
Open the main solution file located at
.\src\Orchard.sln
Ctrl-F5to start the project without debugging (it loads quicker)
You will now be presented with the
Select these options:
- Site name: Enter any name you like that's related to the lesson you're following
- User name: admin
- Password: password
- Data store: SQL Server Compact
- Recipe: Default
Click
Finish setup
The site will now do its initial prep and present you with the
Welcome to Orchard! start screen.
Preparation completed
You can now return to the lesson that brought you here. | http://docs.orchardproject.net/en/latest/Documentation/Setting-up-for-a-lesson/ | 2017-06-22T18:31:57 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.orchardproject.net |
Screeps has a handy embedded code editor for writing game scripts. However, in some cases (for example, you want to use a language other than JavaScript or integrate with your IDE) you will have to commit game scripts to your Screeps account from outside.
If you signed up using GitHub, you have to set your Screeps password in the account settings in order to use external synchronization.
Using Grunt task-screeps
Configure your Gruntfile.js:
module.exports = function(grunt) { grunt.loadNpmTasks('grunt-screeps'); grunt.initConfig({ screeps: { options: { email: '<your e-mail>', password: '<your password>', branch: 'default', ptr: false }, dist: { src: ['dist/*.js'] } } }); }
Now you can run this command to commit your code from
dist folder to your Screeps account:
grunt screeps
Using direct API access
Screeps Web API has an endpoint for working with scripts. The two supported methods are
GET for writing and retrieving respectively. Both methods accept Basic access authentication. Endpoints get and return a JSON structure containing modules object with module names as keys and their content as values.
An example of committing code using Node.js:
var https = require('https'); var email = '<your e-mail>', password = '<your password>', data = { branch: 'default', modules: { main: 'require("hello");', hello: 'console.log("Hello World!");' } }; var req = https.request({ hostname: 'screeps.com', port: 443, path: '/api/user/code', method: 'POST', auth: email + ':' + password, headers: { 'Content-Type': 'application/json; charset=utf-8' } }); req.write(JSON.stringify(data)); req.end();</code>
Request:
POST /api/user/code HTTP/1.1 Content-Type: application/json; charset=utf-8 Host: screeps.com:443 Authorization: Basic PHlvdXIgZS1tYWlsPjo8eW91ciBwYXNzd29yZD4= Connection: close Transfer-Encoding: chunked {"branch":"default","modules":{"main":"require(\"hello\");","hello":"console.log(\"Hello World!\");"}}
Response:
X-Powered-By: Express Content-Type: application/json; charset=utf-8 Content-Length: 8 Date: Mon, 02 Feb 2015 18:46:11 GMT Connection: close {"ok":1} | http://docs.screeps.com/commit.html | 2017-06-22T18:19:17 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.screeps.com |
Map Display
The Experience add-on for the WP Store Locator Plus allows you to select when and what you want the Map to display.
With the Experience add-on installed and activated , a pull down menu under the section “At start-up” allows the below options to be set:
- Show Map (default)
- Hide map until search
- Image until search
Appearance
Additional options for how your Map layout and appearance looks is also available with the Experience Add-on | https://docs.storelocatorplus.com/blog/tag/experience-add-on/ | 2017-06-22T18:39:53 | CC-MAIN-2017-26 | 1498128319688.9 | [] | docs.storelocatorplus.com |
if) is one of the most fundamental items in programming logic. Parenthesis on the if are optional. Due to this, braces are required around the code block. The rationale for this was that omitting braces can result in difficult to debug, critical bugs, as Apple has shown us[citation-needed].
else ifblocks can be supplied: | https://docs.cheddar.vihan.org/docs/control-flow/conditional | 2022-05-16T12:40:57 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.cheddar.vihan.org |
Integration Checklist
Integrating the fiskaltrust.Middleware involves several steps, from calling the API the right way to registering in the production fiskaltrust.Portal and connecting to Dealers. To simplify this process for POS Creators, we've composed a concise checklist that can be used to validate if all required steps were succesfully performed. | https://docs.fiskaltrust.cloud/docs/poscreators/get-started/checklist | 2022-05-16T12:35:18 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.fiskaltrust.cloud |
-04
As we send the kids back to school, we start the slow but steady unveil of what our development team has been working on throughout the summer. Check out a list of the major updates below, and let us know if you have any questions...
Alert settings rewrite
If you follow our
, you know that we're a big fan of the latest web technologies and frameworks. As with our test settings page, which had a rewrite earlier this year, we've updated our alert settings page to improve the user experience, speed, and allow our team to leverage
AngularJS
.
With the new Alert Settings page, users will experience a familiar browsing experience to that in place for test settings: simply select the layer you're interested in, the type of alert, and create a set of alert conditions. Configure notifications rules (number of consecutive periods where alerting state is true, custom messages to send) in the notifications tab
We've also added one alert criterion to the alert rule options: the ability to specify a specific HTTP response code. To review our current list of supported alert criteria, check out the article listed
here
.
For detailed instructions on creating and editing Alert Rules, please review this knowledge base
article
.
Disabling Enterprise Agents
As of today, users assigned to the Account Administrator role can now disable Enterprise Agents. This will allow organizations who have agents deployed in various locations to enable and disable those agents when they need to manage the measurements.
Enterprise Agents will:
Not attempt to run assigned tests
Not factor into alerting (because it's not running tests)
Not alert when it goes offline or has time synchronization issues
New group is available for disabled enterprise agents
Only count against billing when enabled
This is ideal in circumstances where an organization needs to deploy agents to specific sites, but doesn't require the use of those agents on an ongoing basis. This will facilitate the deploy > use > disable > enable > use > disable workflow, allowing faster activation during future troubleshooting efforts, without complicating enterprise agent status widgets. Disabled agents are shown in the Agent Settings page, with a
icon next to the agent name.
To disable an Enterprise Agent, uncheck the "Agent Enabled" checkbox in the agent settings page.
Note: an Enterprise Agent must be online in order to disable it, since the agent acknowledges an instruction to stop testing until reactivated.
We've added a built-in group for disabled agents to allow fast identification of those agents. Administrators can control the enablement of their agents via the Agent Settings page. The Disabled built-in group will only be shown in the case where the account has access to at least one disabled Agent.
BGP Private Peering
As described in our Inside-Out BGP Visibilty article, the BGP Private peering capability has been moved from Beta into Production. This means that all customers who maintain an edge router and want visibility from their own organization into Autonomous System availability are now able to take advantage of this feature. Users with Organization Admin privileges can find this menu item in the Settings >
BGP Private Peers
menu item.
this article
for more details on using the feature.
Bug Extermination
We've fixed a few issues that have cropped up over the last couple of weeks.
Regular users are now able to configure a dashboard as their default. Prior to this fix, users in the Regular User role were unable to save a default due to a permissions error.
We've corrected an issue where the links to view test results from the test settings page could take users to the correct view, but select the wrong test.
The ThousandEyes Virtual Appliance package was previously resetting the web console to the default password. After this most recent update, web console passwords will persist between package upgrades.
The End-to-End Metrics view will now sort by the loss metric by default, when jumping to this view from another Layer.
If you have any questions about this, or any release, please feel free to comment at the bottom of this thread, or to contact our Customer Success team by emailing
We're always happy to answer any questions you may have.
Release Notes: 2014-09-17
Release Notes: 2014-08-20
Last modified
4mo ago
Copy link
Contents
Alert settings rewrite
Disabling Enterprise Agents
BGP Private Peering
Bug Extermination | https://docs.thousandeyes.com/archived-release-notes/2014/2014-09-04-release-notes | 2022-05-16T12:19:48 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.thousandeyes.com |
Release 1.7 (dizzy)¶
This section provides migration information for moving to the Yocto Project 1.7 Release (codename “dizzy”) from the prior release..
Autotools Class Changes¶
The following autotools class changes occurred:
A separate build directory is now used by default: The autotools class. | https://docs.yoctoproject.org/migration-guides/migration-1.7.html | 2022-05-16T12:30:41 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.yoctoproject.org |
DeleteVirtualInterface
Deletes a virtual interface.
Request Syntax
{ "virtualInterfaceId": "
string" }
Request Parameters
For information about the parameters that are common to all actions, see Common Parameters.
The request accepts the following data in JSON format.
- virtualInterfaceId
The ID of the virtual interface.
Type: String
Required: Yes
Response Syntax
{ "virtualInterfaceState": "string" }
Response Elements
If the action is successful, the service sends back an HTTP 200 response.
The following data is returned in JSON format by the service.
-: | https://docs.aws.amazon.com/directconnect/latest/APIReference/API_DeleteVirtualInterface.html | 2022-05-16T12:47:08 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.aws.amazon.com |
Curve Tangent Node
The Curve Tangent node outputs the direction that a curve points in at each control point, depending on the direction of the curve (which can be controlled with the Reverse Curve Node). The output values are normalized vectors. example, a Bézier spline might have 48 evaluated points, but only four control points, if its resolution is 12.
- Tangent
The direction of the curve at every control point. | https://docs.blender.org/manual/ko/dev/modeling/geometry_nodes/curve/curve_tangent.html | 2022-05-16T11:46:26 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.blender.org |
Method
GtkWidgetset_clip
Description [src]
Sets the widget’s clip. This must not be used directly,
but from within a widget’s size_allocate method.
It must be called after
gtk_widget_set_allocation() (or after chaining up
to the parent class), because that function resets the clip.
The clip set should be the area that
widget draws on. If
widget is a
GtkContainer, the area must contain all children’s clips.
If this function is not called by
widget during a ::size-allocate handler,
the clip will be set to
widget‘s allocation. | https://docs.gtk.org/gtk3/method.Widget.set_clip.html | 2022-05-16T11:30:09 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.gtk.org |
Assemblies main difference between participatory processes and assemblies is that assemblies don’t have phases, meaning that they don’t have timelines.
You can see a real world usage of assemblies in Decidim Barcelona, where you can see the different Participation Organs, that are the regular spaces where the City Council meets with citizens and organizations to get feedback.
In this section, we’ll explain how we can configure an Assembly in Decidim.
List
To configure assemblies on the Decidim platform, click on btn:[Assemblies] in the admin sidebar menu. A list will appear with the existing assemblies if there are any:
You can filter by the ones that are:
Published / Unpublished
Public / Private
You can also search by title and control how many elements are in the list.
You have 4 possible actions in this list after an Assembly is created:
Export: send by email the configuration for a given assembly. Can be imported in other Decidim installation.
Duplicate: to duplicate this assembly.
Configure: to edit the metadata and configuration for a assembly.
Assemblies: to manage all the children assemblies for a assembly.
Preview: how it will look once published.
New assembly form
After you’ve initially created your assembly you have a submenu where you need to keep configuring more information about your assembly.
Here you can keep configuring your process:
Info: the same form that we explained in this page.
-
-
-
-
-
-
-
Assemblies types
For clasyfing the assemblies in different kinds, you can define Assembly types. These types can be filtered in the public assemblies page. | https://docs.decidim.org/en/admin/spaces/assemblies/ | 2022-05-16T12:09:01 | CC-MAIN-2022-21 | 1652662510117.12 | [] | docs.decidim.org |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.