knife role¶
The knife role subcommand is used to manage the roles that are associated with one or more nodes on a Chef server.
Note
To add a role to a node and then build out the run-list for that node, use the knife node subcommand and its run_list add argument.
Note
Review the list of common options available to this (and all) knife subcommands and plugins.
bulk delete¶
Use the bulk delete argument to delete one or more roles that match a pattern defined by a regular expression. The regular expression must be within quotes and not be surrounded by forward slashes (/).
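For example, a command of the following form — the pattern shown is only illustrative — would delete every role whose name begins with dev-:
$ knife role bulk delete "^dev-"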
create¶
Use the create argument to add a role to the Chef server. Role data is saved as JSON on the Chef server.
Options¶
This argument has the following options:
- --description DESCRIPTION
- The description of the role. This value populates the description field for the role on the Chef server.
Examples¶
The following examples show how to use this knife subcommand:
Create a role
To add a role named role1, enter:
$ knife role create role1
In the $EDITOR enter the role data in JSON:
{ "name": "role1", "default_attributes": { }, "json_class": "Chef::Role", "run_list": ['recipe[cookbook_name::recipe_name], role[role_name]' ], "description": "", "chef_type": "role", "override_attributes": { } }
When finished, save it.
delete¶
Use the delete argument to delete a role from the Chef server.
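For example, to delete a role named role1 (the role name here is just a placeholder), enter:
$ knife role delete role1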
edit¶
Use the edit argument to edit role details on the Chef server.
Examples¶
The following examples show how to use this knife subcommand:
Edit a role
To edit the data for a role named role1, enter:
$ knife role edit role1
Update the role data in JSON:
{ "name": "role1", "description": "This is the description for the role1 role.", "json_class": "Chef::Role", "default_attributes": { }, "override_attributes": { }, "chef_type": "role", "run_list": ["recipe[cookbook_name::recipe_name]", "role[role_name]" ], "env_run_lists": { }, }
When finished, save it.
from file¶
Use the from file argument to create a role using existing JSON data as a template.
Options¶
This command does not have any specific options.
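For example, assuming the role is described in a local JSON file such as roles/role1.json (the path is hypothetical), enter:
$ knife role from file roles/role1.json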
list¶
Use the list argument to view a list of roles that are currently available on the Chef server.
show¶
Use the show argument to view the details of a role.
Options¶
This argument has the following options:
- -a ATTR, --attribute ATTR
- The attribute (or attributes) to show.
Examples¶
The following examples show how to use this knife subcommand:
Show as JSON data
To view information in JSON format, use the -F common option as part of the command like this:
$ knife role show devops -F json
Other formats available include text, yaml, and pp.
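Show a single attribute
To view only one attribute of a role — the role name and attribute below are placeholders — use the -a option:
$ knife role show role1 -a run_list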
Show as raw JSON data
To view role information in raw JSON, use the -l or --long option:
knife role show -l -F json <role_name>
and/or:
knife role show -l --format=json <role_name>
For an uploaded asset to qualify as a spherical panorama image that you intend to use with the Panoramic Image viewer, the asset must have either one or both of the following:
- An aspect ratio of 2.
- .
Previewing Panoramic Images
See Previewing Assets .
Publishing Panoramic Images
See Publishing Assets .
Conditional Access: Require MFA for Azure management
Organizations use a variety of Azure services and manage them from Azure Resource Manager based tools like:
- Azure portal
- Azure PowerShell
- Azure CLI
These tools can provide highly privileged access to resources that can alter subscription-wide configurations, service settings, and subscription billing. To protect these privileged resources, Microsoft recommends requiring multi-factor authentication for any user accessing these resources.
User exclusions
Conditional Access policies are powerful tools; we recommend excluding the following accounts from your policy:
- Emergency access or break-glass accounts to prevent tenant-wide account lockout. In the unlikely scenario all administrators are locked out of your tenant, your emergency-access administrative account can be used to log into the tenant and take steps to recover access.
- More information can be found in the article, Manage emergency access accounts in Azure AD.
- Service accounts and service principals, such as the Azure AD Connect Sync Account. Service accounts are non-interactive accounts that are not tied to any particular user. They are normally used by back-end services allowing programmatic access to applications, but are also used to sign in to systems for administrative purposes. Service accounts like these should be excluded since MFA can't be completed programmatically. Calls made by service principals are not blocked by Conditional Access.
- If your organization has these accounts in use in scripts or code, consider replacing them with managed identities. As a temporary workaround, you can exclude these specific accounts from the baseline policy.
Create a Conditional Access policy
The following steps will help create a Conditional Access policy to require those with access to the Microsoft Azure Management app to perform multi-factor authentication.
- Under Assignments, select Users and groups.
- Under Include, select All users.
- Under Exclude, select Users and groups and choose your organization's emergency access or break-glass accounts.
- Select Done.
- Under Cloud apps or actions > Include, select Select apps, choose Microsoft Azure Management, and select Select then Done.
- Under Conditions > Client apps (Preview), under Select the client apps this policy will apply to, leave all defaults selected and select Done.
- Under Access controls > Grant, select Grant access, Require multi-factor authentication, and select Select.
- Confirm your settings and set Enable policy to On.
- Select Create to create and enable your policy.
Next steps
Conditional Access common policies
Determine impact using Conditional Access report-only mode
Simulate sign in behavior using the Conditional Access What If tool
This section provides some workflow information for how to use API access tokens as part of your API projects in Trifacta® Wrangler Enterprise.
This feature must be enabled in your instance of the platform. For more information, see Enable API Access Tokens.
Generate New Token
API access tokens must be created.
NOTE: The first time that you request a new API token, you must submit a separate form of authentication to the endpoint. To generate new access tokens after you have created one, you can use a valid access token if you have one.
Via API
For more information, see the API documentation.
Tip: If you wish to manage your token via the APIs, you should copy the Token ID value, too. The Token ID can always be retrieved from the Trifacta application.
NOTE: When using the APIs in SSO environments, API access tokens work seamlessly with platform-native versions of SAML and LDAP-AD. They do not work with the reverse proxy SSO methods.
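Once generated, the token is typically passed as a bearer token in the Authorization header of each API request. In the following sketch, the host, port, and endpoint path are placeholders:
curl -X GET \
  -H "Authorization: Bearer <your-access-token>" \
  -H "Content-Type: application/json" \
  https://<platform-host>:<port>/v4/apiAccessTokens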
List Tokens
NOTE: For security reasons, you cannot acquire the actual token through any of these means.
Tip: You can see all of your current and expired tokens through the Trifacta application. See Access Tokens Page.
Delete Token
- Via API: Acquire the tokenId value for the token and use the delete endpoint.
- Via UI: In the Access Tokens page, select Delete Token... from the context menu for the token listing. See Access Tokens Page.
Certificates and Trust Management
The components of Charmed Kubernetes need to be able to communicate securely over the network. This is accomplished using TLS and public-key encryption with a chain of trust up to a shared root Certificate Authority (CA). However, when the cluster is being brought up or a new unit is being added, the chain of trust and certificates required must be bootstrapped into the machines somehow.
Juju relations
All communication between Juju units and the Juju controller happens over TLS-encrypted websockets. The certificate for the TLS connection to the controller is added as explicitly trusted to each machine as part of the bootstrap process using a combination of cloud-init and SSH.
With this secure channel, Juju charms can communicate with each other using relation data. The data published by one unit is sent to the controller, which then makes it available for all other units on the same relation. The data for each relation is scoped by ID and is only visible to units participating in the specific relation on which it is set. However, it is worth noting that relation data is stored on the controller machine in MongoDB, so for truly sensitive information, proper secret storage engines, such as Vault, and encryption-at-rest should be used.
Managing certificates
Unfortunately, the Juju controller does not provide any mechanisms for generating or distributing additional certificates to be used by the applications on the machines, so they must be managed by the charms via the secure relation data channel. This is done using the tls-certificates interface protocol and a relation to an application providing a Certificate Authority. (This CA could be either a root CA, or an intermediary CA authorised by some other root CA to issue certificates.)
When the relation is established, the root CA’s certificate is sent via the relation and installed as trusted. Then, certificate requests can be issued over the relation and new certificates created by the CA and returned over the relation.
Because all units with a relation to the CA have a chain of trust up to it (or its root CA), they will automatically trust a certificate generated by the CA without requiring anything to be communicated from the unit which holds the certificate. On the other hand, for the certificates to be trusted externally (such as by clients) or by applications without a relation to the CA, the CA will have to be configured as an intermediary CA with a chain of trust up to a globally trusted CA (such as Comodo or DigiCert).
The certificate information will also be included in the Kubernetes
config
file that Charmed Kubernetes generates to be used by kubelet, so that communications
between the local machine and Kubernetes will be secured.
Server certificates
Each service that will be connected to will need a server certificate identifying and validating it to any clients that wish to connect.
The primary address at which the service can be reached will be its common name; ideally, this will be a fully-qualified domain name (FQDN) which will not change, but for internal service communication, it is often just the ingress address for the unit. Any additional names or addresses by which the service can be reached will be its subject alternative names (SANs).
Charmed Kubernetes will manage the server certificates automatically, including
generating the certificate with the proper CN and SANs. However, the
kubernetes-master charm also supports an
extra_sans option which
can be used to provide additional names to be added to the SANs list.
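For example, an additional DNS name could be added with a command along the following lines (the hostname is a placeholder):
juju config kubernetes-master extra_sans="kubernetes.example.com"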
Client certificates
In order to provide for two-way security, some services require that clients identify themselves via a client certificate. These are more or less the same as server certificates, but are presented by a client to the service they are connecting to so that the service can validate that client’s identity. Client certificates can only be used to identify a client and will be rejected by clients if presented by a service they are connecting to.
Certificate Authorities for Charmed Kubernetes
Charmed Kubernetes can use a CA provided by any charm which provides a tls-certificates endpoint. The two current recommendations are EasyRSA and Vault.
EasyRSA
By default, the Charmed Kubernetes bundle includes the EasyRSA charm. This is a relatively simple charm which uses OpenVPN’s easy-rsa to provide a CA and sign certificates. This is lightweight and works out of the box without any additional configuration, but it cannot act as an intermediary CA and does not support HA.
Vault
For production systems, it is recommended to replace EasyRSA with the Vault charm. This uses HashiCorp’s Vault to provide either a root or intermediate CA. It can also be deployed HA, as well as provide a secure secrets store which can be used to enable encryption-at-rest for Charmed Kubernetes. However, it requires a database to store its data, and (depending on configuration) some manual steps will be required after deployment as well as after any reboot to unseal Vault so that the secrets, such as certificates and signing keys, can be accessed.
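A hedged sketch of the Juju commands involved is shown below. The relation endpoint names are typical for these charms but should be verified against the charm documentation for your revisions, and Vault additionally needs a database backend and must be unsealed before it can issue certificates:
juju deploy vault
juju remove-relation easyrsa:client kubernetes-master:certificates
juju add-relation vault:certificates kubernetes-master:certificates
juju add-relation vault:certificates kubernetes-worker:certificates
juju add-relation vault:certificates etcd:certificates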
See the operations documentation for details on how to deploy Vault as a CA.
In this article we will give you an overview of how to set up email destinations, using your own mail server.
Using your own mail server to submit data
Using your own mail server is essential to clients that want their customer to see that submissions are being sent by them. So instead of your client seeing the standard [email protected], they will see the email address you choose to use in this instance.
How to set up your own mail server in a Device Magic email Destination
You will follow the normal step by step for setting up your email destination, the only difference is that under your email options, you will now tick the box that reads "Custom Email Server: Send through my own mail server"
By ticking the box, you will see a new set of fields populate right below:
Fill in the details as provided by your IT department or ISP. Once this is done you can save or update your destination.
Note: Always try to type the details instead of using copy and paste, as this can sometimes cause errors and undelivered submissions. These submissions will stay undelivered until the errors are rectified.
Every time you create a new form with the same intent for the email server settings, remember to add them. The settings are per destination and not stored in a central settings environment to be used across all forms.
This concludes our overview of using your own mail server. If you have any questions or comments feel free to send us a message at [email protected].
This tutorial takes you through the steps required to create, implement, test and deploy a SwitchYard application using the Eclipse tooling. The application created provides a greeter service, implemented as a Java bean and accessible via a SOAP HTTP gateway. This tutorial illustrates how to perform the following tasks:
Create a SwitchYard project
Create a Java service interface
Create a bean component implementation
Create and execute a unit test for the service
Create a WSDL service interface
Add a SOAP gateway for accessing the service
Create a transformer
Create and execute a unit test for the transformer
Create and execute a unit test for the SOAP gateway
Deploy the application to a server
Test the deployed application
Refer to Application Basics in the User Guide for an overview of the basic building blocks of a SwitchYard application.
Before beginning you will need the SwitchYard tooling and a SwitchYard runtime.
Install the SwitchYard tooling for Eclipse. Instructions can be found here.
Install the SwitchYard runtime. Instructions can be found here.
This step describes how to create a server definition for your SwitchYard runtime. You will deploy your project to this server in a later step. The server references a runtime definition which will be used when creating your SwitchYard project.
Open the Servers view in Eclipse.
Create a new server (right-click, New→Server).
Select JBoss Enterprise Application Platform 6.1 from the JBoss Enterprise Middleware category. In this example, the Name field has been changed from its default value.
Press Next.
Specify the details for your server's runtime. Specify the path to your SwitchYard runtime installation in the Home Directory field.
Press Finish.
This step describes how to create a project for the application.
From the new menu, select SwitchYard Project. (See SwitchYard Projects for more details.)
The first page in the wizard is used for specifying the project name and location.
The next page is used for specifying various project details, including which SwitchYard components are required by the project. In this example, the default package has been modified and the Bean and SOAP components have been selected.
Notice the Target Runtime field references the runtime associated with the server created in the previous step. Also notice the Library Version corresponds with the SwitchYard version in the runtime.
Upon completion, a new project is created and the SwitchYard configuration will be open in the editor.
The Palette view may not be immediately visible. If it is missing from your workbench, it can be displayed using the Window→Show View→Other... menu
This step describes how to create a bean component using the SwitchYard editor. Upon completion of this step, you will have a component, providing a service (described by a Java interface), implemented by a bean. The following resources will be created during this step:
ExampleService.java describing the Java interface for the component's service.
ExampleServiceBean.java providing the implementation for the component.
component, component service and implementation elements in the SwitchYard configuration (i.e. the switchyard.xml file)
Refer to the Component, Implementation, and Component Service sections of Application Basics in the User Guide for general details.
Refer to the Bean topic in the Developer Guide for details specific to bean components.
In the SwitchYard editor, drag the Bean tool from the Components section of the palette onto the canvas. This opens a wizard prompting for details about the component.
Create a new interface by clicking the Interface link. This opens the standard Eclipse New Interface wizard. Specify a name for the interface (e.g. ExampleService), any other details and press Finish.
The service name field is initialized based on the type name used for the interface (e.g. ExampleService) and the class name field is initialized based on the service name (e.g. ExampleServiceBean). Look over the fields and press Finish.
A new component shape should be added to the canvas.
Save the changes to the switchyard.xml file.
If the project contains a number of XML validation errors, make sure Honour all XML schema locations is disabled in workbench preferences (under XML→XML Files→Validation).
This step describes how to flesh out the interface for the service. The Java interface created in the previous step contained no methods, so you will add them here. You will have a complete interface describing the service at the end of this step.
In the SwitchYard editor, double-click the service icon on the component (the green arrow on the left side of the component). This opens the file describing the interface (ExampleService.java).
Add a method to the interface, for example:
package com.example.switchyard.example; public interface ExampleService { public String sayHello(String name); }
In the previous step, you created the interface while we were creating the component. You could have created the interface first, using the Browse... button in the New Bean Service wizard to select it. The resulting bean class would have been created with stubs for each method on the interface.
This step implements the service logic in the bean class. There are compiler errors in the bean class because it does not implement the method you added to the interface in the previous step. The errors are resolved by creating the missing methods in the bean class. The project should have no errors at the end of this step.
In the SwitchYard editor, double-click the component (ExampleServiceBean) in the diagram. This opens the file used to implement the component (ExampleServiceBean.java).
The file contains errors, since it does not implement required methods.
Add an implementation for the missing method. (Using quick-fix, click the error icon and select, Add unimplemented methods.)
Update the implementation of the sayHello() method so it matches the following:
package com.example.switchyard.example; import org.switchyard.component.bean.Service; @Service(ExampleService.class) public class ExampleServiceBean implements ExampleService { @Override public String sayHello(String name) { return "Hello, " + name; } }
This step describes how to create a unit test to verify the implementation of the service. At the end of this step, you will have a unit test which sends a message to the service and verifies its output. You will also execute the test using the native Eclipse JUnit support. The following resources will be created during this step:
ExampleServiceTest.java
Refer to Unit Testing in the Tooling section for more details on creating unit tests with the tooling.
Refer to Testing in the Developer Guide for more details on application testing.
In the SwitchYard editor, right-click the service icon on the component (the green arrow on the left side of the component) and select New Service Test Class. This will open a wizard, which should be defaulted appropriately. Look the fields over and press Finish.
You can also access New Service Test Class from the context pad that appears when hovering over the component service and from the File→New menu.
Modify the testSayHello() method to properly exercise the service. Initialize the message variable to "Bob" and update the Assert statement to verify the return value from the service is, "Hello, Bob"
package com.example.switchyard.example; import org.junit.Assert; import org.junit.Test; import org.junit.runner.RunWith; import org.switchyard.component.test.mixins.cdi.CDIMixIn; import org.switchyard.test.Invoker; import org.switchyard.test.ServiceOperation; import org.switchyard.test.SwitchYardRunner; import org.switchyard.test.SwitchYardTestCaseConfig; import org.switchyard.test.SwitchYardTestKit; @RunWith(SwitchYardRunner.class) @SwitchYardTestCaseConfig(config = SwitchYardTestCaseConfig.SWITCHYARD_XML, mixins = { CDIMixIn.class }) public class ExampleServiceTest { private SwitchYardTestKit testKit; private CDIMixIn cdiMixIn; @ServiceOperation("ExampleService") private Invoker service; @Test public void testSayHello() throws Exception { // TODO Auto-generated method stub // initialize your test message String message = "Bob"; String result = service.operation("sayHello").sendInOut(message) .getContent(String.class); // validate the results Assert.assertTrue("Hello, Bob".equals(result)); } }
Run the tests by selecting Run As→JUnit Test.
This step describes how to promote a component service to a composite service so the service can be accessed by external clients. The promoted composite service will be described using a WSDL interface. You will create the WSDL file using the Java-to-WSDL capabilities in the tooling. Because the composite service exposes a different interface than the component service, you will need to create a couple of transformers to convert the between the different types (i.e. between the Java and WSDL types). The following resources will be created during this step:
ExampleService.wsdl describing the composite service interface
ExampleServiceTransformers.java providing transformation logic for converting between ExampleService.java and ExampleService.wsdl
composite service element in the SwitchYard configuration
Refer to the Composite Service section of Application Basics and Transformation in the User Guide for general details.
Refer to Java Transformer in the Developer Guide for specific details.
In the SwitchYard editor, right-click the service icon on the component and select Promote Component Service. This opens a wizard allowing you to specify the interface that will be exposed by the promoted service. By default, the interface specified matches the interface used to describe the component service.
Change the interface type from Java to WSDL. This blanks out the interface and name fields.
Press the Interface link to create a new WSDL file. This opens the Java2WSDL wizard.
The first page of the Java2WSDL wizard is used for specifying the name and location for the WSDL file. The default values should be appropriate. Look them over and press Next.
The next page is used for specifying details about the generated WSDL. The default values should be appropriate Look the values over and press Finish.
The WSDL file is created and you are back on the Promote Component Service wizard. The fields have been updated with details from the new WSDL file. Notice that the Create required transformers button is checked and press Next.
The next page is the New Transformers page allowing you to create the required transformers. Notice the two transformer type pairs checked in the table correspond to the input and output types declared on the two interfaces. Ensure both pairs are checked and Java Transformer is selected as the Transformer Type and press Next.
The next page collects information for a new Java class that will be used to implement the transformers. Specify ExampleServiceTransformers for the name and leave org.w3c.dom.Element selected as the Java type to be used to represent XML/ESB types in the transformer class. Press Finish.
The For XML/ESB types use field allows you to specify the Java type used to pass non-Java types into the transformer. The list contains commonly used types, but you can specify whatever type you like. The caveat is that SwitchYard must be able to convert the raw message content to the specified type.
A new composite service promoting the component service should have been added to the SwitchYard configuration. Save the editor.
Select the main shape in the SwitchYard editor and review the contents of the Transforms tab in the Properties view. The newly added transformers should be listed.
This step describes how to add a unit test to verify the transformer logic.
Open ExampleServiceTest.java and add the following method, which will test the transformers:
@Test
public void testSayHelloTransform() throws Exception {
    final QName inputType = QName
            .valueOf("{urn:com.example.switchyard:switchyard-example:1.0}sayHello");
    final QName outputType = QName
            .valueOf("{urn:com.example.switchyard:switchyard-example:1.0}sayHelloResponse");
    // initialize your test message
    Object message = "<sayHello xmlns=\"urn:com.example.switchyard:switchyard-example:1.0\">"
            + "<string>Bob</string></sayHello>";
    String result = service.operation("sayHello").inputType(inputType)
            .expectedOutputType(outputType).sendInOut(message)
            .getContent(String.class);
    // validate the results
    String control = "<sayHelloResponse xmlns=\"urn:com.example.switchyard:switchyard-example:1.0\">"
            + "<string>Hello, Bob</string></sayHelloResponse>";
    Assert.assertTrue("Unexpected result: " + result,
            XMLUnit.compareXML(control, result).identical());
}
Notice the use of inputType() and expectedOutputType() in the operation invocation. These identify the message types specified through the interfaces in the switchyard.xml file.
Run the test and verify that it fails.
The step describes how to implement transformer class created earlier. At the end of this step, you will run the unit tests to verify they pass.
Open ExampleServiceTransformers.java and implement the transformStringToSayHelloResponse() as follows:
@Transformer(to = "{urn:com.example.switchyard:switchyard-example:1.0}sayHelloResponse")
public String transformStringToSayHelloResponse(String from) {
    return "<sayHelloResponse xmlns=\"urn:com.example.switchyard:switchyard-example:1.0\">"
            + "<string>" + from + "</string></sayHelloResponse>";
}
And implement transformSayHelloToString() as follows:
@Transformer(from = "{urn:com.example.switchyard:switchyard-example:1.0}sayHello") public String transformSayHelloToString(Element from) { return from.getTextContent(); }
Notice the return type of transformStringToSayHelloResponse() was changed from Element to String. The annotation specifies the to type as sayHelloResponse, but the data is returned as a String. SwitchYard provides low-level transformers for converting XML to/from a variety of Java accessible types (e.g. String, Element, Document, byte[]).
Run the unit tests to verify they pass.
This step describes how to add a SOAP gateway binding to the composite service. This will allow external clients to access your service using SOAP HTTP.
Refer to the Service Binding section of Application Basics in the User Guide for general details.
Refer to the SOAP section in the Developer Guide for specific details.
Add a SOAP endpoint to the service by dragging the SOAP tool under the Bindings section of the tool palette onto the new service. This will open the new binding wizard. Specify switchyard-example for the Context Path and press Finish.
Save the changes made to the switchyard.xml file.
Your application is configured with a single service, implemented using a Java bean, and exposed to clients as a SOAP HTTP service. You also have unit tests for the service and transformation logic. The view of the final state of the project and configuration:
This step describes how to deploy your project to a server using Eclipse. You will use the server you configured in one of the previous steps.
Open the Servers view.
Make sure the server is started (right-click, Start).
Right-click the project, Run As→Run On Server. Select the server and press Finish.
You can view the WSDL associated with your service by appending ?wsdl to the URL for your service.
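With the switchyard-example context path and the ExampleService service used in this tutorial, the WSDL address typically has the form shown below; the host and port (localhost:8080 here) depend on your server configuration, and you can fetch it from the command line as well:
curl http://localhost:8080/switchyard-example/ExampleService?wsdl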
If you see a ClassNotFoundException related to SwtichYardTestKit in the console log, your test resources may be included in the deployment. To correct this, right-click the project and select Maven→Update Project... You can verify the test resources have been removed from the deployment assembly by looking at the Deployment Assembly page in the Properties for the project.
This step describes how to test the service using a SOAP HTTP clients. You will use the web services tester that ships with Eclipse, but you could use the web services test/debug tool you are most familiar with.
Right-click the WSDL file (ExampleService.wsdl) and select Web Services→Test with Web Services Explorer. Exercise the service using the tester.
Right-click the server in the Servers view and select Show In→Web Management Console. Refer to the SwitchYard management console documentation for details.
Flare. to a Mesh Renderer.
Flares work by containing several Flare Elements on a single Texture.
The model-based activation workflow¶
This workflow implements a two-step – registration, followed by activation – process for user signup.
Note
Use of the model-based workflow is discouraged
The model-based activation workflow was originally the only
workflow built in to
django-registration, and later was the
default one. However, it no longer represents the best practice for
registration with modern versions of Django, and so it continues to
be included only for backwards compatibility with existing
installations of
django-registration.
If you’re setting up a new installation and want a two-step process with activation, it’s recommended you use the HMAC activation workflow instead.
Also, note that this workflow was previously found in
registration.backends.default, and imports from that location
still function in
django-registration 2.1 but now raise
deprecation warnings. The correct location going forward is
registration.backends.model_activation.
Default behavior and configuration¶
To make use of this workflow, simply add
registration to your
INSTALLED_APPS, run
manage.py migrate to install its model,
and include the URLconf
registration.backends.model_activation.urls at whatever location
you choose in your URL hierarchy. For example:
from django.conf.urls import include, url urlpatterns = [ # Other URL patterns ... url(r'^accounts/', include('registration.backends.model_activation.urls')), # More URL patterns ... ]
This workflow makes use of the following settings:
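In particular, the length of the activation window is controlled by the ACCOUNT_ACTIVATION_DAYS setting, and registration as a whole can be switched off with REGISTRATION_OPEN. A minimal sketch — the values shown are examples only:
# settings.py
ACCOUNT_ACTIVATION_DAYS = 7   # number of days users have to activate their account
REGISTRATION_OPEN = True      # set to False to disable new signups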
By default, this workflow uses
registration.forms.RegistrationForm as its form class for
user registration; this can be overridden by passing the keyword
argument
form_class to the registration view.
Two views are provided:
registration.backends.model_activation.views.RegistrationView and
registration.backends.model_activation.views.ActivationView. These
views subclass
django-registration's base
RegistrationView and
ActivationView, respectively, and
implement the two-step registration/activation process.
Upon successful registration – not activation – the user will be
redirected to the URL pattern named
registration_complete.
Upon successful activation, the user will be redirected to the URL
pattern named
registration_activation_complete.
This workflow uses the same templates and contexts as the HMAC activation workflow, which is covered in the quick-start guide. Refer to the quick-start guide for documentation on those templates and their contexts.
How account data is stored for activation¶
During registration, a new instance of the user model (by default,
Django’s
django.contrib.auth.models.User – see the custom
user documentation for notes on using a different
model) is created, and the following activation data is stored for it:
A OneToOneField to the user model, representing the user account for which activation information is being stored.
activation_key¶
A 40-character
CharField, storing the activation key for the account. Initially, the activation key is the hex digest.
user
- The user registering for the new account.
AlertControl.ShowPinButton Property
Gets or sets whether the Pin button must be displayed in newly created alert windows.
Namespace: DevExpress.XtraBars.Alerter
Assembly: DevExpress.XtraBars.v20.1.dll
Declaration
[DefaultValue(true)] [DXCategory("Appearance")] public virtual bool ShowPinButton { get; set; }
<DefaultValue(True)> <DXCategory("Appearance")> Public Overridable Property ShowPinButton As Boolean
Property Value
Remarks
An alert window can display the Close, Pin and Dropdown control buttons. The availability of these buttons is specified by the AlertControl.ShowCloseButton, ShowPinButton and AlertControl.PopupMenu properties, respectively. The position of the control buttons is specified by the AlertControl.ControlBoxPosition property.
Clicking the Pin button forces an alert window to stay on-screen until the button is clicked again. If the window is unpinned and the mouse cursor doesn't hover over the window for a specific time, the window is automatically destroyed.
It's also possible to display custom buttons in alert windows. To do this, add items to the AlertControl.Buttons collection.
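For instance, the following C# sketch — alertControl1 is an assumed AlertControl instance created at design time — hides the Pin button for alert windows shown afterwards:
// Applies only to alert windows created after this point.
alertControl1.ShowPinButton = false;
alertControl1.Show(this, "New Message", "You have received a new notification.");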
Changing the ShowPinButton option is not in effect for existing alert windows. The option is applied to newly created windows.
Introduction¶
With the full modeling approach, developers implement their model training routines in a python script file and directly use python functions for defining and using PyTorch datasets, TensorFlow datasets, and Dask dataframes (for XGBoost modeling) based on Lucd virtual datasets (defined in the Unity client). Additionally, a developer must call functions for uploading trained models and metadata (e.g., model training performance metrics) to the Lucd backend. The advantage of using the full model approach is that developers are free to “carry-over” modeling and customized and/or experimental performance analysis techniques from previously written code. Full model examples are contained in The Lucd Model Shop.
Full Model Format¶
Full models are implemented using python scripts. As opposed to using a
main function, the code’s entrypoint function
must be called
start. The arguments passed to start are described in the sections below.
As a further note, in full model scripts, except blocks (for handling exceptions) MUST end with the
raise statement,
as opposed to another terminating statement like
return. This ensures that the status of the model is accurately
captured in the Unity client.
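A hypothetical skeleton of such a script is sketched below; the argument name is a placeholder (the real arguments are listed in Tables 1 and 2) and the omitted dataset, training, and upload calls are specific to the Lucd framework and your model:
def start(args):
    """Entrypoint called by the Lucd backend (sketch only)."""
    try:
        # Build datasets from the Lucd virtual dataset, train the model,
        # and upload the trained model and its metrics here.
        pass
    except Exception:
        # Clean up or log as needed, but always end the except block with
        # raise so the model's status is reported correctly in the Unity client.
        raise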
TensorFlow and PyTorch¶
Table 1 describes the python arguments (defined in the Unity client when starting model training) which are always
passed to the
start function for TensorFlow and PyTorch models.
Table 1. Full model python script arguments for TensorFlow and PyTorch models.
Dask XGBoost¶
Table 2 describes the python arguments passed to the
start function for Dask XGBoost models.
Table 2. Full model python script arguments for Dask-XGBoost models.
SSMA Feature Overview
Decommissioning
Decommissioning a cluster requires only a few commands, but beware that it will irretrievably destroy the cluster, its workloads and any information that was stored within. Before proceeding, it is important to verify that you:
- Have the correct details for the cluster you wish to decommission
- Have retrieved any valuable data from the cluster
Destroying the model
It is always preferable to use a new Juju model for each Kubernetes cluster. Removing the model is then a simple operation.
It is useful to list all the current models to make sure you are about to destroy the correct one:
juju models
This will list all the models running on the current controller, for example:
Controller: k8s-controller

Model           Cloud/Region   Status     Machines  Cores  Access  Last connection
controller      aws/us-east-1  available  1         4      admin   just now
default         aws/us-east-1  available  0         -      admin   8 hours ago
k8s-devel       aws/us-east-1  available  9         24     admin   3 hours ago
k8s-production  aws/us-east-1  available  12        28     admin   5 minutes ago
k8s-testing     aws/us-east-1  available  9         24     admin   2 hours ago
To proceed, use the
juju destroy-model command to target the model you wish to remove. For example:
juju destroy-model k8s-testing
You will see a warning, and be required to confirm the action. Juju will then continue to free up the resources, giving feedback on the process. It may take some minutes to complete depending on the size of the deployed model and the nature of the cloud it is running on.
WARNING! This command will destroy the "k8s-testing" model.
This includes all machines, applications, data and other resources. Continue [y/N]? y
Destroying model
Waiting on model to be removed, 7 machine(s), 4 application(s)...
Waiting on model to be removed, 6 machine(s)...
...
Waiting on model to be removed, 3 machine(s)...
Waiting on model to be removed...
Model destroyed.
You should confirm that the cloud instances have been terminated with the relevant cloud console/utilities.
Destroying a controller
If there are no longer any cluster models attached to the controller, you may wish to remove the controller instance as well. This is performed with a similar command:
juju destroy-controller <controller-name>
As previously, there is a confirmation step to make sure you wish to remove the controller.
The command will return an error if there are populated models still attached to the controller.
Removing/editing Kube config
If you have permanently removed clusters, it is also advisable to remove their entries in the Kubernetes configuration file (or remove the file entirely if you have removed all the clusters).
By default the file is located at
~/.kube/config. It is a YAML format file, and each cluster block looks similar to this:
- cluster: certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURPekNDQWlPZ0F3SUJBZ0lKQU9HTm9 PM1pNb3RGTUEwR0NTcUdTSWIzRFFFQkN3VUFNQmd4RmpBVUJnTlYKQkFNTURUTTBMakkwTk M0eE9USXVORGt3SGhjTk1UZ3hNREF4TURnek5qVTFXaGNOTWpnd09USTRNRGd6TmpVMQpXa kFZTVJZd0ZBWURWUVFEREEwek5DNHlORFF1TVRreUxqUTVNSUlCSWpBTkJna3Foa2lHOXcw QkFRRUZBQU9DCkFROEFNSUlCQ2dLQ0FRRUE4YThJVytCUTM5c0p3OENyR0c5MmlYSUlWczN QOElEVVJvOTMyVFVYcG05UWkwSUgKeVF0a2N1WEVpREhlbUgwK1RORHRmaFZ4cm9BRjQrVE czR3JWZXc0YzgrZE0zNWJMY0lMRkl1L1UydlR4NkRXbgpDa2lwblhJVlc1QUxXa1hqRUh3N TUvWnk3S0F2SjVjS0h5WnhMYzY1ZFZqVjJYNkQxRHhJRXh0c2dDVnB2R1gvCmRpK1ppZlJX eFIwR0l5SkM3b29VaEVjcitvQVpMOFc2YklUMUlwcklXUGQ1eWhJck10MmpmaE42NWVkV1h jYkoKNERQeEpIOVlDNFFqSC84OHNJdWVJMWo4S1NYQjdwbUJxMzJHYXZuaFp3K2M5bG1KSl E5WjNZM2dla3lBUlZDRQpwUUU5T3BYR01QOCtCdng4QXdrQW9obE83RE1xQTlMaTl3QXExU UlEQVFBQm80R0hNSUdFTUIwR0ExVWREZ1FXCkJCUXRaa3paWmxKSmZKMGZtbWNPZU9pR0VB L3d1REJJQmdOVkhTTUVRVEEvZ0JRdFprelpabEpKZkowZm1tY08KZU9pR0VBL3d1S0VjcEJ vd0dERVdNQlFHQTFVRUF3d05NelF1TWpRMExqRTVNaTQwT1lJSkFPR05vTzNaTW90RgpNQX dHQTFVZEV3UUZNQU1CQWY4d0N3WURWUjBQQkFRREFnRUdNQTBHQ1NxR1NJYjNEUUVCQ3dVQ UE0SUJBUUJnCjVndFpyY0FLUlFSYUJFZDFiVm5vRXpoZkxld2RXU2RYaEZGRXB6bjlzdG05 VGdVM2ZLREJ0NktUY3JKN2hqQWQKMUlUbUc1L2ExaUlDM29qQ2d3c1o3cnFmRlhkRGQzcVZ GdjJySmZEN2ljeGV2c0NjWTdiS1hlYy9QdVgxQmxlMwo1amRjSWRkZnhqZ1M3K2dibCtQcG owbm9OR0c5MUgydWtBWTlaei9FUHdZckhuV1V1V1o5Z3JTZlVGam1ZMTNWCjkxZmF0S2R2d lU1blFPUXdkdThPVHlFRGk2blA4ckN4bEJjRW1hN3hkM3c5djI0NUlaRnd5QTJBMlR6emFJ M04KYm0vMVNyL2tTNlZCSi9sZ2s3ampxRWFicmpFakluMlU4aGkzRkluRnBkZkZlUXhBaW5 JcUx5dGRzeXY5aFZVbQpKQ3luNW8yaGVjSTFsaDU3RFRtYQotLS0tLUVORCBDRVJUSUZJQ0 FURS0tLS0t server: name: conjure-canonical-kubern-fc3 contexts: - context: cluster: conjure-canonical-kubern-fc3 user: conjure-canonical-kubern-fc3 name: conjure-canonical-kubern-fc3 current-context: conjure-canonical-kubern-fc3 kind: Config preferences: {} users: - name: conjure-canonical-kubern-fc3 user: password: sZVKhY7bZK8oG7vLkkOssNhTzKZlBmcG username: admin | https://deploy-preview-267--cdk-docs-next.netlify.app/kubernetes/docs/decommissioning | 2020-10-20T00:45:52 | CC-MAIN-2020-45 | 1603107867463.6 | [] | deploy-preview-267--cdk-docs-next.netlify.app |
This module explores the use of the preprocessor to help achieve portability across hardware platforms, perform inclusion of source files and develop macros to aide in program readability and debugging.
The C preprocessor can conceptually be thought of as a program that processes the source text of a C or C++ program before the compiler. It can be an independent program or its functionality may be embedded in the compiler. It has three major functions:
Macro replacement is the replacement of one string by another, conditional inclusion is the selective inclusion and exclusion of portions of source text on the basis of a computed condition, and file inclusion is the insertion of the text of a file into the current file. Actions of the preprocessor are controlled by special directives placed in the source file. A preprocessor directive begins with the character # on a new line, and is terminated by the newline character unless continued on the next line by placing a backslash at the end of the line. Whitespaces may precede or follow the directive- introducing character #.
The general form for a simple macro definition is
#define macro-name value
and it associates with the macro-name whatever value appears from the first blank after the macro-name to the end of the line. The value constitutes the body of the macro. Previously defined macros can be used in the definition of a macro. Notice that the value of the macro does not end with a semicolon. The preprocessor replaces every occurrence of a simple macro in the program text by a copy of the body of the macro, except that the macro names are not recognized within comments or string constants. Because the macros are used within expressions in the body of the program, it is not appropriate to end a macro with a semicolon. Macros that represent single numeric, string or character values can also be referred to as defined constants. Some examples of simple macro definitions are
/* mass of an electron at rest in grams */ #define ELECTRON 9.107e-28 /* mass of a proton at rest in grams */ #define PROTON 1837 * ELECTRON /* number of bits in an integer */ #define BITSININT 32
The #define directive can also be used for defining parameterized macros. The general form for defining a parameterized macro is
#define macro-name(param1, param2, ...) body-of-macro
Parameterized macros are primarily used to define functions that expand into in-line code. Some examples of parameterized macro definitions are
#define ABS(N) ( (N) >= 0 ? (N) : -(N) )
#define READ(I) scanf( "%d", &I )
#define CONVERT(I) \
    printf("decimal %d = octal %o, hex %x\n", I, I, I )
A macro can be defined to have zero parameters as in
#define getchar() getc(stdin)
which is useful for simulating functions that take no arguments. The preprocessor performs two levels of replacement on parameterized macros: first the formal parameters in the body of the macro are replaced by the actual arguments, and then the resulting macro body is substituted for the macro call. Thus, the preprocessor will replace the following statements
x = ABS(x);
READ(n);
CONVERT(n);
c = getchar();

by

x = ( (x) >= 0 ? (x) : -(x) );
scanf( "%d", &n );
printf( "decimal %d = octal %o, hex %x\n", n, n, n );
c = getc(stdin);
Arguments in a macro call can be any sequence of tokens, including commas, provided that the sequence is bracketed within a pair of parentheses as in
#define DEBUG(FORMAT,ARGS) printf(FORMAT, ARGS)

DEBUG("%s = %f\n", ("x",0) );

which is replaced by

printf("%s = %f\n", "x", 0 );
One danger with macro development is that at the time of macro development it is unknown as to what types of expressions the macro will be used. Liberal use of parenthesis is encouraged when developing a macro. One example of the danger associated with macros is the following:
#include <stdio.h> #define SQR(x) x * x /* square a number */ int main() { int result; int a=5, b=6; result = SQR( 4 ); /* everything is OK */ result = SQR( a+b ); /* not what was desired */ }
In the above example, the passing of a+b to the macro results in the expanded code:
result = a+b * a+b;
which is evaluated as a + (b * a) + b, which will not give the answer that was expected. What was expected was 121 and what was received was 41. The problem of course is the evaluation of the operators involved in the expanded macro. The multiplication operator is evaluated before the addition operator. By adding parentheses the following is produced:
#define SQR(x) (x * x)
which, when passed a+b, expands to (a+b * a+b), which is still evaluated as a + (b*a) + b and does not give what is desired. Therefore, more parentheses are required:
#define SQR(x) ( (x) * (x) )
This will finally give the desired results to the macro expansion.
If a macro parameter appears inside a string in the body of a macro, the parameter is not replaced by the corresponding argument at the time of macro expansion. Thus, if you define a macro as
#define PRINT(V,F) printf("V = %F", V) and call it as PRINT(i,d); the call will expand into printf("V = %F", i);
ANSI C introduced a new preprocessing operator #, called the stringizing operator, which in conjunction with string concatenation provides a facility to overcome this difficulty. If a parameter follows the character # in the definition of a parameterized macro, both # and the parameter are replaced during macro expansion by the corresponding actual argument enclosed with double quotes. For example, given the macro definition
#define PRINT(V,F) printf(#V " = " #F, V ) the macro call PRINT(i,%d); expands into printf("i" " = " "%d", i ); which after string concatenations becomes printf("i = %d", i );
A \ character is automatically inserted before each " or \ character that appears inside, or surrounding, a character constant or string literal in the argument. For example, given the macro definition
#define PRINT(s) printf("%s\n", #s) the macro call PRINT(use \ ("backslash") not /); expands into printf("%s\n", "use \\ (\"backslash\") not /");
ANSI C introduced another preprocessing operator ##, called the token pasting operator, to build a new token by macro replacement. The ## operator is recognized within both forms of macro definitions, and concatenates the two preprocessing tokens surrounding it into one composite token during a macro expansion. For example, given the macro definition
#define processor(n) CPU: ## n the macro call processor(586) expands into CPU:586
Conditional inclusion allows selective inclusion of lines of source text on the basis of a computed condition. Conditional inclusion is performed using the preprocessor directives:
#if #ifdef #ifndef #elif #else #endif
A directive of the form
#if constant-expression
checks whether the constant-expression evaluates to nonzero (true) or 0 (false). A directive of the form
#ifdef identifier
is equivalent in meaning to
#if 1
when identifier has been defined, and to
#if 0
when identifier has not been defined, or has been undefined with a #undef directive. The #ifndef directive has just the opposite sense, and a directive of the form
#ifndef identifier
is equivalent in meaning to
#if 0
when identifier has been defined, and to
#if 1
when identifier has not been defined, or has been undefined with a #undef directive. An identifier can be defined by writing
#define identifier
or by using the -D switch on the command line of the compiler. For instance, to define that a program is being compiled in a PC environment either of the following would work.
#define PC or cc -DPC program.c
Both establish the identifier PC as having been defined. Notice that #ifdef and #ifndef do not look at any value associated with the identifier; they merely look to see if the identifier has been defined. Conditional inclusion is frequently used in developing programs that run under different environments. For example, BITS_IN_INT may be defined as
#if HOST == PC #define BITS_IN_INT 16 #elif HOST == DECSYSTEM10 #define BITS_IN_INT 36 #else #define BITS_IN_INT 32 #endif
The preprocessor can then select an appropriate value for BITS_IN_INT, depending upon the defined value of HOST. Conditional inclusion is also used to control debugging. You may write in your program
#ifndef DEBUG if( !(i % FREQUENCY) ) printf("Iteration: %d\n",i); #endif
and then turn debugging on and off simply by defining and undefining DEBUG. Instead of embedding #ifdef DEBUG directives all over the code when you require many debugging statements in a program, you may define a PRINT macro as
#ifndef DEBUG #define PRINT(arg) #else #define PRINT(arg) printf arg #endif
and then write
PRINT( ("iteration: %d\n",i) ); PRINT( ("x = %f, y = %f\n", x, y) );
which expands into
printf ("iteration: %d\n", i); printf ("x = %f, y = %f\n", x, y );
or null statements depending on whether DEBUG has been defined or not. Note the use of two pairs of parentheses when calling PRINT.
The traditional K & R C standard specified certain predefined macros be supported by conforming compilers. These macros are used mostly in debugging. These macros are:
__LINE__ __FILE__ __DATE__ __TIME__ __STDC__
All of these macro names are formed by having two underscore characters both precede and follow the macro name. The macros __LINE__ and __FILE__ are replaced by the current line number and file name when they are expanded. The macros __DATE__ and __TIME__ will expand into the date and time of when the compilation took place. The __DATE__ and __TIME__ macros can be used to initialize program variables. The __STDC__ macro will be 1 or undefined depending upon whether the current compiler conforms to the ANSI C standard or not.
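For example, a small diagnostic macro can be assembled from these predefined macros:
#include <stdio.h>

/* Print the source location and build timestamp along with a message. */
#define TRACE(msg) \
    printf("%s:%d (built %s %s): %s\n", __FILE__, __LINE__, __DATE__, __TIME__, msg)

int main(void)
{
    TRACE("program started");
    return 0;
}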
The #include directive allows for the inclusion of a source file into the current source file that is being processed. The form of the #include directive is:
#include "filename" or #include
where the filename is the name of a file, possibily with a pathname associated, that contains source statements that are to be placed at the current location in the file being processed. The included file can contain other include statements as well as other preprocessor directives and C or C++ statements. Notice the use of double quotes ( ” “) or angle brackets ( < >) to surround the name of the file. Traditionaly, the use of double quotes to surround the name of the file to be included has indicated that the file is be looked for in the current or local directory or the pathname indicated with the filename. If not found there, then the subdirectory where the standard C header files reside is to be searched. The use of angle brackets to surround the name of the file indicates that the file is to be looked for in the standard C header file subdirectory, no search of the local directory is made. All the following are valid representations of filenames to be included:
#include "local.h" /* searches for this file in current */ /* directory; if not found then look */ /* in the subdirectory associated */ /* with the standard C header files */ #include "/usr/local/include/stuff.h" /* searches for the file */ /* in the directory */ /* indicated; if not */ /* found then look */ /* in the subdirectory */ /* associated with the */ /* standard C header */ /* files */ #include /* searches for this file in the */ /* subdirectory associated with the */ /* standard C header files */ #include type.h> * searches for this file in the sys */ /* subdirectory under the subdirectory */ /* associated with the standard C */ /* header files */ #include "func.c" /* search for this file in the local */ /* subdirectory; if not found then look*/ /* in the subdirectory associated */ /* with the standard C header files; */ /* this file is a C source file */
Most files that are included are header files which contain macro definitions, defined constants, type definitions ( struct or union templates), enumerated types and function prototypes. Header files should not contain global variable declarations. The use of global variables should be avoided if at all possible because of the difficulty in controlling the integrity of data values placed in a global variable. Global variables must not be declared in header files because most applications consist of several source files. Each source file is a module of the application and may consist of one or more C or C++ functions needed to perform a specific task. Each source file usually includes not only the standard C header files needed for that module but also will include a header file that is application specific. This application specific header file will have the macro definitions, defined constants, type defiitions, enumerated types and function prototypes for the current application. If each source file associated with an application includes this local header file and if there is a global variable declaration in that header file the application will not be linked by the linkage editor. The linker will complain that there is a “duplicate redefinition of ....”. Since the linker is trying to bind several object files together and each object file has the same global declaration, the linker does not know which global declaration to use. If it is absolutely necessary to have global declarations in a header file, then protect the header file from multiple inclusion by surrounding the contents of the header file as follows:
/***********************************************************
 * Header File : sample.h
 * Application : sample
 ***********************************************************/

#ifndef _SAMPLE_H
#define _SAMPLE_H

#include <stdio.h>
.
.
.
#endif
The above code uses the preprocessor to detect whether the header file has been previously included. On the first inclusion of the header file, the identifier _SAMPLE_H is defined and the statements composing the header file are processed. On subsequent inclusions of the header file, none of the statements within the header file are processed, because of the conditional inclusion.
The following preprocessor directives were added by the ANSI C standard.
#line
#error
#pragma
#line provides a mechanism for altering the settings of __LINE__ and __FILE__. This directive has the form of:
#line lineno "filename"
where lineno is a new value for __LINE__ and “filename” is a new value for __FILE__. The “filename” parameter is optional, but if present, then lineno must be present. #error will immediately terminate the compilation. This would most likely be used in conjunction with a #if directive. #pragma is an implementation dependent directive which allows a compiler vendor to add extensions to the standard preprocessor. | https://docs.aakashlabs.org/apl/cphelp/chap14.html | 2020-10-19T23:47:27 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.aakashlabs.org |
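For example, a small illustrative fragment (the file name gen.c and the platform macros are just examples) might use these directives as follows:

/* force a compile-time failure on unsupported platforms */
#if !defined(__unix__) && !defined(_WIN32)
#error This program requires a Unix-like system or Windows
#endif

/* pretend the following code came from line 100 of gen.c,
   for example because it was produced by a code generator */
#line 100 "gen.c"
int generated_value = 42;   /* an error here is reported as gen.c, line 100 */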
Concepts
A user application requests to open an audio role, which is bound to a stream, which is bound to a zone. This allows the application to only care about the audio role.
For example, a navigation application can request the navigation role, which is bound to the navigation stream defined by the HAL. This stream is bound to the driver zone, which is the closest speaker to the driver.
Roles
The high level API allows applications to open roles such as emergency, navigation or multimedia. A role is bound to a stream, which is basically a device URI. When a role is opened, then the policy engine is notified and executes an interrupt on every other opened role with a lower priority. An interrupt is a policy engine function that can change the volume, mute or unmute, change the stream’s state.
This behaviour allows the policy engine to take actions like lowering the radio volume when an application wants to play something on the emergency role.
Streams
A stream is basically a device URI that you can open to write audio data. For example, it can be “hw:2,0,1”, which means that you have to use this as an alsa device URI.
Zones
Multiple speakers are spread around inside a vehicle; they are named depending on their position, like front-center, front-left, front-right, rear-left, rear-right, etc…
Zones are an abstraction of positional audio. A zone is made of one or more speakers and describes logical audio areas like driver, front, rear, etc. | https://docs.automotivelinux.org/docs/en/halibut/apis_services/reference/audio/4a-framework/concepts.html | 2020-10-20T00:23:37 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.automotivelinux.org |
Install Redis

Although it is possible to install and run Redis on almost any modern operating system, all Sensu users are encouraged to install and run Redis on one of the following supported platforms:

- Ubuntu/Debian
- RHEL/CentOS
- Amazon Web Services ElastiCache
Amazon Web Services ElastiCache with Redis 2.8.x may be used to provide Redis service for Sensu. See Amazon’s Getting Started with Amazon ElastiCache guide for details on provisioning ElastiCache in AWS. Once provisioned in AWS, Sensu can be configured to use the ElastiCache endpoint address and port.
WARNING: Sensu Support is available for Redis installations on Ubuntu/Debian and RHEL/CentOS operating systems, and via Amazon Web Services ElastiCache, only. | https://docs.sensu.io/sensu-core/1.4/installation/install-redis/ | 2020-10-20T01:03:25 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.sensu.io |
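For example, on Ubuntu/Debian systems Redis can typically be installed from the distribution's packages (the exact package name can vary by release):

sudo apt-get update
sudo apt-get install redis-server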
The Branding Tool makes it simple for you to custom design your store by implementing custom colors that help your store target specific customers and increase sales.
Background Color: Will modify the color of your store body page.
Primary Theme Color: Sets the text color and the background color of themed elements to the theme primary color.
Secondary Theme Color: The secondary color is used for floating action buttons and other interactive elements, serving as visual contrast to the primary color.
Primary & Secondary Text Color: Users can insert colorized text, such as red, orange, green, blue, indigo and many others, and can specify its background color at the same time. The background color and font color should always be different, so that the colors do not overshadow each other and prevent users from viewing store product details.
Navbar Color: A navigation bar (or navigation system) is a section of a graphical user interface intended to aid customers in accessing store products or important information.
Admins can set up the navigation color according to the theme's color scheme.

Containers & Page Cards: A card is a flexible and extensible content container. It includes options for headers and footers, a wide variety of content, contextual background colors, and powerful display options, which will be used to display the store catalog and other important information on your store. Users have the option to change the coloring of each card by selecting the custom color property as shown in the display.

Buttons Styles: Buttons are primarily used to "do something" on a website. If the action is to create, edit, delete or do anything else to some piece of information, use a button.

The Branding Tool gives access to a wide range of button styles and colors to accommodate your store's theme and products.
Customer Groups are a way to group customers together for various reasons. For example let's say a group of customers can place orders but the orders require final management approval before being submitted to your store for processing. By grouping the customers together they can all utilize the site and even checkout but the order is routed to an approver or approvers for final review and submission. | https://docs.vendorfuel.com/plugin/customers/customer-groups | 2020-10-19T23:37:54 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.vendorfuel.com |
Getting started¶
Once you have finished the installation, you can try to run the following test cases to familiarize yourself with CEASIOMpy.
Without RCE:¶
If you want to use RCE to create your workflow, you can directly go to the next section. If you cannot or do not want to install RCE on your computer, you can still use CEASIOMpy through the module ‘WorkflowCreator’.
Test Case 1 : Simple workflow¶
The module ‘WorkflowCreator’ can be found at /CEASIOMpy/ceasiompy/WorkflowCreator/workflowcreator.py; you can run it by simply typing in your terminal:
cd YourPath/CEASIOMpy/ceasiompy/WorkflowCreator/
python workflowcreator.py -gui
Hint
If you use Linux you can easily set an alias in your .bashrc file to run these commands with a shortcut of your choice.
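For example, assuming CEASIOMpy is installed in ~/CEASIOMpy, an alias in your .bashrc could look like this (the alias name and path are only suggestions):

alias ceasiompy-gui='cd ~/CEASIOMpy/ceasiompy/WorkflowCreator/ && python workflowcreator.py -gui'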
When you run this module, a GUI will appear. The first thing to do is to choose the CPACS file we will use for this analysis: click on "Browse" and select "D150_simple.xml", a test aircraft similar to an A320. Then, you will have the possibility to choose which modules to run and in which order. For this first test case, we will use only the tab "Pre". On the left you will see the list of all available modules; when you select one you can add it to the list of modules to execute. You can also remove modules from this list or change the order with the buttons. We will create a simple workflow with only three modules:
SettingsGUI -> WeightConventional -> Range.
Once you have added these three modules in order, you can click "Save & Quit". The first module to run will be "SettingsGUI"; it will show you all the available options for the next modules. All the options are pre-filled with default values. You don't need to change any values for this example, so you can just click "Save & Quit". The two next modules will be executed automatically without showing anything except some results in the terminal.
Test Case 2 : Aerodynamic database with PyTornado¶
In this example we will see how to create an aerodynamic database with PyTornado and plot them on a graph. As in test case 1, we will run ‘WorkflowCreator’. In the GUI, after selecting the same D150_simple.xml CPACS file, we will select some modules in the list and place them in order to create the following workflow:
CPACSCreator -> SettingsGUI -> PyTornado -> PlotAeroCoefficients
Then, you can click "Save & Quit". The first module to be executed will be CPACSCreator; with this module you can modify the geometry of the aircraft. We won't make changes now, but if you want to learn how to use CPACSCreator, you can follow the link below:
If you apply some changes, save your modifications and close the CPACSCreator window. Now, the SettingsGUI window will appear, and first, we will import a new AeroMap. Click on 'Import CSV' to add a new AeroMap, select 'Aeromap_4points_aoa.csv' and 'OK'.
You can also click on the ‘aeromap_empty’ and delete it with the buttons. You must click on the button ‘Update’ to make the new AeroMap available for all modules.
Now, you can click on the 'PyTornado' tab; the AeroMap selected should be the one you imported before. We will not change the other options and just click 'Save & Quit'.
The software should run for a few seconds and, when the calculations are done, a plot of the aerodynamic coefficients should appear.
Test Case 3 : SU2 at fixed CL and Range¶
For this test case you can try to run the following workflow with the same aircraft. It will compute the range after performing a CFD analysis at fixed CL.
First, add all required modules to the workflow as illustrated in the figure below.
After that you can modify the different parameters for each module. For the CLCalculator you can choose under which condition you want to be able to fly. The required Cl will be computed and the SU2 analysis will modify the angle of attack in order to reach this value of Cl.
After that the SkinFriction module will add the friction term that is not taken into account by the SU2 computation, in order to have a corrected value of the drag.
The range is then computed and you can find your results within the CPACS file in the ToolOutput folder of the WorkflowCreator module. For the results of the CFD analysis you can find all the files in the WKDIR/CEASIOMpy_Run_DATE/ with the correct date.
Test Case 4 : Optimizing the CL¶
To launch an optimisation routine or a DoE, launch the WorkflowCreator tool with the GUI and select the modules you want to run in the routine in the ‘Optim’ tab and select the Optim option from the type list. Here the modules ‘WeightConventional’ and ‘PyTornado’ are chosen.
The next window that opens is the SettingsGUI, where you can tune the options specific to each module. Focusing on the options of the Optimisation tab, different options can be set. In our case the 'Objective' is set to 'cl' and the 'Optimisation goal' is set to 'max' in order to search for the maximal cl. The other options from the 'Optimisation settings' group are left at their default values and the 'DoE settings' group is not used in the case of an optimisation. The 'CSV file path' is left blank as we have not defined a file with the problem parameters.
After saving the settings a CSV file is automatically generated and opened with your standard CSV opener.
Here you can see all the parameters that can be used in the routine. The ones that appear in the objective function are labelled as ‘obj’ in the ‘type’ column, and the ones that are only outputs of some modules are labelled ‘const’, their type must not be changed. All the other parameters can have their values modified in the following columns :
Or you can add a new element to the file if you know what to add. Here we suppress all the elements that we do not desire to have in our routine and we end up with just the parameters that we want for this optimisation. Note that you can also let some cases blank in the ‘min’ and ‘max’ columns if you don’t want to restrain the domain on one side. The ‘min’ and ‘max’ values of the ‘obj’-labelled parameters are not taken into account.
Save the file and close it, you must then press the enter key into the terminal to launch the routine. After that the routine is running and you just have to wait for the results.
When the routine finishes two windows are generated containing the history plots of the parameters on one and the objective function on the other. After closing these windows the program closes and you finished the process !
For the post-processing you can go in the WKDIR folder, where you will find the CEASIOMpy_Run with the corresponding date at which you launched the routine. In this folder you will find the results of an initial run the program did before launching the optimisation loop and the 'Optim' folder, in which all the results of the routine are saved.
Driver_recorder.sql : Recorder of the routine from the OpenMDAO library. It is used to access the history of the objective function.
circuit.sqlite : File that is used to generate the N2 diagram of the problem.
circuit.html : This file represents an N2 diagram of the problem that was solved, showing the dependencies of the variables between the different modules.
Variable_library.csv : This file is the CSV that you modified before launching the routine.
Variable_history.csv : This file contains the value of all the desired parameters at each iteration, plus the basic information about the parameters (name, type, getcmd, setcmd).
Geometry : This folder contains the CPACS that is used in the routine at each iteration, this can be changed by tuning the ‘Save geometry every’ parameter in the Optimisation settings.
Runs: This folder contains the directories of all the workflow runs that were made during the routine. These folders are equivalent to a simple CEASIOMpy_Run workflow folder.
Test Case 5 : Surrogate model for SU2¶
Before using a surrogate model the first step is to create a model and train it over a data set, for that the SMTrain module must be used. First launch a DoE with the lift as an objective function and at least 25 sample points (the more the better). When the CSV file for the parameters opens, choose the wing span and the angle of attack as design variables.
After the DoE, launch a new workflow with the SettingsGUI and SMTrain modules. Here get the Variable_history file that was generated by the DoE, which is located under WKDIR/CEASIOMpy_Run_DATE/DoE/ and will serve as the training set. As we do not have a lot of data, we will use all of it to train the model by setting the % of training data to 1.0 and deactivating the plots used for validation. The model we chose this time is the simple kriging model KRG.
After setting the options launch the program, which will only take a few seconds before finishing, and go look for the trained model in the SM folder of the current working directory. This file cannot be opened normally, as it has been dumped using a special python library (ref to 'pickle'). Note that there is also a CSV called 'Data_setup' which was generated and contains the information about the model inputs/outputs, in case you want to check your model entries. Now comes the part where we call the SMUse module to get results with our surrogate.
For this part choose a CPACS file with different values than the one you fed to the model (either a new CPACS or one you modify using CPACSCreator). Launch a workflow with SettingsGUI and SMUse. In the settings, choose the resulting file containing the surrogate; you don't have to change any other option. Launch the program and now you have the resulting CPACS file in the ToolOutput folder of the SMUse module! If you take a look at the aeromap you chose for the computation you will see that only a value of cl has been added/modified.
With RCE:¶
To run the following workflow you need to have a running version of RCE with the CEASIOMpy module installed. For more information check out Step 3 of the installation page.
Test Case 1 : Simple workflow¶
We will create a simple workflow which contains a CPACS input and three modules.
CPACS input -> SettingsGUI -> WeightConventional -> Range
Your workflow should look like that:
Test Case 2 : Aerodynamic database with PyTornado¶
CPACS input -> CPACSCreator -> PyTornado -> SkinFriction -> PlotAeroCoefficients | https://ceasiompy.readthedocs.io/en/latest/user_guide/getting_started.html | 2020-10-20T00:43:13 | CC-MAIN-2020-45 | 1603107867463.6 | [] | ceasiompy.readthedocs.io |
Installation guide¶
The 3.1.1 release of django-registration supports Django 2.2, 3.0, and 3.1 on the following Python versions:
- Django 2.2 supports Python 3.5, 3.6, 3.7, and 3.8.
- Django 3.0 and 3.1 support Python 3.6, 3.7, and 3.8.
Normal installation¶
The preferred method of installing django-registration is via pip, the standard Python package-installation tool. If you don’t have pip, instructions are available for how to obtain and install it, though. | https://django-registration.readthedocs.io/en/stable/install.html | 2020-10-20T01:07:59 | CC-MAIN-2020-45 | 1603107867463.6 | [] | django-registration.readthedocs.io |
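For example, the package can typically be installed from PyPI with:

pip install django-registration

or, to pin the release documented here:

pip install django-registration==3.1.1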
Release Notes
New Features¶
- Added support for image-based models, including new image-based feature transformation operations and TensorFlow and PyTorch data loading functions for image data.
- Added support for user-defined feature transformation operations, specifically including the ability to define custom training data feature transformation operations in python, and upload them for usage in the Lucd GUI.
- Added functions to the eda.lib.lucd_ml module for simplifying the process of getting classification and regression predictions from TensorFlow estimators, as well as computing confusion matrices.
Changes¶
- Updated modeling framework to be compatible with TensorFlow v2.1 (from v1.3).
- Removed the “simplified/templated modeling approach”. Now users provide models via full python scripts.
- Fixed bugs in loading data into models from the Lucd unified dataspace.
- Fixed bugs in loading data for multiclass (> 2 classes) TensorFlow text classification models. | https://docs.lucd.ai/Documentation/Modeling%20Framework/Release%206.2.7/LMF%20Release%20Notes.html | 2020-10-20T00:17:29 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.lucd.ai |
Derives the numeric value for the week within the year (1, 2, etc.). Input must be the output of the DATE function or a reference to a column containing Datetime values. The output of this function increments on Sunday.
NOTE: If the source Datetime value does not include a valid input for this function, a missing value is returned.
There are differences in how the WEEKNUM function is calculated in Photon and Spark running environments, due to the underlying frameworks on which the environments are created:
- Photon week 1 of the year: The week that contains January 1.
- Spark week 1 of the year: The week that contains at least four days in the specified year.
Basic Usage
Column reference example:
derive type:single value:WEEKNUM(MyDate)
Output: Generates a column of values containing the numeric week number values derived from the MyDate column.
Syntax and Arguments
derive type:single value:WEEKNUM(datetime_col)
For more information on syntax standards, see Language Documentation Syntax Notes.
datetime_col
Name of the column whose week number value is to be computed.
- Missing values for this function in the source data result in missing values in the output.
- Multiple columns and wildcards are not supported.
Tip: You cannot insert constant Datetime values as inputs to this function. However, you can use the following:
WEEKNUM(DATE(2017,12,20)).
This is the doc that we have summarized for the addon widgets and the widgets that are frequently used on the single listing detail page. We have subdivided many parts so that users can customize them. Please refer to the widget descriptions below.
See here for how to find your template. Or go to the Page builder (Dashboard >> Javo >> Page builder) to find all the templates.
Listing detail: When you add a description from a template, the visible text is temporary text. Please understand that temporary text is shown because the template cannot show the values entered in each individual listing.
A widget that displays the FAQs entered in each listing. As described in the #1 description widget, please understand that what you see in the template is a temporary image.
A gallery widget that displays a description image for each listing. As described in the #1 description widget, please understand that what you see in the template is a temporary image.
This meta widget is a really useful widget. You can import and display the meta values for listings. No data, or any other value, can be displayed when modifying the template (because you have loaded the value of the template you are creating). However, the actual listing detail page shows the meta values for each listing. You can add a value by selecting the one you want.
This widget shows the listing author's information. You can set and display only the information you want. (Social links are shown only if there is a value in each listing.)
A widget associated with the Contact Form 7 plugin. You can show the contact form or set the form to be visible when you click the button.
Working hour: A widget that can be used if the addon is installed. You can show the working hours set for each listing. As described in the #1 description widget, please understand that what you see in the template is a temporary image.
A widget that shows the amenities of each listing. Please understand that amenities are not visible when editing the template. Actual listings show amenities.
A mini map widget that shows the location of each listing.
A review widget that is available when the review addon is installed. We have subdivided it into 5 parts to make it easier for users to customize. Settings for the Review widget can be configured in the back-end => listings => settings => review tab.
hi,
we have a simple stateless service with a default instance count of 6, deployed to a local five-node Service Fabric cluster; it errors out as below at the 6th instance creation. No port is given for the service endpoint, ports are dynamic, and there are no conflicts.
System.CRM ServiceReplicaUnplacedHealth_Secondary_9a78cf1e-252f- 4b29-9d0c-10947e92928b Tue, 03 Dec 2019 12:31:37 GMT 0.00:01:05.0 132198498979714109 true false The Cluster Resource Manager was unable to find a placement for one or more of the Service's Replicas: Secondary replica could not be placed due to the following constraints and properties: TargetReplicaSetSize: 6 Placement Constraint: N/A Parent Service: N/A Constraint Elimination Sequence: Down nodes count 0, Deactivated nodes count 0, Deactivating nodes count 0 Existing Secondary Replicas eliminated 5 possible node(s) for placement -- 0/5 node(s) remain. Nodes Eliminated By Constraints: Existing Secondary Replicas -- Nodes with Partition's Existing Secondary Replicas/Instances: -- FaultDomain:fd:/4 NodeName:_Node_4 NodeType:NodeType4 NodeTypeName:NodeType4 UpgradeDomain:4 Deactivation Intent/Status: None/None FaultDomain:fd:/3 NodeName:_Node_3 NodeType:NodeType3 NodeTypeName:NodeType3 UpgradeDomain:3 Deactivation Intent/Status: None/None FaultDomain:fd:/2 NodeName:_Node_2 NodeType:NodeType2 NodeTypeName:NodeType2 UpgradeDomain:2 Deactivation Intent/Status: None/None FaultDomain:fd:/1 NodeName:_Node_1 NodeType:NodeType1 NodeTypeName:NodeType1 UpgradeDomain:1 Deactivation Intent/Status: None/None FaultDomain:fd:/0 NodeName:_Node_0 NodeType:NodeType0 NodeTypeName:NodeType0 UpgradeDomain:0 Deactivation Intent/Status: None/None
Not sure why the 6th instance creation is failing when there is no port; the application is deployed on a Kestrel host in a reliable service.
How can we prevent this error from coming and have the 6th instance created? any help is greatly appreciated. | https://docs.microsoft.com/en-us/answers/questions/2221/stateless-service-with-instance-count-6-when-node.html | 2020-10-20T01:26:54 | CC-MAIN-2020-45 | 1603107867463.6 | [] | docs.microsoft.com |
ASP.NET MVC 4 Entity Framework Scaffolding and Migrations
Download Web Camps Training Kit
If you are familiar with ASP.NET MVC 4 controller methods, or have completed the "Helpers, Forms and Validation" Hands-On lab, you should be aware that much of the logic to create, update, list and remove any data entity is repeated throughout the application. Not to mention that, if your model has several classes to manipulate, you are likely to spend considerable time writing the POST and GET action methods for each entity operation, as well as each of the views.
In this lab you will learn how to use the ASP.NET MVC 4 scaffolding to automatically generate the baseline of your application's CRUD (Create, Read, Update and Delete). Starting from a simple model class, and without writing a single line of code, you will create a controller that contains all the CRUD operations, as well as all the necessary views. After building and running the simple solution, you will have the application database generated, together with the MVC logic and views for data manipulation.
In addition, you will learn how easy it is to use Entity Framework Migrations to perform model updates throughout your entire application. Entity Framework Migrations lets you modify your database with simple steps after the model has changed. With all this in mind, you will be able to build and maintain web applications more efficiently, taking advantage of the latest features of ASP.NET MVC 4.
Note
All sample code and snippets are included in the Web Camps Training Kit, available from at Microsoft-Web/WebCampTrainingKit Releases. The project specific to this lab is available at ASP.NET MVC 4 Entity Framework Scaffolding and Migrations.
Objectives
In this Hands-On Lab, you will learn how to:
- Use ASP.NET scaffolding for CRUD operations in controllers.
- Change the database model using Entity Framework Migrations.
Exercises
The following exercise make up this Hands-On Lab:
Note
This exercise is accompanied by an End folder containing the resulting solution you should obtain after completing the exercise. You can use this solution as a guide if you need additional help working through the exercise.
Estimated time to complete this lab: 30 minutes
Exercise 1: Using ASP.NET MVC 4 Scaffolding with Entity Framework Migrations
ASP.NET MVC scaffolding provides a quick way to generate the CRUD operations in a standardized way, creating the necessary logic that lets your application interact with the database layer.
In this exercise, you will learn how to use ASP.NET MVC 4 scaffolding with code first to create the CRUD methods. Then, you will learn how to update your model applying the changes in the database by using Entity Framework Migrations.
Task 1- Creating a new ASP.NET MVC 4 project using Scaffolding
If not already open, start Visual Studio 2012.
Select File | New Project. In the New Project dialog, under the Visual C# | Web section, select ASP.NET MVC 4 Web Application. Name the project MVC4andEFMigrations and set the location to the Source\Ex1-UsingMVC4ScaffoldingEFMigrations folder of this lab. Set the Solution name to Begin and ensure Create directory for solution is checked. Click OK.
New ASP.NET MVC 4 Project Dialog Box
In the New ASP.NET MVC 4 Project dialog box select the Internet Application template, and make sure that Razor is the selected View engine. Click OK to create the project.
New ASP.NET MVC 4 Internet Application
In the Solution Explorer, right-click Models and select Add | Class to create a simple Person class (POCO). Name it Person and click OK.
Open the Person class and insert the following properties.
(Code Snippet - ASP.NET MVC 4 and Entity Framework Migrations - Ex1 Person Properties)
using System;
using System.Collections.Generic;
using System.Linq;
using System.Web;

namespace MVC4EF.Models
{
    public class Person
    {
        public int PersonID { get; set; }
        public string FirstName { get; set; }
        public string LastName { get; set; }
    }
}
Click Build | Build Solution to save the changes and build the project.
Building the Application
In the Solution Explorer, right-click the controllers folder and select Add | Controller.
Name the controller PersonController and complete the Scaffolding options with the following values.
In the Template drop-down list, select the MVC controller with read/write actions and views, using Entity Framework option.
In the Model class drop-down list, select the Person class.
In the Data Context class list, select <New data context...>. Choose any name and click OK.
In the Views drop-down list, make sure that Razor is selected.
Adding the Person controller with scaffolding
Click Add to create the new controller for Person with scaffolding. You have now generated the controller actions as well as the views.
After creating the Person controller with scaffolding
Open the PersonController class. Notice that the full set of CRUD action methods has been generated automatically.
Inside the Person controller
Task 2- Running the application
At this point, the database is not yet created. In this task, you will run the application for the first time and test the CRUD operations. The database will be created on the fly with Code First.
Press F5 to run the application.
In the browser, add /Person to the URL to open the Person page.
Application: first run
You will now explore the Person pages and test the CRUD operations.
Click Create New to add a new person. Enter a first name and a last name and click Create.
Adding a new person
In the person's list, you can delete, edit or add items.
Person list
Click Details to open the person's details.
Person's details
Close the browser and return to Visual Studio. Notice that you have created the whole CRUD for the person entity throughout your application -from the model to the views- without having to write a single line of code!
Task 3- Updating the database using Entity Framework Migrations
In this task you will update the database using Entity Framework Migrations. You will discover how easy it is to change the model and reflect the changes in your databases by using the Entity Framework Migrations feature.
Open the Package Manager Console. Select Tools > NuGet Package Manager > Package Manager Console.
In the Package Manager Console, enter the following command:
PMC
Enable-Migrations -ContextTypeName [ContextClassName]
Enabling migrations
The Enable-Migration command creates the Migrations folder, which contains a script to initialize the database.
Migrations folder
Open the Configuration.cs file in the Migrations folder. Locate the class constructor and change the AutomaticMigrationsEnabled value to true.
public Configuration()
{
    AutomaticMigrationsEnabled = true;
}
Open the Person class and add an attribute for the person's middle name. With this new attribute, you are changing the model.
public class Person
{
    public int PersonID { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }
    public string MiddleName { get; set; }
}
Select Build | Build Solution on the menu to build the application.
Building the application
In the Package Manager Console, enter the following command:
PMC
Add-Migration AddMiddleName
This command will look for changes in the data objects, and then, it will add the necessary commands to modify the database accordingly.
Adding a middle name
(Optional) You can run the following command to generate a SQL script with the differential update. This will let you update the database manually (In this case it's not necessary), or apply the changes in other databases:
PMC
Update-Database -Script -SourceMigration: $InitialDatabase
Generating a SQL script
SQL Script update
In the Package Manager Console, enter the following command to update the database:
PMC
Update-Database -Verbose
Updating the Database
This will add the MiddleName column in the People table to match the current definition of the Person class.
Once the database is updated, right-click the Controller folder and select Add | Controller to add the Person controller again (Complete with the same values). This will update the existing methods and views adding the new attribute.
Updating the controller
Click Add. Then, select the values Overwrite PersonController.cs and the Overwrite associated views and click OK.
Updating the controller
Task 4- Running the application
Press F5 to run the application.
Open /Person. Notice that the data was preserved, while the middle name column was added.
Middle Name added
If you click Edit, you will be able to add a middle name to the current person.
Summary
In this Hands-On lab, you have learned simple steps to create CRUD operations with ASP.NET MVC 4 Scaffolding using any model class. Then, you have learned how to perform an end-to-end update in your application -from the database to the views- by using Entity Framework Migrations.

Appendix B: Using Code Snippets
With code snippets, you have all the code you need at your fingertips. The lab document will tell you exactly when you can use them, as shown in the following figure.
Start typing the snippet name
Press Tab to select the highlighted snippet
Right-click where you want to insert the code snippet and select Insert Snippet
Pick the relevant snippet from the list, by clicking on it | https://docs.microsoft.com/en-us/aspnet/mvc/overview/older-versions/hands-on-labs/aspnet-mvc-4-entity-framework-scaffolding-and-migrations | 2020-10-20T00:25:47 | CC-MAIN-2020-45 | 1603107867463.6 | [array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image6.png',
'Inside the Person controller Inside the Person controller'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image19.png',
'Adding a controller overwrite'], dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image27.png',
'Using Visual Studio code snippets to insert code into your project Using Visual Studio code snippets to insert code into your project'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image28.png',
'Start typing the snippet name Start typing the snippet name'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image29.png',
'Press Tab to select the highlighted snippet Press Tab to select the highlighted snippet'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image30.png',
'Press Tab again and the snippet will expand Press Tab again and the snippet will expand'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image31.png',
'Right-click where you want to insert the code snippet and select Insert Snippet Right-click where you want to insert the code snippet and select Insert Snippet'],
dtype=object)
array(['aspnet-mvc-4-entity-framework-scaffolding-and-migrations/_static/image32.png',
'Pick the relevant snippet from the list, by clicking on it Pick the relevant snippet from the list, by clicking on it'],
dtype=object) ] | docs.microsoft.com |
Run a rolling upgrade of an SOFS cluster to Windows Server 2016 in VMM
Important
This version of Virtual Machine Manager (VMM) has reached the end of support, we recommend you to upgrade to VMM 2019.
Cluster rolling upgrade was introduced in Windows Server 2016. It enables you to upgrade the operating system of cluster nodes in a scale-out file server (SOFS) cluster, or Hyper-V cluster, without stopping workloads running on the nodes. Read more about rolling upgrade requirements and architecture.
This article describes how to perform a cluster rolling upgrade of SOFS managed in the System Center - Virtual Machine Manager (VMM) fabric. Here's what the upgrade does:
- Creates a template: Creates a template of the node configuration by combining the appropriate physical computer profile with the node configuration settings detailed in the upgrade wizard.
- Migrates workloads: Migrates workloads off the node so workload operations aren't interrupted.
- Removes node: Puts the node into maintenance mode and then removes it from the cluster. This removes all VMM agents, virtual switch extensions, and so forth from the node.
- Provisions the node: Provisions the node running Windows Server 2016, and configures it according to the saved template.
- Returns the node to VMM: Brings the node back under VMM management and installs the VMM agent.
- Returns the node to the cluster: Adds the node back into the SOFS cluster, brings it out of maintenance mode, and returns virtual machine workloads to it.
Before you start
- The cluster must be managed by VMM.
- The cluster must be running Windows Server 2012 R2.
- The cluster must meet the requirements for bare metal deployment. The only exception is that the physical computer profile doesn't need to include network or disk configuration details. During the upgrade VMM records the node's network and disk configuration and uses that information instead of the computer profile.
- You can upgrade nodes that weren't originally provisioned using bare metal as long as those nodes meet bare metal requirements such as BMC. You'll need to provide this information in the upgrade wizard.
- The VMM library needs a virtual hard disk configured with Windows Server 2016.
Run the upgrade
- Click Fabric > Storage > File Servers. Right-click the SOFS > Upgrade Cluster.
- In the Upgrade Wizard > Nodes, click the nodes you want to upgrade or Select All. Then click Physical computer profile, and select the profile for the nodes.
- In BMC Configuration select the Run As account with permissions to access the BMC, or create a new account. In Out-of-band management protocol click the protocol that the BMCs use. To use DCMI click IPMI. DCMI is supported even though it's not listed. Make sure the correct port is listed.
- In Deployment Customization, review the nodes to upgrade. If the wizard couldn't figure out all of the settings it displays a Missing Settings alert for the node. For example, if the node wasn't provisioned by bare metal, BMC settings might not be complete. Fill in the missing information.
- Enter the BMC IP address if required. You can also change the node name. Don't clear Skip Active Directory check for this computer name unless you're changing the node name, and you want to make sure the new name is not in use.
- In the network adapter configuration you can specify the MAC address. Do this if you're configuring the management adapter for the cluster, and you want to configure it as a virtual network adapter. It's not the MAC address of the BMC. If you choose to specify static IP settings for the adapter, select a logical network and an IP subnet if applicable. If the subnet contains an address pool you can select Obtain an IP address corresponding to the selected subnet. Otherwise type an IP address within the logical network.
- In Summary, click Finish to begin the upgrade. If the wizard finishes, the node upgrades successfully so that all of the SOFS nodes are running Windows Server 2016. The wizard upgrades the cluster functional level to Windows Server 2016.
If you need to update the functional level of an SOFS that was upgraded outside VMM, you can do that by right-clicking File Servers > SOFS name > Update Version. This might be necessary if you upgraded the SOFS nodes before adding it to the VMM fabric, but the SOFS is still functioning as a Windows Server 2012 R2 cluster.
Walkability: Agent Based Models
Introduction
In this tutorial, we will introduce a walking demonstrator tool which will help you understand walkability and public transport using Ped-Catch. By setting simple parameters such as maximum walking time, walk speed and crossing wait time, it generates an animation showing how the intelligent agents walk around the road network (check out this link for more details).
Using the Demonstrator
This demonstrator is available here
The link above should bring you to the following page
The web page contains two main parts: a control panel and a map view. In the control panel, we can set up three parameters which control the agents’ navigation behaviour: maximum walking time, walk speed and crossing wait time. The simulation results will be rendered on the map view, which contains a layer controller. By default, it shows the road network layer, and we can turn on the destination layer (red dots).
To start a simulation, we need to pick an origin on the map. To do this, just click anywhere close to the road network. Tip: the origin point doesn't have to be placed on the road; the walkability algorithm will automatically create a path connecting the origin to the road network. But if the origin is too far away (say 300 meters), the algorithm will fail to do so and you will see an error message "Generated output is empty" above the map view.
Let’s choose Melton train station as our origin point:
Then click the green "Simulate" button to run the algorithm. The computing time largely depends on the "Maximum Walking Time": the longer the walking time, the longer the computing time required.
When computation is done, you will see a big red circle on the map. The disabled “Play” and “Pause” buttons are now active.
Click the “Play” button to see the walkability animation. You will see circles spreading over the road network in three colors: red (0<= walk time <4 minutes), orange (4 minutes <= walk time < 8 minutes) and yellow (8 minutes <= walk time). Here are some example outputs:
- maximum walking time = 20 min; walk speed = 1.33 m/s; crossing wait time = 30 sec
NOTICE: If multiple outputs are shown on the map, only the last animation can be replayed.
More Information
Understanding Walkability and Public Transport Using Ped-Catch
People
Co-led by Dr Hannah Badland, McCaughey Centre, VicHealth Centre for Promotion of Mental Health and Community Wellbeing, and Dr Marcus White, Faculty of Architecture, Building, and Planning.
Victorian Government Champions: Jim Betts, Secretary, Department of Transport; and Christine Kilmartin, Manager, Sustainability Analysis, Department of Planning and Community Development.
Research Champion: Professor Billie-Giles Corti, Director, McCaughey Centre, The University of Melbourne.
About
This project will develop a walkability index which will be calculated and applied to census collection districts surrounding public transport nodes in the North West Metropolitan Region. This agent-based modelling tool has the capacity to be not only a powerful urban design tool that builds on existing walkability measures, but also an influential advocacy tool. The purpose of the tool was to yield a more accurate understanding of how neighbourhood walkability is associated with access and permeability, and to develop an interactive on-line tool for researchers and planners to modify neighbourhood walkability to enhance access to features of interest. As such, this work will provide not only innovative tools to investigate how neighbourhood walkability is related to amenity access, but enables different planning scenarios to be tested prior to developing new or retrofitting older areas. It is anticipated that planners will apply these tools to diverse areas in Melbourne’s North West Metropolitan region and beyond, prior to building infrastructure or when seeking to modify existing sites.
A strength of this tool is its spatial data flexibility; that is, different users have the ability to upload different data sources at different scales. In order to do this, the tool has been developed with a spatial data hierarchy in mind. Fine-grained data are optimal, but inputs extend too coarser-scale (e.g., SA2-level) and open-access (e.g., Walk Score) data sources. In this way, a variety of different end users are able to utilise the tool either using their own data, or those supplied through the AURIN portal or other open-access sources. Currently, standard datasets used for the tool include the road network, features of interest (e.g., schools, public transport nodes), and traffic lights. Depending on end user access to other spatial data, additional spatial layers can also be incorporated into the tool. These include footpaths, traffic volume, and topography. Including such additional spatial data layers enhances the accuracy of the tool. There is also scope to add in more subjective features of the environment, such as shade, crime and incivilities, and aesthetics.
User-specified functionality
This was conceptualised as being a flexible tool that allows users to test different scenarios based on features of interest (e.g., public access nodes, school locations), street connectivity, and population of interest (e.g., vulnerable populations, such as children or older adults). In order to achieve this, a series of user-specified functionalities were designed into the interface.
These included sliding bars to manipulate the: maximum walking time (up to 20 minutes), maximum walking speed (up to 2 m.s-1), and intersection wait time (up to 60 seconds). Vector editing tools were also provided in the interface, allowing the user to: add or remove street networks to modify street connectivity; and manipulate the agents starting point to reflect potential features of interest (e.g., public transport egress, location of a school). These attributes are theoretically linked to walking behaviours. For example, walking speeds vary greatly with different ages and levels of mobility, and having the ability to alter pedestrian speeds was important for investigating potential for different nodes of interest (e.g., primary schools, senior citizen organisations). It was hypothesised such destinations would have smaller catchments than others due to the different expected walking speeds.
Vector editing functionality
As per limitations of earlier walkability tools and feedback from stakeholders, there was a recognised need to include vector-editing functionality. This would enable the user to modify the street network and test different scenarios prior to retrofitting an environment. This was regarded by the stakeholder working group as being a valuable extension to the tool, and was created by snapping vectors to the existing street networks. Multiple vectors can be added or removed within a given scenario. An example of this is shown in Figure 1, where the blue line in top image has been added by the user; the image below shows the agents travelling the new connection.
Broader model considerations
As well as providing user and vector editing functionalities, broader considerations for model development included having to: operate within an open-source environment, and within the AURIN portal requirements and architecture; be flexible enough to include a range and hierarchy of spatial data and scales provided by the end user; function in hardware with lower computational power; and provide an interface that was easy to navigate. In order to do this, the tool was developed using basic agents with a limited level of artificial intelligence. Agents left from a user specified node and travelled to a randomly distributed ‘cookie-crumb’ snapped to the street network within the parameters set by the user (e.g., walking speed, time, intersection wait time) (see Figure 2). Having a limited artificial intelligence ensures different ‘what if’ scenarios can be rapidly tested; the stakeholder group regarded this as being an important feature.
Outputs include: a visual, graded representation of agents throughout the network; area coverage comparison between the agent-based model and circular catchment expressed as a ratio; and mean data on the number of intersections crossed. All of these variables are recognised as being important attributes of walkability. As well as a map (e.g., Figure 2), these output data are generated as a .csv file after running each model, thereby allowing comparisons of walkability to be made across different hypothetical built environments. Together these outputs enable the user to understand how walkability is influenced by built environmental and behavioural modifications, or test specific environments to reflect the population of interest (e.g., children, older adults).
Licensing
Because of the open-source nature of the current data, theoretically the agent-based model can be used across most built environments internationally. An online simulation of the agent-based model can be found here:, with another example here:
The tool is designed to be overlaid with the walkability index, which is housed in the AURIN portal and is created by the Place, Health, and Liveability Program, University of Melbourne.
This project has been supported by the University of Melbourne Centre for Spatial Data Infrastructure and Land Administration (CSDILA), the Victorian Government Office of the Valuer General, the Australian National Data Service (ANDS) and the Australian Urban Research Infrastructure Network (AURIN) through the National Collaborative Research Infrastructure Strategy Program and the Education Investment Fund (EIF) Super Science Initiative. | https://docs.aurin.org.au/tutorials-and-use-cases/walkability-agent-based-models/ | 2019-05-19T15:01:08 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.aurin.org.au |
With cloud user provisioning you can manage Atlassian accounts and security policies in one place. You will save time and increase security.

An alternative approach is just-in-time provisioning, where accounts are created at login based on the user information sent from the identity provider. Although this approach makes sure that all users easily can log in, there are some disadvantages:
With cloud user provisioning, an auto-synchronized and virtual user directory is set up. This takes responsibility for keeping the Atlassian products updated with user accounts, groups and group memberships.

Azure, G Suite and Okta all offer their own REST APIs giving access to information about your users and groups.

Since Atlassian does not support these APIs natively, we have created a bridge API which exposes the cloud provider APIs as Atlassian Crowd APIs.

The Atlassian Crowd product itself is not used to make this work, so you do not need to have a license for Atlassian Crowd.

The Atlassian products communicate with Kantega SSO using the normal REST Crowd API.

Kantega SSO will take the responsibility of connecting to the cloud providers.

Kantega SSO provides customized instructions for connecting to Azure AD, Google GSuite or Okta:
Each cloud provider requires slightly different connection settings.
This requires your G Suite domain name, a JSON service key file and an admin account with API read permissions:

Once the connector is configured, we let you create a Crowd User Directory which will sync users and groups from the cloud provider.

Notice how we let you configure "Local Groups" permissions on the directory.

This allows users from Azure, G Suite or Okta to be added to local groups such as jira-software-users, confluence-users or bitbucket-users:

Once the Crowd User Directory has been synchronized, you can preview the users, groups and group memberships:

The setup wizard helps you prepare an API application in Azure portal and extract the values below.

These are the steps to follow:
Go to App registrations in Azure portal

Click the "New registration" button. Give your app a name and leave "Supported account types" unchanged.

Let Redirect URI type be "Web" and copy the value given in the wizard of Kantega Single Sign-on.

Click "Register". Copy the "Application (client) ID" value into the "Application Id" field in the form in Kantega Single Sign-on.

Click "Certificates & secrets" in the left menu.

Then click "New client secret".

If you like, add a Description, set Expires to "Never", and click "Add".

Copy the VALUE of the new secret and paste it into the "Password" field in the form in Kantega Single Sign-on.
Click the upper banner "Microsoft Graph".
Then select "Application permissions",
expand the Directory item and tick off Directory.Read.All,

expand the Group item (you may need to scroll) and tick off Group.Read.All,

and expand the User item and tick off User.Read.All.

Find "Azure Tenant Name" by searching the top of Azure portal for "tenant status". The "Tenant Status" page will give you the "Tenant Name".

Insert this value into the "Azure Tenant Name" field in the wizard form in Kantega Single Sign-on.

You are always welcome to reach out to our support team if you have any questions or would like a demo.
Use forwarders to get your data
Cluster peer nodes can get their data directly from any of the same sources as a non-clustered indexer. However, if data fidelity matters to you, you will use load-balancing forwarders to initially consume the data before forwarding it to the peer nodes, rather than ingesting the data directly into the nodes. The node that receives the data from the forwarder is called the receiver or the receiving node.
There are two key reasons for using forwarders, and particularly load-balancing forwarders, to handle a cluster's data inputs:

- To ensure end-to-end data fidelity. With indexer acknowledgment enabled, the forwarder resends any data that a receiving peer node does not acknowledge, so in-flight data does not get lost. Indexer acknowledgment is described later in this topic.
- To handle potential node failure. With forwarder load balancing, if one receiving node in the load-balanced group goes down, the forwarder continues to send its data to the remaining peers in the group. Without load balancing, the forwarder has no way to continue sending data if its receiving node goes down. See "How load balancing works" later in this topic.
Important: Before continuing, you must be familiar with forwarders and how to use them to get data into Splunk. For an introduction to forwarders, read "About forwarding and receiving" in the Forwarding Data manual. Subsequent topics in that manual describe all aspects of deploying and configuring forwarders.
To use forwarders to get data into clusters, you must perform two types of configuration:
1. Configure the connection from forwarder to peer node.
2. Configure the forwarder's data inputs.
Note: This topic assumes you're using universal forwarders, but the steps are basically the same for light or heavy forwarders.
Configure the connection from forwarder to peer node
There are three steps to setting up connections between forwarders and peer nodes:'re finished setting up the connection, you then need to configure the data inputs that control the data that streams into the forwarders (and onwards to the cluster). How to do this is the subject of a later section in this topic, .
Important: One of the ways you can specify the receiving port is by editing the peer's inputs.conf file in $SPLUNK_HOME/etc/system/local/. For many clusters, you can simplify peer input configuration by deploying a single, identical inputs.conf file across all the peers. In that case, the receiving port configured in that file applies to all the peers.
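For example, a minimal receiving-port stanza in the peer's inputs.conf might look like this (port 9997 is just a common convention, not a requirement):

[splunktcp://9997]
disabled = 0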
- You can specify the receiver during initial forwarder deployment (for Windows forwarders only), as described in "Deploy a Windows forwarder manually" in the Forwarding Data manual.
- You can specify the receiver with the CLI command
add forward-server, as described in "Deploy a *nix forwarder manually" in the Forwarding Data manual.
Both of these methods work by modifying the underlying
outputs.conf file. No matter what method you use to specify the receiving peers, you still need to directly edit the underlying
outputs.conf file to turn on indexer acknowledgment, as described in the next step.
3. Turn on indexer acknowledgment. To ensure end-to-end data fidelity, edit outputs.conf on each forwarder and set the useACK attribute to "true" for the target group of receiving peers:
[tcpout:<peer_target_group>]
useACK=true
For detailed information on configuring indexer acknowledgment, read "Protect against loss of in-flight data" in the Forwarding Data manual.
Important:'s a sample
outputs.conf configuration for a forwarder that.
Configure the forwarder's data inputs
Once you've specified the connection between the forwarder and the receiving peer(s), you must specify the data inputs to the..
Important: To ensure end-to-end data fidelity, you must explicitly enable indexer acknowledgment for each forwarder that's sending data to the cluster, as described earlier in this topic. If end-to-end data fidelity is not a requirement for your deployment, you can skip this step. some specified interval, such as every 30 seconds, the forwarder switches the data stream to another node in the group, selected at random. So,.
For more information on forwarder load balancing, read "Set up load balancing" in the Forwarding Data manual. For information on how load balancing works with indexer acknowledgment, read "Protect against loss of in-flight data" in the Forwarding Data manual.
Configure inputs directly on the peers
If you decide not to use forwarders to handle your data inputs, you can set up inputs on each peer in the usual fashion; for example, by editing
inputs.conf. For information on configuring inputs, read "Configure your inputs" in the Getting Data! | https://docs.splunk.com/Documentation/Splunk/6.0.15/Indexer/Useforwarderstogetyourdata | 2019-05-19T15:26:54 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
?"
Splunk offers lots of free apps and add-ons, with pre-configured inputs for things like Windows- or Linux-specific data sources, Cisco security data, Blue Coat data, and so on. Look in Splunkbase for an app or add-on that fits your needs. Splunk.! | https://docs.splunk.com/Documentation/Splunk/6.2.2/Data/WhatSplunkcanmonitor | 2019-05-19T15:39:49 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
A group is a container for other scene elements. More...
A group is a container for other scene elements.
Groups are typically used to structure the scene elements. Groups can be nested and thus can be used to create a hierarchy of scene elements. Typically, groups are used together with instances, either as group of instances or instance of a group. However, many different scene elements can be part of a group.
The root node of the scene graph is given by a top-level group (called root group).
The order of elements in a group does not matter.
Attaches a scene element to the group.
Adding an element that is already in the group is an undefined operation.
Only the following types of scene elements can be attached to groups:
NULLpointer).
Removes all elements in the array of grouped elements.
Detaches a scene element from the group.
Removing an element that is not in the group has no effect. The detached element is not changed or deleted.
NULLpointer).
Returns the name of the element
index.
NULLif
indexis out of bounds.
Returns the number of elements. | https://raytracing-docs.nvidia.com/iray/api_reference/iray/html/classmi_1_1neuraylib_1_1IGroup.html | 2019-05-19T14:20:25 | CC-MAIN-2019-22 | 1558232254889.43 | [] | raytracing-docs.nvidia.com |
Safari
Welcome to Acrolinx for Safari!
You can use Acrolinx for Safari Safari is particularly great if you create web content but don't yet have an Acrolinx product to use in your web CMS.
If you want to learn how to use Acrolinx to improve your content and how to configure and maintain Acrolinx for Safari, take a look through some of the following articles.
The Sidebar Card Guide is also worth having close to hand.
You can Check
You can check content Safari.
Release Notes
Take a look at our release notes to learn more about the development of Acrolinx for Safari. | https://docs.acrolinx.com/safari/latest/en | 2019-05-19T14:26:46 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.acrolinx.com |
The Cascading Style Sheet class to be applied to the weblet.
Default value
The name of the shipped class for the weblet.
Valid values
Any valid class name from the current Cascading Style Sheet, in single quotes. A list of available classes can be selected from by clicking the corresponding dropdown button in the property sheet. | https://docs.lansa.com/14/en/lansa087/content/lansa/wamengb2_1200.htm | 2019-05-19T15:05:49 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.lansa.com |
Register and use a custom workflow activity assembly
Applies To: Dynamics CRM 2013
After you compile your custom workflow activity to create an assembly, you have to register the assembly with Microsoft Dynamics CRM. Your custom activity will then appear in the process form of Microsoft Dynamics CRM Online or Microsoft Dynamics CRM 2013 depending on which deployment you registered the custom workflow activity with.
In This Topic
Enable or disable custom code
Register a custom workflow activity
Use a custom workflow activity in a process.
To enable="True"
set-crmsetting $setting
Verify the setting:
get-crmsetting customcodesettings
To disable=0
set-crmsetting $setting
Verify the setting:
get-crmsetting customcodesettings.
Note
You can find the Plug-in Registration tool’s executable file in the SDK\Tools\PluginRegistration folder of the SDK. Download the Microsoft Dynamics CRM SDK package. The tool can be added to the Microsoft Visual Studio Tools menu as an external tool to speed up the development process.
Use a custom workflow activity in a process
After you have registered your custom workflow activity assembly, you can use it in the process designer in Microsoft Dynamics CRM.
To use your custom workflow activity in a process:
Click or tap Settings > Processes.
Create or open an existing process.
In the process designer, click or tap Add Step. Your custom workflow activity name will appear in the drop-down list.
See Also
Custom workflow activities (workflow assemblies)
Debug a custom workflow activity
Plug-in isolation, trusts, and statistics
Register and Deploy Plug-Ins | https://docs.microsoft.com/en-us/previous-versions/dynamicscrm-2013/developers-guide/gg328153(v=crm.6) | 2019-05-19T14:44:22 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.microsoft.com |
Executing Commands at Boot¶
There are three primary options for executing custom commands at boot time: shellcmd, earlyshellcmd, and shell script.
The shellcmd package can manage the shellcmd and earlyshellcmd tags in the GUI, so config.xml values need not be edited by hand.
At boot time, the earlyshellcmd commands are executed first, shellcmd is executed later in the boot process, and the shell scripts are executed at the very end when packages are initialized.
shellcmd option¶
The hidden config.xml option <shellcmd> will run the command specified towards the end of the boot process.
To add a shellcmd to a configuration, either use the shellcmd package or edit the config by hand. To edit the config, back it up via Diagnostics > Backup/restore, and open the resulting XML file in a text editor (other than the stock Windows Notepad). Above the </system> line, add a line such as the following:
<shellcmd>mycommand -a -b -c 123</shellcmd>
Where
mycommand -a -b -c 123 is the command to run. Multiple lines may
be added to execute multiple commands. Save the changes and restore the
modified configuration.
earlyshellcmd option¶
The hidden config.xml option <earlyshellcmd> will run the command specified at the beginning of the boot process. Normally, <shellcmd> should be used instead, though this may be necessary in some circumstances. Similarly to <shellcmd>, to add a <earlyshellcmd> option, either use the shellcmd package or edit it in by hand. To edit it manually, backup the configuration, open it in a text editor, and add a line such as the following above </system>:
<earlyshellcmd>mycommand -a -b -c 123</earlyshellcmd>
Where
mycommand -a -b -c 123 is the command to run. Multiple
<earlyshellcmd> lines may be added to execute multiple commands. Save the
changes and restore the modified configuration.
Shell script option¶
Any shell script can be placed in the /usr/local/etc/rc.d/ directory.
The filename must end in .sh and it must be marked as executable
(
chmod +x myscript.sh). Every shell script ending in .sh in this
directory will be executed at boot time.
The first two options are preferable as they are retained in the config file and hence do not require additional modifications should the storage medium be replaced and reinstalled, or if the configuration is restored to a different piece of hardware. | https://docs.netgate.com/pfsense/en/latest/development/executing-commands-at-boot-time.html | 2019-05-19T15:06:29 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netgate.com |
2.3.3-p1 New Features and Changes¶
The 2.3.3-p1 errata release is a minor release after 2.3.3 and contains beneficial security and bug fixes.
Security / Errata¶
- Updated to FreeBSD 10.3-RELEASE-p17
- FreeBSD-SA-17:02.openssl (CVE-2016-7055, CVE-2017-3731, CVE-2017-3732)
- Upgraded cURL to 7.53.0 (CVE-2017-2629)
Bug Fixes¶
- Fixed issues with the upgrade check seeing the version of pfSense-upgrade instead of pfSense in some circumstances. #7343
- Fixed handling of domain-only (@ record) updates for CloudFlare Dynamic DNS #7357
- Fixed a problem with the Dynamic DNS Widget where RFC2136 entries showed an incorrect status #7290
- Fixed Dynamic DNS status widget formatting for medium with browser window #7301
- Fixed a problem with HTML tags showing in certificate description drop-down lists in the Certificate Manager #7296
- Fixed an error loading some older rules with ICMP types #7299
- Fixed display of selected ICMP types for old rules without an ipprotocol option set #7300
- Fixed Log widget filter interface selection with custom interface descriptions #7306
- Fixed the widget Filter All button so it does not affect all widgets #7317
- Fixed the password reset script so it resets the expiration date for the admin account when run, to avoid the user still being locked out #7354
- Fixed the password reset script so it properly handles the case when the admin account has been removed from config.xml #7354
- Fixed input validation of TCP State Timeout on firewall rules so it is not arbitrarily limited to a maximum of 3600 seconds #7356
- Fixed console settings for XG-1540/XG-1541 to use the correct default console #7358
- Fixed initial setup handling of VLAN interfaces when they were assigned at the console before running the Setup Wizard #7364
- Fixed display of OpenSSL and input errors when working in the Certificate Manager #7370
- Fixed Captive Portal “disconnect all” button
- Fixed pkg handling timeouts #6594
- Updated blog URL in the RSS widget
- Removed whirlpool from the list of CA/certificate digest algorithms since it does not work #7370 | https://docs.netgate.com/pfsense/en/latest/releases/2-3-3-p1-new-features-and-changes.html | 2019-05-19T15:00:27 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.netgate.com |
Enable the peer nodes
Before reading this topic, read .
Note: The procedure in this topic explains how to use Manager to enable a peer node. You can also enable a peer in two other ways:
- Directly edit the peer's
server.conffile. See "Configure the cluster with server.conf" for details.
- Use the CLI
edit cluster-configcommand. See "Configure the cluster with the CLI" for details.
Enable the peer
To enable an indexer as a peer node:
1. Click Manager in Splunk Web.
2. In the Distributed Environment group, click Clustering.
3. Select Enable clustering.
4. Select Make this instance a cluster peer.
5. There are a few fields to fill out:
- What is the location of the cluster master? Enter the IP address or domain name for the master, along with its management port. For example:
- Secret key. This is the key that authenticates communication between the master and the peers and search heads. The key must be the same across all cluster instances. If the master has a secret key, you must enter it here.
- Replication port. This is the port on which the peer receives replicated data streamed from the other peers. You can specify any available port for this purpose..
8. Repeat this process for all the cluster's peer nodes.
Once you've enabled the replication factor number of peers, the cluster can start indexing and replicating data, as described in "Enable the master node".
View the peer dashboard
After the restart, log back into the peer node and return to the Clustering page in Manager. This time, you see the peer's clustering dashboard. For information on the dashboard, see "View the peer dashboard".
Configure the peers
After enabling the peers, you need to configure them further before you start indexing data. For details, read these topics:
There are also some advanced configuration options available, as described in "Configure the peer nodes".! | https://docs.splunk.com/Documentation/Splunk/5.0.3/Indexer/Enablethepeernodes | 2019-05-19T15:24:08 | CC-MAIN-2019-22 | 1558232254889.43 | [array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)] | docs.splunk.com |
Syntax: choose from drop down list
This controls the style used for displaying individual results to user
queries. There are various styles from which to choose. The arrangement and
amount of information varies in every style. In the administrative
interface you may click the question mark (
?) next to
Results Style to see a sample of each of the available
styles. | https://docs.thunderstone.com/site/webinatorman/results_style.html | 2019-05-19T14:38:52 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.thunderstone.com |
Why aren't callbacks working?
Posted in General by Varun Varada Sat Sep 09 2017 00:10:14 GMT+0000 (UTC)·Viewed 197 times
I set up a callback with a username, and no matter how many times I yo'd the username, the callback isn't being called. I verified that the endpoint itself works by calling it directly. | http://docs.justyo.co/v2.0/discuss/59b331668ccd86003ac810f6 | 2019-05-19T14:59:31 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.justyo.co |
Handling Errors
Understand how Square APIs work with money.
On This Page
Connect v2 Errors
All Connect v2 endpoints include an
errors array in their response body when the request fails. For example:
{ "errors": [ { "category": "AUTHENTICATION_ERROR", "code": "UNAUTHORIZED", "detail": "This request could not be authorized." } ] }
See the Connect v2 Error type for more information.
Connect v1
Connect v2 these)
Note:. | https://docs.connect.squareup.com/basics/handling-errors | 2019-05-19T14:56:00 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.connect.squareup.com |
Each application must have proper security in order to have its data secured. This How-To will teach you how to turn the security on and configure it. You will start with the configuration of user and module roles and their access to page and microflows. Therefore we’ll turn on the prototype/demo security. Next you will deal with production security.
1. Prepare.
Add the following customer data to your app:
Add the following order data to your app:
2. Setting the security level to Prototype/Demo
In.
- Open the project security.
- The following properties editor will open.
- Switch the security level to Prototype/Demo.
- Go to the Administrator tab.
- Set the master Administrator password.
2.1 Creating module roles within a module
You have added one or more modules while the security was turned off. So currently, there’s is no security configured for those modules. Now that security is turned on, you have to configure it from scratch. Access to a module is managed using module roles. You will now add these.
- Open the module security of the MyFirstModule module.
- The following properties editor will open.
- Click New and create the Administrator module role.
- Add the User module role.
2.2 Connecting the User Roles to Module Roles
The two module roles that have been created should be assigned to a user role. When an end user has a specific user role, the end user has access to the data/forms and microflows depending on the assigned module roles of that user role.
- Open the project security.
- Go to the User roles tab.
- Double click the Administrator User Role.
- The properties editor will open.
- Click Edit to open the module role configuration.
- Select the Administrator module role for all the modules.
- Repeat these steps for the User User Role.
2.3 Configuring page- and microflow access of a module
Next you’re ready to configure the page- and microflow access of a module.
- Open the module security of the MyFirstModule module.
- Open the Page Access tab.
- Check the pages according to the example as shown in the picture below:
- Go to the Microflow Access tab.
- Check the microflows according to the example as shown in the picture below:
- Deploy the application.
- Create new users with different roles.
- Log in with these users.
- Test the differences in your application.
3. Setting the security level to Production
In this part of the How-To you will configure the security at production level. At this level, all security settings must be configured. In addition to prototype/demo security, you have to configure the entity (data) access. Production security is mandatory when deploying to the Mendix cloud.
- Open the project security.
- Switch the security level to Production.
3.1 Configuring form Entity Access
- Open the module security of the MyFirstModule module.
- Open the Entity Access tab.
Click New to create access rules for the Role Administrator module.
Make sure to allow a administrator to read/write all. And to allow a user less. This to clearly see the difference.
3.2 Creating access rules for the Administrator module role
You will start creating access rules for the Administrator module role. Since this role represents an administrator, let’s assume he is allowed to create, delete, read and write everything. So you can create the rules in a quick batch.
- Check-in all entities.
- Click OK.
Setting up the rule configuration.
- Module role: Administrator
- Allow creating new objects: Tick (Yes)
- Allow deleting existing objects: Tick (Yes)
- Member read and write rights: ‘Read, Write’
Click OK.
A separate access rule will be created for all entities when the module role is set to ‘Administrator’. It is possible to adjust each rule individually at a later moment.
3.3 Creating access rules for the module role ‘User’
Next you have to create access rules for the User module role. Since this role represents a user with limited access, let’s assume he/she is only allowed to read most data and is allowed to write some of the ‘Order’ data. This means you have to configure all access rules individually.
- Click new to create a new access rule for the User module role.
- Select the Customer entity.
- Click OK.
Setting up the correct rule configuration:
- Module role: User
- Allow creating new objects: Untick (No)
- Allow deleting existing objects: Untick (No)
- Default rights for new members: None
- Member access rights: Read
Adjust the rule for the Order.
Setting up the correct rule configuration:
- Module role: User
- Allow creating new objects: Tick (Yes)
- Allow deleting existing objects: Untick (No)
- Default rights for new members: Read, Write
- Member access rights: Read, Write
Deploy the application.
Log in with the different users and test the differences in your application.
4. Define the access rules on the order entity using XPath
In the previous section you have set some access rules to your domain model. In this section you you will define the access rules on the ‘Order’ entity in a way that orders can only be viewed by a user if the payment status of the order is set to ‘Open’. You will do this by adding an XPath constraint to the ‘Order’ entity for the module role ‘User’. The XPath constraint can be used to constrain the set of objects to which the access rule applies. If the XPath constraint is empty, the rule applies to all objects of the entity.
4.1 Add an account with the User user role
- Click the Accounts section at the Administration menu item.
- Click New user
- Add an account with the user role User.
- Click Save.
4.2 Set the entity access to OrderStatus ‘Open’.
- Double click the Order entity.
- Open the Access rules tab.
- Open the User module role.
- Select the XPath constraint tab.
- To constrain the access of the financial administrator to only the Open orders you add the following XPath statement:
- Click OK. The properties editor of your Order entity should look like this:
- Re-deploy your application.
- If you log in with the Test User account you will see that only the Open orders are shown in the orders overview: | https://docs.mendix.com/howto50/creating-a-secure-app | 2019-05-19T14:51:00 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.mendix.com |
Virus detection in SharePoint Online
Office 365 can help protect your environment from malware by detecting viruses in files that users upload to SharePoint Online. Files are scanned for viruses after they are uploaded. If a file is found to be infected, a property is set so that users can't download or sync the file. Security best practices for Office 365.
What happens when an infected file is uploaded to SharePoint Online?
Office 365 uses a common virus detection engine. The engine runs asynchronously within SharePoint Online, and scans files after they're uploaded. When a file is found to contain a virus, it's flagged so that it can't be downloaded again. In April 2018, we removed the 25 MB limit for scanned files.. The user is given the option to download the file and attempt to clean it using their own virus software.
Note
You can use the Set-SPOTenant cmdlet with the DisallowInfectedFileDownload parameter to not allow users to download a detected file, even in the anti-virus warning window. See [DisallowInfectedFileDownload] ().
What happens when the OneDrive sync client tries to sync an infected file?
Whether users sync files with the new OneDrive sync client (OneDrive.exe) or the previous OneDrive for Business sync client (Groove.exe), if a file contains a virus, the sync client won't download it. The sync client will display a notification that the file can't be synced.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/office365/securitycompliance/virus-detection-in-spo?redirectSourcePath=%252fsl-si%252farticle%252fZaznavanje-virusov-v-storitvi-SharePoint-Online-E3C6DF61-8513-499D-AD8E-8A91770BFF63 | 2019-05-19T14:59:42 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.microsoft.com |
rxFastForest: Fast Forest
Description
Machine Learning Fast Forest
Usage
rxFastForest(formula = NULL, data, type = c("binary", "regression"), numTrees = 100, numLeaves = 20, minSplit = 10, exampleFraction = 0.7, featureFraction = 0.7, splitFraction = 0.7, numBins = 255, firstUsePenalty = 0, gainConfLevel = 0, trainThreads = 8, randomSeed = NULL, = Fast Tree Binary Classification or
"regression"for Fast Tree Regression.
numTrees
Specifies the total number of decision trees to create in the ensemble. By creating more decision trees, you can potentially get better coverage, but the training time increases. The default value is 100.
numLeaves
The maximum number of leaves (terminal nodes) that can be created in any tree. Higher values potentially increase the size of the tree and get better precision, but risk overfitting and requiring longer training times. The default value is 20.
minSplit
Minimum number of training instances required to form a leaf. That is, the minimal number of documents allowed in a leaf of a regression tree, out of the sub-sampled data. A 'split' means that features in each level of the tree (node) are randomly divided. The default value is 10.
exampleFraction
The fraction of randomly chosen instances to use for each tree. The default value is 0.7.
featureFraction
The fraction of randomly chosen features to use for each tree. The default value is 0.7.
splitFraction
The fraction of randomly chosen features to use on each split. The default value is 0.7.
numBins
Maximum number of distinct values (bins) per feature. The default value is 255.
firstUsePenalty
The feature first use penalty coefficient. The default value is 0.
gainConfLevel
Tree fitting gain confidence requirement (should be in the range [0,1)). The default value is 0.
trainThreads
The number of threads to use in training. If
NULLis specified, the number of threads to use is determined internally. The default value is
NULL.
randomSeed
Specifies the random seed. The default value is
NULL. example,
Decision trees are non-parametric models that perform a sequence
of simple tests on inputs. This decision procedure maps them to outputs found in the training dataset whose inputs were similar to the instance being processed. A decision is made at each node of the binary tree data structure based on a measure of similarity that maps each instance recursively through the branches of the tree until the appropriate leaf node is reached and the output decision returned.
Decision trees have several advantages:
* They are efficient in both computation and memory usage during training and prediction.
* They can represent non-linear decision boundaries.
* They perform integrated feature selection and classification.
* They are resilient in the presence of noisy features.
Fast forest regression is a random forest and quantile regression forest implementation using the regression tree learner in rxFastTrees. The model consists of an ensemble of decision trees. Each tree in a decision forest outputs a Gaussian distribution by way of prediction. An aggregation is performed over the ensemble of trees to find a Gaussian distribution closest to the combined distribution for all trees in the model.
This decision forest classifier consists of an ensemble of decision trees. Generally, ensemble models provide better coverage and accuracy than single decision trees. Each tree in a decision forest outputs a Gaussian distribution by way of prediction. An aggregation is performed over the ensemble of trees to find a Gaussian distribution closest to the combined distribution for all trees in the model.
Value
rxFastForest: A
rxFastForestobject with the trained model.
FastForest: A learner specification object of class
mamlfor the Fast Forest trainer.
Note
This algorithm is multi-threaded and will always attempt to load the entire dataset into memory.
Author(s)
Microsoft Corporation
Microsoft Technical Support
References
Quantile regression forest
From Stumps to Trees to Forests
See Also
rxFastTrees, rxFastLinear, rxLogisticRegression, rxNeuralNet, rxOneClassSvm, featurizeText, categorical, categoricalHash, rxPredict.mlModel.
Examples
# Estimate a binary classification forest infert1 <- infert infert1$isCase = (infert1$case == 1) forestModel <- rxFastForest(formula = isCase ~ age + parity + education + spontaneous + induced, data = infert1) # Create text file with per-instance results using rxPredict txtOutFile <- tempfile(pattern = "scoreOut", fileext = ".txt") txtOutDS <- RxTextData(file = txtOutFile) scoreDS <- rxPredict(forestModel, data = infert1, extraVarsToWrite = c("isCase", "Score"), outData = txtOutDS) # Print the fist ten rows rxDataStep(scoreDS, numRows = 10) # Clean-up file.remove(txtOutFile) ###################################################################### # Estimate a regression fast forest # Use the built-in data set 'airquality' to create test and train data DF <- airquality[!is.na(airquality$Ozone), ] DF$Ozone <- as.numeric(DF$Ozone) randomSplit <- rnorm(nrow(DF)) trainAir <- DF[randomSplit >= 0,] testAir <- DF[randomSplit < 0,] airFormula <- Ozone ~ Solar.R + Wind + Temp # Regression Fast Forest for train data rxFastForestReg <- rxFastForest(airFormula, type = "regression", data = trainAir) # Put score and model variables in data frame rxFastForestScoreDF <- rxPredict(rxFastForestReg, data = testAir, writeModelVars = TRUE) # Plot actual versus predicted values with smoothed line rxLinePlot(Score ~ Ozone, type = c("p", "smooth"), data = rxFastForestScoreDF) | https://docs.microsoft.com/ro-ro/machine-learning-server/r-reference/microsoftml/rxfastforest | 2019-05-19T14:41:50 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.microsoft.com |
There are a number of updates you can make to your gateway extension’s code to support these changes and avoid deprecated notices on your customer’s sites.
If you have not already, you should read the What’s New in Subscriptions v2.0 document and Overview of Subscriptions v2.0 Architectural Changes before continuing with this document.
Items to Update ↑ Back to top
To be fully compatible with Subscriptions v2.0, you need to:
- update use of deprecated hooks
- update the location of your meta data
- add support for multiple subscriptions
- add support for admin payment method changes
- update customer facing change payment method support (if you already support it)
- use renewal orders for processing renewal payments
- make sure relevant meta data is copied from the original order to the subscription for existing orders/subscriptions on upgrade. All post meta on the original order will be copied by default by the Subscriptions upgrader, so you only need to copy meta data stored elsewhere or exclude post meta data you do not want copied, if any
Example Code: Simplify Commerce ↑ Back to top
For example code of the patches required to make a token based payment gateway fully compatible with Subscriptions v2.0, see the Simplify Commerce Subscriptions v2.0 Pull Request for WooCommerce Core.
This pull request includes annotated commits for all of the patches required by a token based payment gateway to be fully compatible with Subscriptions v2.0.
Store Meta Data on the Subscription not the Original Order ↑ Back to top
As discussed in the overview of Subscriptions v2.0’s Architectural Changes, a subscription’s meta data is no longer stored against the order created to record the purchase of that subscription. It is now stored on a separate
'shop_subscription' post type.
As a result, if your gateway stores payment related meta data, like credit card or customer tokens, it should store this against the subscription or subscriptions created during checkout and not the original order (as was previously the case).
This is also essential for adding support for the new Admin Payment Method Changes feature.
As Subscriptions will copy all post meta data stored against the
'shop_subscription' post to renewal orders (i.e.
'shop_order' posts), your gateway’s meta data will be copied to renewal orders and you can then access it on the renewal order to process the renewal payments via the
'woocommerce_scheduled_subscription_payment_{$gateway_id}' hook.
A note on upgrading: as mentioned above, all post meta on the original order will be copied by default by the Subscriptions upgrader. So migrating your code to use post meta on the subscription will work for both old and new subscriptions.
Update Deprecated Hooks ↑ Back to top
A number of hooks have been deprecated in Subscriptions v2.0, mainly due to the architectural changes.
Specifically, the following hooks have been changed in order to change the parameters passed to callbacks:
'scheduled_subscription_payment_{$gateway_id}has been replaced with
'woocommerce_scheduled_subscription_payment_{$gateway_id}'
'woocommerce_my_subscriptions_recurring_payment_method'has been replaced with
'woocommerce_my_subscriptions_payment_method'
'woocommerce_subscriptions_renewal_order_meta_query'has been replaced with
'wcs_resubscribe_order_created'in the case of what were previously called “parent” renewal orders and
'wcs_renewal_order_created'for orders that were previously called “child” renewal orders; and
'woocommerce_subscriptions_changed_failing_payment_method_{$gateway_id}'has been replaced with
'woocommerce_subscription_failing_payment_method_updated_{$gateway_id}'
You should update your use of these deprecated hooks to avoid notices and also take advantage of the extra features provided by receiving a
WC_Subscription as a parameter.
Hook for Recurring Payment Method Display ↑ Back to top
By default, Subscriptions displays your payment gateway extension’s name as the payment method for a subscription; however, Subscriptions also provides an API for displaying a custom label. This can be used to provide a gateway specific and more descriptive label on the payment method. For example, the Stripe extension uses it to display the last 4 digits of the credit card used for recurring payments instead of simply Credit Card.
In Subscriptions v1.5, the filter for displaying a custom label was:
'woocommerce_my_subscriptions_recurring_payment_method'.
In Subscriptions v2.0, this has been changed to
'woocommerce_my_subscriptions_payment_method'.
The new filter passes the string displayed to describe the payment method (the same as the old filter), and then an instance of a
WC_Subscription for which the string relates. This can be used to access meta data specific to that subscription.
Support for Multiple Subscriptions ↑ Back to top
Most modern payment gateways should be able to support Subscriptions v2.0’s new multiple subscriptions feature.
Specifically, if your payment gateway uses customer or credit card tokens for processing renewals, you will be able to support multiple subscriptions. Furthermore, if your payment gateway stores the meta data it uses to process automatic payments in post meta and/or user meta, adding support will be as simple as adding a
'multiple_subscriptions' value to your extension’s
$supports property.
To learn more about how multiple subscriptions is implemented in Subscriptions v2.0 and to understand whether your payment gateway will be able to support it, refer to the overview of multiple subscriptions.
Support Admin Payment Method Changes ↑ Back to top
Subscriptions v2.0 introduces a new feature to allow store managers to change the payment method used on a subscription. To learn how to add support for this feature in your extension, refer to the Admin Change Payment Method Integration Guide.
Update Customer Facing Change Payment Method Support ↑ Back to top
Subscriptions v1.5 provided a customer facing process for changing the payment method.
Because it is now possible to store a subscription’s payment method meta data on the subscription instead of the original order, this process has been updated for v2.0.
To update your process, you need to:
- declare support for the new feature, which is
'subscription_payment_method_change_customer': a new feature name has been chosen to both require an update and to differentiate between the new admin payment method change feature and the customer facing feature.
- update your payment method’s meta data using the new
'woocommerce_subscription_failing_payment_method_updated_{$gateway_id}'hook, which passes your gateway the
WC_Subscriptionobject, instead of the old
'woocommerce_subscriptions_changed_failing_payment_method_{$gateway_id}'hook, which passed the
WC_Orderof the original purchase.
The code required to update Simplify Commerce can be seen in this commit to WooCommerce core on GitHub.
Use Renewal Order for Processing Scheduled Payment ↑ Back to top
As discussed in the overview of renewal order creation changes, Subscriptions v2.0 now creates the renewal order for recurring payment before triggering the scheduled subscription payment hook.
This makes it possible to use the same code for processing an initial payment as for processing a recurring payment (as the renewal order contains all essential information about the payment). It also makes it possible to record the transaction ID of the recurring payment on the renewal order by calling
$renewal_order->payment_complete( $transaction_id ).
To use the renewal order for processing scheduled payments, update the code attached to the deprecated
'scheduled_subscription_payment_{$gateway_id}' action to use the new hook
'woocommerce_scheduled_subscription_renewal_{$gateway_id}' instead.
This hook will pass callbacks:
- the amount to charge (the same as the first parameter in Subscriptions v1.5)
- an instance of a
WC_Order, which represents the pending renewal order for this payment
You can also process the renewal payment without having to worry about the associated subscription by calling
$order->payment_complete( $transaction_id ) where
$order is the renewal order, just as you would with a normal payment.
A Note on Tokens and Renewal Order Meta Data ↑ Back to top
You may be wondering why the
'woocommerce_scheduled_subscription_payment_{$gateway_id}' only passes callbacks the amount and a single order as a parameter, but not the
WC_Subscription for which the order relates. This is to lay the foundation for batch processing renewal payments for multiple subscriptions in a single order. More generally, it also helps provide APIs that only require an understanding of WooCommerce primatives, like
WC_Order, instead of Subscriptions specific objects, like
WC_Subscription.
When a renewal order is generated, via
wcs_create_renewal_order(), Subscriptions copies all post meta data rows stored against the
'shop_subscription' object to the new renewal order object. It is possible for extensions to opt-out their data from this process using the
'wcs_renewal_order_meta' or
'wcs_renewal_order_meta_query' hook, but it is recommended that you let your payment gateway’s token data be copied to renewal orders. This makes it possible to use only the parameters passed along with
'woocommerce_scheduled_subscription_payment_{$gateway_id}', instead of also having to use
wcs_get_subscription_for_renewal_order() to access the data on the subscription.
Handling $0 Renewals ↑ Back to top
You may also (optionally) remove any code your gateway has to process $0 renewal orders/payments. Subscriptions v1.5 still called the payment gateway hook if a $0 amount was due; however, in v2.0, it no longer involves the payment gateway, and instead generates the order and processes it internally.
Test Cases ↑ Back to top
After you have updated your integration, you can confirm everything is working by running through all of the following test cases using your payment gateway as the payment method.
Tests for Subscriptions v1.5 and v2.0 ↑ Back to top
- checkout with only a simple subscription product in the cart
- checkout with only a simple subscription product with a sign-up fee in the cart
- checkout with only a simple subscription product with a free trial in the cart
- checkout with only a simple subscription product with a sign-up fee and free trial in the cart
- checkout with only a simple subscription product synchronized to a day in the future in the cart
- checkout with only a simple subscription product with a sign-up fee and synchronized to a day in the future in the cart
- checkout with a simple product and simple subscription product in the cart
- trigger an automatic renewal for a subscription with valid payment gateway meta data to test automatic payment success
- trigger an automatic renewal for a subscription with invalid payment gateway meta data to test automatic payment failure
- complete the manual payment process for the failed renewal and confirm that payment gateway meta data is updated correctly. Then trigger an automatic renewal after changing the payment method to ensure future renewals use the new payment gateway meta data
- test the Change Payment Method process on a subscription and confirm that payment gateway meta data is updated correctly. Then trigger an automatic renewal to ensure future renewals use the new payment gateway meta data.
Test for Subscriptions v2.0 Specific Functionality ↑ Back to top
- checkout with multiple simple subscription products with different billing periods in the cart (e.g. one subscription product renewing weekly and another renewing monthly)
- trigger an automatic renewal with working payment gateway meta data for a subscription with multiple simple subscription product line items
- checkout with a simple product and multiple simple subscription products in the cart
- (optional) if you added support for administrator payment method changes, change payment method on a subscription from the admin then trigger an automatic renewal to ensure future renewals use the new payment gateway meta data. | https://docs.woocommerce.com/document/subscriptions/develop/payment-gateway-integration/upgrade-guide-for-subscriptions-v2-0/ | 2019-05-19T15:01:24 | CC-MAIN-2019-22 | 1558232254889.43 | [] | docs.woocommerce.com |
New in version 2.2.
# ensure igmp snooping params supported in this module are in there default state - nxos_igmp_snooping: state: default host: inventory_hostname }} username: un }} password: pwd }} # ensure following igmp snooping params are in the desired state - nxos_igmp_snooping: group_timeout: never snooping: true link_local_grp_supp: false optimize_mcast_flood: false report_supp: true v3_report_supp: true host: "{{ inventory_hostname }}" username: "{{ un }}" password: "{{ pwd }}"
Common return values are documented here Return Values, the following are the fields unique to this module:
Note
state=default, params will be reset to a default state.
group_timeoutalso accepts never as an input.. | http://docs.ansible.com/ansible/nxos_igmp_snooping_module.html | 2017-03-23T08:18:20 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.ansible.com |
Wattics energy management solution for retail stores
Step-by-step procedures for monitoring and benchmarking of HVAC, lighting and power outlets within/across stores.
As this stage the PO has been received and works can be conducted. You must now aim at a quick meter installation with no surprises or delay.
This fourth post provides the checklist of what must happen before the meter(s) are installed.
1 – Gathering information for your project proposal
2 – Preparing your project proposal
3 – Preparing your quote
4 – Preparing your installation
5 – How to install your metering equipment
1. Get the MCB(s) installed for your meter(s) supply
Most 3-phase electrical meters must be supplied_0<<
Install breakers to supply your meters
It is important that the contracted electrician get the MCB wired outside hours before the installation date, as its wiring may require the distribution board to be shut down for a few minutes. If the distribution board cannot be shut down at installation date then the installation may need to be postponed.
2. Get your Internet points deployed and tested
Unless GPRS/3G routers are used, network points (cat5, cat5e, cat6) must be provided by the contracted IT company at the location where meters will be deployed (see 3 – Preparing your quote). These network points must provide Internet access when plugged to the meters).
Provide Internet access at the meter location
Very often network points end up being deployed that does not go as planned.
We therefore recommend that the network points are deployed prior to the installation to avoid any delay on installation date, and to notify any specific network configuration requirements (static IP address, MAC address registration, ports to be opened for outward data communication, etc) that may require meter network configuration at installation date.
You should now have your meter(s)’ MCBs pre-wired, your Internet points deployed and tested, and know the way your meter(s) will need to be configured for Internet access.
You can now move to
5 – How to install your metering equipment to review all the steps that must be taken for installation of the metering equipment.
>>IMAGE | http://docs.wattics.com/2016/03/21/4-preparing-your-installation-in-retail-stores/ | 2017-03-23T08:11:01 | CC-MAIN-2017-13 | 1490218186841.66 | [array(['/wp-content/uploads/2016/03/mcbref.png', None], dtype=object)
array(['/wp-content/uploads/2016/03/lanport.png', None], dtype=object)
array(['/wp-content/uploads/2016/02/sitesurbeng.jpg', None], dtype=object)] | docs.wattics.com |
Use Google Fonts in CSS
Use Google Fonts in your themes and plugins CSS.
Just type your CSS as usual.
Make sure to add the font exactly the same way it is written under "Manage Fonts".
For example:
If you added the font Annie Use Your Telescope in your TK Google Fonts Admin Settings and want to use it in your css:
It should look like this:
h2 { font-family: 'Annie Use Your Telescope', arial; }
Additional Notes:
** yes, with the spaces in the font name! ;-)
** add one (or more) fallback font(s). choose something like arial, helvetica, times, times new roman, serif, sans-serif, ... the last one should be 100% available, just to make sure. | http://docs.themekraft.com/article/337-use-google-fonts-in-css | 2017-03-23T08:13:12 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.themekraft.com |
Upgrading to AppDynamics APM v1.1.2
This topic describes how to upgrade to v1.1.2 of the the AppDynamics APM tile for Pivotal Cloud Foundry (PCF).
v1.1.2
Release Date: September 8, 2016
Features included in this release:
- Fixed bug in destroy-analytics-agent errand which prevented clean deletion of v1.1.1 of tile
- Updated the stem cell version to v3146.20
- UI changes to the tile.
Upgrade AppDynamics APM
In PCF Operations Manager, click Import a Product to import the new tile binary.
Click Upgrade on the left panel for AppDynamics.
Follow the procedures in the topics below:
Update to stemcell v3146.20 if prompted.
Click Apply Changes to install the updates. | https://docs.pivotal.io/partners/appdynamics/upgrade_to_1_1_2.html | 2017-03-23T08:16:09 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.pivotal.io |
The following are existing glossary resources. Please first check that you are indeed allowed to use them.
For Free and Open Source Software, the single most useful resource is probably at which allows you to view translations from many of the major FOSS applications. Support for Open-Tran.eu is integrated into some translation tools, such as Virtaal.
The FUEL project tries to coordinate terminology development amongst Indic languages. | http://docs.translatehouse.org/projects/localization-guide/en/latest/guide/existing_glossaries.html?id=guide/existing_glossaries | 2017-03-23T08:15:42 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.translatehouse.org |
UploadOperation UploadOperation UploadOperation UploadOperation Class
Performs an asynchronous upload operation. For an overview of Background Transfer capabilities, see Transferring data in the background. Download the Background Transfer sample for examples in JavaScript, C#, and C++.
Syntax
Declaration
public sealed class UploadOperation
public sealed class UploadOperation
Public NotInheritable Class UploadOperation
public sealed class UploadOperation); } }
Properties summary
Methods summary
Properties
- CostPolicyCostPolicyCostPolicyCostPolicy
Gets and sets the cost policy for the upload.
public BackgroundTransferCostPolicy CostPolicy { get; set; }
public BackgroundTransferCostPolicy CostPolicy { get; set; }
Public ReadWrite Property CostPolicy As BackgroundTransferCostPolicy
public BackgroundTransferCostPolicy CostPolicy { get; set; }
Property Value
Specifies whether the transfer can happen over costed networks.
- GroupGroupGroupGroup
Note
Group may be altered or unavailable for releases after Windows 8.1. Instead, use TransferGroup.
Gets a string value indicating the group the upload belongs to.
public string Group { get; }
public string Group { get; }
Public ReadOnly Property Group As string
public string Group { get; }
Property Value
- stringstringstringstring
The group name.
- GuidGuidGuidGuid
This is a unique identifier for a specific upload operation. A GUID associated to a upload operation will not change for the duration of the upload.
public Guid Guid { get; }
public Guid Guid { get; }
Public ReadOnly Property Guid As Guid
public Guid Guid { get; }
Property Value
- System.GuidSystem.GuidSystem.GuidSystem.Guid
The unique ID for this upload operation.
- MethodMethodMethodMethod
Gets the method to use for the upload.
public string Method { get; }
public string Method { get; }
Public ReadOnly Property Method As string
public string Method { get; }
Property Value
- stringstringstringstring
The method to use for the upload. This value can be GET, PUT, POST, RETR, STOR, or any custom value supported by the server.
- PriorityPriorityPriorityPriority
Gets or sets the transfer priority of this upload operation when within a BackgroundTransferGroup. Possible values are defined by BackgroundTransferPriority.
public BackgroundTransferPriority Priority { get; set; }
public BackgroundTransferPriority Priority { get; set; }
Public ReadWrite Property Priority As BackgroundTransferPriority
public BackgroundTransferPriority Priority { get; set; }
Property Value
The operation priority.
- ProgressProgressProgressProgress
Gets the current progress of the upload operation.
public BackgroundUploadProgress Progress { get; }
public BackgroundUploadProgress Progress { get; }
Public ReadOnly Property Progress As BackgroundUploadProgress
public BackgroundUploadProgress Progress { get; }
Property Value.
- RequestedUriRequestedUriRequestedUriRequestedUri
Gets the URI to upload from.
public Uri RequestedUri { get; }
public Uri RequestedUri { get; }
Public ReadOnly Property RequestedUri As Uri
public Uri RequestedUri { get; }
Property Value
The URI to upload from.
- SourceFileSourceFileSourceFileSourceFile
Specifies the IStorageFile to upload.
public IStorageFile SourceFile { get; }
public IStorageFile SourceFile { get; }
Public ReadOnly Property SourceFile As IStorageFile
public IStorageFile SourceFile { get; }
Property Value
The file item to upload.
- TransferGroupTransferGroupTransferGroupTransferGroup
Gets the group that this upload operation belongs to.
public BackgroundTransferGroup TransferGroup { get; }
public BackgroundTransferGroup TransferGroup { get; }
Public ReadOnly Property TransferGroup As BackgroundTransferGroup
public BackgroundTransferGroup TransferGroup { get; }
Property Value
The transfer group.
Methods
- AttachAsync()AttachAsync()AttachAsync()AttachAsync(), UploadOperation )
public IAsyncOperationWithProgress<UploadOperation, UploadOperation> AttachAsync()
Returns
Upload operation with callback.
Remarks
While this method can be called from multiple app instances, developers should not attach callbacks from the primary app instance in a background task. This will cause BackgroundTransferHost.exe to hang.
Examples
function AttachUpload (loadedUpload) { try { upload = loadedUpload; promise = upload.attachAsync().then(complete, error, progress); } catch (err) { displayError(err); } };
- GetResponseInformation()GetResponseInformation()GetResponseInformation()GetResponseInformation()
Gets the response information.
public ResponseInformation GetResponseInformation()
public ResponseInformation GetResponseInformation()
Public Function GetResponseInformation() As ResponseInformation
public ResponseInformation GetResponseInformation()
Returns(UInt64 position)
public IInputStream GetResultStreamAt(UInt64 position)
Public Function GetResultStreamAt(position As UInt64) As IInputStream
public IInputStream GetResultStreamAt(UInt64 position)
Parameters
- position
- System.UInt64System.UInt64System.UInt64System.UInt64
The position at which to start reading.
Returns
The result stream.
- StartAsync()StartAsync()StartAsync()StartAsync()
Starts an asynchronous upload operation.
public IAsyncOperationWithProgress<UploadOperation, UploadOperation> StartAsync()
public IAsyncOperationWithProgress<UploadOperation, UploadOperation> StartAsync()
Public Function StartAsync() As IAsyncOperationWithProgress( Of UploadOperation, UploadOperation )
public IAsyncOperationWithProgress<UploadOperation, UploadOperation> StartAsync()
Returns
An asynchronous upload operation that includes progress updates.
Remarks
An upload operation must be scheduled using one of the CreateUpload(Uri, IStorageFile), CreateUploadAsync(Uri, IIterable<BackgroundTransferContentPart>) , or CreateUploadFromStreamAsync(Uri, IInputStream)(); }); | https://docs.microsoft.com/en-us/uwp/api/Windows.Networking.BackgroundTransfer.UploadOperation | 2017-03-23T09:42:27 | CC-MAIN-2017-13 | 1490218186841.66 | [] | docs.microsoft.com |
general roadmap / internal plan
1. Now, think about your TOP 3 objectives so that they are “SMART” – specific, measurable, attainable, realistic, and time-based.
2. Describe how your social media objective supports or links to a goal in your organization’s communications plan.
For each of your goals, identify the following:
1. What is the purpose? Why is this goal important? What will be the benefit for your organization?
2 .How is it measurable? Come up with two or three quantifiable measurements to help you gauge your success.
3. What are you able to measure that will give you knowledge about your progress?
4. What defines success? Identify a benchmark for each measurement that will help you figure out how well you did in accomplishing your goal.
1. Who must you reach with your social media efforts to meet your objective? Why this target group?
2. Is this a target group identified in your organization’s communications plan?
3. What do they know or believe about your organization or issue? What will resonate with them?
4. What key points do you want to make with your audience?
Each social media channel is good for something different. Consider the strengths and weaknesses of each tool against your goals in order to determine which channels are right for your organization. | http://docs.joomla.org/index.php?title=Talk:Social_Media_Team&diff=prev&oldid=67782 | 2014-04-16T07:54:59 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.joomla.org |
View tree
Close tree
|
Preferences
|
|
Feedback
|
Legislature home
|
Table of contents
Search
Up
Up
sr63(1)(m)
(m) To postpone indefinitely, to reject or to nonconcur, as applicable (debatable, takes precedence over corresponding motion to approve,
see
rule
55
).
sr63(1)(n)
(n) To amend (debatable, must be germane,
see
rules
50
and
53
).
sr63(2)
(2) These several motions have precedence in the order in which they are set forth in this rule.
[(1)(m) and (n) rn. 1981 S.Res. 2]
[(1)(f) am. 1987 S.Res. 2, 1993 S.Res. 3]
[(1)(intro.), (d), (j) and (k) and (2) am. 2001 S.Res. 2]
sr64
Senate Rule
64.
Motion to adjourn always in order.
A motion to adjourn is always in order except when the senate is voting. However, a member may not move an adjournment when another member has the floor and 2 consecutive motions to adjourn are not in order unless other business intervenes. A motion to adjourn to a time certain or to recess has the same privilege as a motion to adjourn, but such motions have the order of precedence prescribed in rule
63
.
[am. 2001 S.Res. 2]
sr65
Senate Rule
65.
Laying on table.
sr65(1)
(1)
A motion to lay on the table has only the effect of disposing of the matter temporarily and it may be taken from the table at any time by order of the majority of those present.
sr65(2)
(2) A motion to lay a proposal on the table, if approved, has the effect of returning the matter to the committee on senate organization.
sr65(3)
(3) A motion to remove a proposal from the table, if approved, has the effect of withdrawing the matter from the committee on senate organization and placing it on the calendar.
[(2) and (3) am. 1987 S.Res. 2, 1993 S.Res. 3]
[am. 2001 S.Res. 2]
[(1) am. 2003 S.Res. 3]
sr66
Senate Rule
66.
Motion to postpone.
A motion to postpone to a day certain, to refer, or to postpone indefinitely, having failed, may not be again allowed on the same day unless the matter has been altered by amendment or advanced to a subsequent stage. A 2nd motion to reject an amendment is subject to this rule and may not be twice allowed on the same day unless the amendment was altered by amendment.
[am. 2001 S.Res. 2]
[am. 2005 S.Res. 2]
sr67
Senate Rule
67.
Motion to reconsider.
sr67(1)
(1)
A motion to reconsider a question may be made by a member having the floor who voted with the majority, or whose position recorded under rule
75
agreed with the majority. In the case of a voice vote or tie vote, the motion for reconsideration may be offered by a member not recorded absent on the question that is moved to be reconsidered. The motion for reconsideration is subject to all rules governing debate that apply to the question moved to reconsider.
sr67(2)
(2) On questions requiring by the constitution, statutes, rules, or otherwise, a specified number of affirmative votes, the prevailing side is the majority, but such minimum affirmative requirement does not apply to the question of reconsideration.
sr67(3)
(3) The motion for reconsideration shall be made on the same or the next succeeding roll call day and it shall be received under any order of business.
sr67(4)
(4) A motion to reconsider shall be put immediately after pending business of higher precedence is disposed of unless it is laid over to a future time by a majority vote. A motion for reconsideration may be laid on the table without debate.
sr67(5)
(5) After the time for receiving the motion has expired, a pending motion for reconsideration may not be challenged on the ground that the member making the motion did not vote with the majority.
sr67(6)
(6) A motion for reconsideration, when made on the same day as the action that is moved to be reconsidered, and not acted upon due to adjournment, other than adjournment under call on the question, expires with adjournment, but if made on the following day is not lost by adjournment. A motion to reconsider amendments to a proposal is in order notwithstanding the proposal's advancement to a 3rd reading and a motion to reconsider the advancement is in order notwithstanding the suspension of the rules to take final action if the motions for reconsideration are otherwise timely and in order. Reconsideration of amendments under this rule has the same priority as to order of action as to amend under rule 63.
sr67(7)
(7) Whenever a proposal is returned from the assembly, the governor, or elsewhere for further action pursuant to the senate's request for the return, motions for reconsideration necessarily incident to opening the proposal for further action shall be admitted regardless of the time limitation otherwise imposed by this rule. Action on executive vetoes or appointments or any motion to suspend the rules is not subject to a motion for reconsideration.
sr67(8)
(8) A motion for reconsideration, once entered, may only be withdrawn by the member making the motion, and only within the time when the motion by another member would still be timely; later only by consent of or action by the senate.
sr67(9)
(9) The motion for reconsideration having been put and lost may not be renewed but, if carried, subsequent motions for reconsideration of the same action are in order.
[(1) am. 1979 S.Res. 3]
[(1), (2) and (5) to (9) am. 2001 S.Res. 2]
[(3), (6), (7) and (8) am. 2003 S.Res. 3]
sr68
Senate Rule 68. Questions to be decided without debate and not placed on table.
A motion to adjourn, to adjourn to a fixed time, to take a recess, to lay on the table, to take from the table, to place a call, to raise a call, to grant a leave, to suspend the rules, or to reconsider a nondebatable question or a call for the current or previous question, are decided without debate and may not be placed on the table. All incidental questions of order arising after a motion is made for any of the questions named in this rule, and pending the motion, is decided, whether on appeal or otherwise, without debate.
[am. 2001 S.Res. 2]
[am. 2003 S.Res. 3]
[am. 2009 S.Res. 2]
sr69
Senate Rule 69. Privileged motion or resolution.
A motion or resolution relating to the organization or proceedings of the senate, or to any of its officers, members, or committees, is privileged in that it need not lie over for consideration, but may be taken up immediately unless referred to the calendar or committee.
Any such resolution shall be read at length unless copies of the full text of the resolution have been distributed to the members.
[am. 2001 S.Res. 2]
[am. 2003 S.Res. 3]
[am. 2009 S.Res. 2]
sr70
Senate Rule 70. Division of question.
sr70(1)
(1) A member may call for the division of a question, which shall be divided if it consists of propositions in substance so distinct that, one being taken away, a substantive proposition remains for the decision of the senate. A motion to delete and substitute is indivisible, but a motion to delete being lost does not preclude an amendment or a motion to delete and substitute. Division of action directly upon the substance of a proposal, as to pass, advance to a 3rd reading, indefinitely postpone, or any equivalent, which division may be accomplished by an amendment, are not permitted under this rule.
sr70(2)
(2) A bill vetoed in its entirety by the governor may not be divided. When a bill has been vetoed in part and the senate considers a specific item for passage notwithstanding the objections of the governor, any member may request that the item be divided. The item may be divided on request by a member if:
sr70(2)(a)
(a) The request proposes to so divide the item that each separate proposition, if passed notwithstanding the objections of the governor, will result in a complete and workable law regardless of the action taken on any other part of the original item.
sr70(2)(b)
(b) It is the opinion of the presiding officer that the item involves distinct and independent propositions capable of division and that the division will not be unduly complex.
sr70(3)
(3) When a bill has been vetoed in part the committee on senate organization may, by a resolution offered under rule 17 (2).
[am. 2001 S.Res. 2]
[(2) and (3) cr. 2005 S.Res. 2]
sr71
Senate Rule 71. Putting question.
All questions may be put in this form: "Those who are of the opinion that the bill pass, be concurred in, etc., (as the case may be) say, `Aye'. Those of contrary opinion say, `No';" or other appropriate words may be used.
sr72
Senate Rule 72. Ayes and noes.
sr72(1)
(1) The ayes and noes may be ordered by the presiding officer for any vote and shall be ordered when demanded by one-sixth of the members present. The chief clerk shall record the votes taken by ayes and noes, report the result, and enter the report in the journal together with the names of those absent or not voting.
sr72(2)
(2) Members shall remain in their seats and may not be disturbed by any other person while the ayes and noes are being called.
sr72(3)
(3) A request for a roll call is not in order after the result of the vote has been announced.
[(1) am. 2001 S.Res. 2]
sr73
Senate Rule 73. Every member to vote.
sr73(1)
(1) All members present when a question is put shall vote as their names are called. For a special cause the senate may excuse a member from voting, but it is not in order for a member to be excused after the senate has commenced voting.
sr73(2)
(2) When the vote is by ayes and noes, a member entering the chamber after the question is put and before it is decided may have the question stated and vote, with the vote being counted in the outcome.
[(2) am. 2001 S.Res. 2]
sr73m
Senate Rule 73m. Missed roll calls.
sr73m(1)
(1) A member who does not vote during a roll call on a proposal may request unanimous consent to have his or her vote included in that roll after the roll is closed, if all of the following apply:
sr73m(1)(a)
(a) The request does not interrupt another roll call.
sr73m(1)(b)
(b) The request is made no later than immediately following the close of the next occurring roll call.
sr73m(1)(c)
(c) The member's vote, if included, will not change the result of the roll call.
sr73m(2)
(2) If sub. (1) precludes a member from making a request or if the request is objected to, the member may request unanimous consent to have the journal reflect how the member would have voted had he or she been in his or her seat when the roll call was taken. A member may not interrupt a roll call to make a request under this subsection.
[cr. 2005 S.Res. 2]
sr74
Senate Rule 74. Explanation of vote not allowed.
Explanation by a member of his or her vote, at the time of the calling of the member's name, is not allowed.
sr75
Senate Rule 75. Recording position of absent member.
Any member absent from all or part of a day's session by leave of the senate under rule 16 or 23 or pursuant to rule 13 may, within one week after returning, instruct the chief clerk in writing to have the journal show that had the member been present when a certain vote was taken the member would on that issue have voted aye or have voted no. If the member returns before the vote is taken, the statement of position is void and the member shall cast his or her vote as required under rule 73.
[am. 2001 S.Res. 2]
[am. 2003 S.Res. 21]
[am. 2009 S.Res. 2]
Chapter 7: LIMITING DEBATE
sr76
Senate Rule 76. Scheduling time limits for debate.
sr76(1)
(1) Time limits and schedules for debate may be designated in the manner described in sub. (2). The time limits may be rejected or modified by majority vote of the members present, but this question is not debatable. The schedules and time limits shall be announced by the presiding officer immediately upon being presented. Promptly at the expiration of the time allotted, the presiding officer shall put the question.
sr76(2)
(2) Time limits and schedules for debate may be designated under sub. (1) by any of the following means:
sr76(2)(a)
(a) By the committee on senate organization.
sr76(2)(b)
(b) Jointly by the majority leader and the minority leader, if the committee on senate organization does not object.
sr76(2)(c)
(c) By the presiding officer, if the majority leader and the minority leader do not object.
Table of contents:
Student-programmer: Johannes Venzke
Mentor: Ralf Joachim
First of all I uncommented some dead links at the download page of castor.
I had an extensive session of testing. I ran through the tests in the cpactf package and compared the results with the
Some tests missed CREATE or DROP scripts and I looked after it. Amongst other things I had the opportunity to get to run test39.TestStoredProcedure for the first time. The tests give validation for functionality of the whole program and I helped to increase coverage.
Move SQL-related function of ClassMolder to SQLEngine.
The creation of SQLRelationLoader has already been moved to SQLEngine with CASTOR-3087.
As a first step the move to SQLEngine was completed and improved by also moving the "if statement" for the many-table decision and saving a parameter.
After that SQLRelationLoader retrieves all information in a newly created class called RelationTableInfo. This class is used as it represents the relation table and holds information about left and right foreign keys needed by SQLRelationLoader (CASTOR-3096). No other parameter is needed and no other procedure in SQLEngine. This is a way more elegant way to bind parameters. I had this in mind with CASTOR-3089, but the TableLink class has disappeared in the development process. It hasn't displayed n-to-n relation tables well.
InfoFactory didn't create ColumnInfos well. Types and convertors were missing in some cases. This was fixed with C-3095 and C-3100.
After a lot of time had elapsed trying to replace the old column retrieval procedure for SQLRelationLoader in SQLEngine, it turned out that the information presently stored in ColumnInfo wasn't sufficient to find the proper RelationTableInfo. That is to say, we needed to add an additional field that stores the name of the field in the mapping to match correctly the field name of the stored foreign reference with the field name of the field descriptor.
This progress is included in the patch of CASTOR-3096.
Objects are used to represent a SQL tree and real queries are generated automatically out of the tree structure. Refactorings of old code are being done to make the process more extensible and easier to understand.
CastorParameter objects were added that deal with parameters of the form "... WHERE $1 = 1000" and "$(int)2 = 5" to make the new CastorQLParser backward compatible (C-3117). These objects plus the following are the foundation for the parsers to work well.
As a next step, new QueryObjects should get created that could represent queries with COUNT aggregation functions and translate them to simple SQL. We canceled this job in favor of the more important GSoC projects.
The old Lexer and Parser should be replaced with new ones. This new parser, CastorQLParser, is in the starting blocks.
There were only problems of two kinds that hindered them from being used:
There is one problem left, which has to be solved before the new parser can be activated:
To approach this, I first added some tests to TestCastorQLParseTreeWalker to be able to view the current state of success.
The EJBQLParser should be brought to run. There is only one desired functionality that isn't supported at the moment. This is the simple COUNT aggregation function.
A test was added to TestEJBQLParseTreeWalker to show the current state of query support. We didn't continue the development.
Meanwhile I'm going to resolve Checkstyle warnings in the cpa packages. Did it with CASTOR-3110. Also with CASTOR-3112.
Tests are needed to validate all existing functionality of the existing parser. As soon as the CastorQLParser supports all constructs that the old one supported, it will be used instead of the old one.
I first created graphics that visualise the possible queries. Then I created tests for the LIMIT and ORDER BY clauses in C-3129.
Ralf did the remaining tests.
I had a look at all classes in the org.castor.cpa package (except the classes in the driver package) and created a result table as an overview of what has to be done.
I created the JDO documentation in DocBook format.
First I took the old pages and replaced tags that are functionally the same with the new ones.
Examples are:
The sections needed further adaptation and all needed an "id" for links and so on.
Example:
Sometimes I also needed to rearrange the structure of the document to fit DocBook.
Some tags also disappeared as there is no DocBook correspondent, like
I had to change some tags as the functionality isn't supported by DocBook.
There was a bug that no fields of an extended table got stored. I created a test case in C-3183 that showed the bug.
Unity Pro makes it possible to use real-time shadows on any light. Objects can cast shadows onto each other and onto parts of themselves ("self shadowing"). Directional, Spot and Point lights support shadows.
Using shadows can be as simple as choosing Hard Shadows or Soft Shadows on a Light. However, if you want optimal shadow quality and performance, there are some additional things to consider.
The Shadow Troubleshooting page contains solutions to common shadowing problems.
Curiously enough, the best shadows are non-realtime ones! Whenever your game level geometry and lighting is static, just precompute lightmaps in Unity. Computing shadows offline will always result in better quality and performance than displaying them in real time. Now onto the realtime ones...
Unity uses so-called shadow maps to display shadows. Shadow mapping is a texture-based approach; it's easiest to think of it as "shadow textures" projecting out from lights onto the scene. Thus, much like regular texturing, the quality of shadow mapping mostly depends on two factors:
Different Light types use different algorithms to calculate shadows.
Details on how shadow map sizes are computed are on the Shadow Size Details page.
Realtime shadows are quite performance hungry, so use them sparingly. For each light to render its shadows, first any potential shadow casters must be rendered into the shadow map, then all shadow receivers are rendered with the shadow map. This makes shadow casting lights even more expensive than Pixel lights, but hey, computers are getting faster as well!
Soft shadows are more expensive to render than hard shadows. The cost is entirely on the graphics card though (it's only longer shaders), so hard vs. soft shadows don't make any impact on the CPU or memory.
Quality Settings contains a setting called Shadow Distance - this is how far from the camera shadows are drawn. Often it makes no sense to calculate and display shadows that are 500 meters away from the camera, so use as low a shadow distance as possible for your game. This will help performance (and will improve quality of directional light shadows, see above).
Built-in shadows require a fragment program (pixel shader 2.0) capable graphics card. The following cards are supported:
Page last updated: 2013-03-19 | http://docs.unity3d.com/Documentation/Manual/Shadows.html | 2014-04-16T07:14:49 | CC-MAIN-2014-15 | 1397609521558.37 | [] | docs.unity3d.com |
Every now and then a really brilliant upload competition comes along that blows all the others out the water. Link TV’s View Change film contest does just that, offering filmmakers the chance to make short films that will raise awareness of the UN’s Millenium Development Goals that aims to reduce extreme poverty by 2015.
You can be a first time filmmaker or a pro – it doesn’t matter. They’re just looking for work that inspires action and packs a punch. It needn’t even be a doc – they accept any genre. There are six contest categories: Sustainability, Innovation, Overcoming Conflict, Empowerment, Leadership & Governance and Local/Global Partnerships, so there’s plenty of room to interpret the brief in interesting and original ways. The Grand Prize is $20,000 and $5,000 is awarded to the winner of each category, with online voters determining the finalists.
The competition opens 30 April and the deadline is 31 August.
Tags: Link TV, United Nation's Development Goals, Upload Competition | http://www.4docs.org.uk/blog/2010/03/link-tv-upload-competition-20000-prize/comment-page-43/ | 2014-04-16T07:13:45 | CC-MAIN-2014-15 | 1397609521558.37 | [] | www.4docs.org.uk |
GumTree is
Components
Documentations
GumTree 1.3.x (Current Release)
GumTree 1.4.x (Stable Release)
- User Guide (html / pdf)
- Developer Guide (html / pdf)
- API (Javadoc)
- New and Noteworthy
- Release Notes
- Project Plan
- User Tutorial.
Contact Information
Technical Questions: Send email to the developers' mailing list
Project Collaboration Issues: Contact Tony @ ANSTO | http://docs.codehaus.org/pages/viewpage.action?pageId=137625985 | 2014-04-16T08:22:53 | CC-MAIN-2014-15 | 1397609521558.37 | [array(['/download/thumbnails/71026/screenshot002.png?version=1&modificationDate=1170292610006&api=v2',
None], dtype=object)
array(['/download/thumbnails/71026/BestOpenSourceRCP.jpg?version=1&modificationDate=1170915632162&api=v2',
None], dtype=object) ] | docs.codehaus.org |
influxd generate
The influxd generate command generates time series data directly to disk using a schema defined in a TOML file.
Important notes
- influxd generate cannot run while the influxd server is running. The generate command modifies index and Time-Structured Merge Tree (TSM) data.
- Use influxd generate for development and testing purposes only. Do not run it on a production server.
Usage
influxd generate <schema.toml> [flags]
influxd generate .
2006-09-29
A few notes about some contributions on this page.
1. It seemed to me that, on the face of it, all of the offerings to emulate the "image_type_to_extension" function fell short of the mark in one way or another (see my comments below). That's why I wrote my own and submitted it to this page below. In respect of my work, any comments, bugs noted or improvements would be gratefully received.
2. Avoid using the Switch statement in an unconventional method to "Break" (I note the use of the return statement!). Also, even if it does nothing at the inception of our code, still put in the default case (it lets others realise that a default is not required or, at worst, was forgotten).
3. In an environment that is under your control, determining the content by its extension or MIME type may seem an attractive solution to a problem despite the risk of error. However, in the real world there's no guarantee that a MIME type or file extension is correct for its associated file.
Consider using functions to get the image type: getimagesize or exif_imagetype (this is available without GD).
4. There's more to coding than just putting something together to do a job!!! But whatever is done is worthwhile - Hence expletives have no place in this forum!!
5. The idea from "oridan at hotmail dot com" is a very slick idea. I will be taking a closer look at this for my own project. | http://docs.php.net/manual/de/function.image-type-to-extension.php | 2017-06-22T20:24:35 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.php.net |
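A minimal sketch of point 3 in practice (the file path is hypothetical): detect the real image type from the file contents, then map it to an extension:
<?php
// Trust the file's magic bytes, not the client-supplied name or MIME type.
$path = '/tmp/uploaded_file';                  // hypothetical uploaded file
$type = exif_imagetype($path);                 // returns an IMAGETYPE_* constant, or false
if ($type === false) {
    die('Not a recognised image.');
}
$ext = image_type_to_extension($type, true);   // true = include the leading dot, e.g. ".jpg"
echo "Detected extension: $ext\n";
?>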
The mining structure defines the data from which mining models are built: it specifies the source data view, the number and type of columns, and an optional partition into training and testing sets. A single mining structure can support multiple mining models that share the same domain. The following diagram illustrates the relationship of the data mining structure to the data source, and to its constituent data mining models.
The mining structure in the diagram is based on a data source that contains multiple tables or views, joined on the CustomerID field. One table contains information about customers, such as the geographical region, age, income and gender, while the related nested table contains multiple rows of additional information about each customer, such as products the customer has purchased. The diagram shows that multiple models can be built on one mining structure, and that the models can use different columns from the structure.
Model 1 Uses CustomerID, Income, Age, Region, and filters the data on Region.
Model 2 Uses CustomerID, Income, Age, Region and filters the data on Age.
Model 3 Uses CustomerID, Age, Gender, and the nested table, with no filter.
Because the models use different columns for input, and because two of the models additionally restrict the data that is used in the model by applying a filter, the models might have very different results even though they are based on the same data. Note that the CustomerID column is required in all models because it is the only available column that can be used as the case key.
This section explains the basic architecture of data mining structures: how you define a mining structure, how you populate it with data, and how you use it to create models. For more information about how to manage or export existing data mining structures, see Management of Data Mining Solutions and Objects.
Defining a Mining Structure
Setting up a data mining structure includes the following steps:
Define a data source.
Select columns of data to include in the structure (not all columns need to be added to the model) and defining a key.
Define a key for the structure, including the key for the nested table, if applicable.
Specify whether the source data should be separated into a training set and testing set. This step is optional.
Process the structure.
These steps are described in more detail in the following sections.
Data Sources for Mining Structures
When you define a mining structure, you use columns that are available in an existing data source view. A data source view is a shared object that lets you combine multiple data sources and use them as a single source. The original data sources are not visible to client applications, and you can use the properties of the data source view to modify data types, create aggregations, or alias columns.
If you build multiple mining models from the same mining structure, the models can use different columns from the structure. For example, you can create a single structure and then build separate decision tree and clustering models from it, with each model using different columns and predicting different attributes.
Moreover, each model can use the columns from the structure in different ways. For example, your data source view might contain an Income column, which you can bin in different ways for different models.
The data mining structure stores the definition of the data source and the columns in it in the form of bindings to the source data. For more information about data source bindings, see Data Sources and Bindings (SSAS Multidimensional). However, note that you can also create a data mining structure without binding it to a specific data source by using the DMX CREATE MINING STRUCTURE (DMX) statement.
Mining Structure Columns
The building blocks of the mining structure are the mining structure columns, which describe the data that the data source contains. These columns contain information such as data type, content type, and how the data is distributed. The mining structure does not contain information about how columns are used for a specific mining model, or about the type of algorithm that is used to build a model; this information is defined in the mining model itself. (Analysis Services - Data Mining).
To create a data mining model in SQL Server Data Tools (SSDT), you must first create a data mining structure. The Data Mining wizard walks you through the process of creating a mining structure, choosing data, and adding a mining model.
If you create a mining model by using Data Mining Extensions (DMX), you can specify the model and the columns in it, and DMX will automatically create the required mining structure. For more information, see CREATE MINING MODEL (DMX).
For more information, see Mining Structure Columns.
Dividing the Data into Training and Testing Sets
When you define the data for the mining structure, you can also specify that some of the data be used for training, and some for testing. Therefore, it is no longer necessary to separate your data in advance of creating a data mining structure. Instead, while you create your model, you can specify that a certain percentage of the data be held out for testing, and the rest used for training, or you can specify a certain number of cases to use as the test data set. The information about the training and testing data sets is cached with the mining structure, and as a result, the same test set can be used with all models that are based on that structure.
For more information, see Training and Testing Data Sets.
Enabling Drillthrough
You can add columns to the mining structure even if you do not plan to use the column in a specific mining model. This is useful if, for example, you want to retrieve the e-mail addresses of customers in a clustering model, without using the e-mail address during the analysis process. To ignore a column during the analysis and prediction phase, you add it to the structure but do not specify a usage for the column, or set the usage flag to Ignore. Data flagged in this way can still be used in queries if drillthrough has been enabled on the mining model, and if you have the appropriate permissions. For example, you could review the clusters resulting from analysis of all customers, and then use a drillthrough query to get the names and e-mail addresses of customers in a particular cluster, even though those columns of data were not used to build the model.
For more information, see Drillthrough Queries (Data Mining).
Processing Mining Structures
A mining structure is just a metadata container until it is processed. When you process a mining structure, Analysis Services creates a cache that stores statistics about the data, information about how any continuous attributes are discretized, and other information that is later used by mining models. The mining model itself does not store this summary information, but instead references the information that was cached when the mining structure was processed. Therefore, you do not need to reprocess the structure each time you add a new model to an existing structure; you can process just the model.
You can opt to discard this cache after processing, if the cache is very large or you want to remove detailed data. If you do not want the data to be cached, you can change the CacheMode property of the mining structure to ClearAfterProcessing. This will destroy the cache after any models are processed. Setting the CacheMode property to ClearAfterProcessing will disable drillthrough from the mining model.
However, after you destroy the cache, you will not be able to add new models to the mining structure. If you add a new mining model to the structure, or change the properties of existing models, you would need to reprocess the mining structure first. For more information, see Processing Requirements and Considerations (Data Mining).
Viewing Mining Structures
You cannot use viewers to browse the data in a mining structure. However, in SQL Server Data Tools (SSDT), you can use the Mining Structure tab of Data Mining Designer to view the structure columns and their definitions. For more information, see Data Mining Designer.
If you want to review the data in the mining structure, you can create queries by using Data Mining Extensions (DMX). For example, the statement SELECT * FROM <structure>.CASES returns all the data in the mining structure. To retrieve this information, the mining structure must have been processed, and the results of processing must be cached.
The statement SELECT * FROM <model>.CASES returns the same columns, but only for the cases in that particular model. For more information, see SELECT FROM <structure>.CASES and SELECT FROM <model>.CASES (DMX).
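As a hedged illustration only (the structure name, column names, and holdout value below are invented, and syntax details can vary by product version), the statements mentioned in this topic fit together roughly like this:
-- Define a structure and reserve 30 percent of the cases for testing.
CREATE MINING STRUCTURE [Customer Structure] (
    CustomerKey LONG KEY,
    Age LONG CONTINUOUS,
    Income LONG CONTINUOUS,
    Region TEXT DISCRETE
) WITH HOLDOUT (30 PERCENT)

-- Add a model that uses only a subset of the structure columns.
ALTER MINING STRUCTURE [Customer Structure]
ADD MINING MODEL [Customer Clusters] (
    CustomerKey,
    Age,
    Income
) USING Microsoft_Clustering

-- After processing, review the cases cached by the structure.
SELECT * FROM [Customer Structure].CASES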
Using Data Mining Models with Mining Structures
A data mining model applies a mining model algorithm to the data that is represented by a mining structure. A mining model is an object that belongs to a particular mining structure, and the model inherits all the values of the properties that are defined by the mining structure. The model can use all the columns that the mining structure contains or a subset of the columns. You can add multiple copies of a structure column to a structure. You can also add multiple copies of a structure column to a model, and then assign different names, or aliases, to each structure column in the model. For more information about aliasing structure columns, see Create an Alias for a Model Column and Mining Model Properties.
For more information about the architecture of data mining models, see Mining Models (Analysis Services - Data Mining).
Related Tasks
Use the links provided here to learn more about how to define, manage, and use mining structures.
See Also
Database Objects (Analysis Services - Multidimensional Data)
Mining Models (Analysis Services - Data Mining) | https://docs.microsoft.com/en-us/sql/analysis-services/data-mining/mining-structures-analysis-services-data-mining | 2017-06-22T20:50:38 | CC-MAIN-2017-26 | 1498128319902.52 | [array(['media/dmcon-modelarch.gif',
'Processing of data: source to structure to model Processing of data: source to structure to model'],
dtype=object) ] | docs.microsoft.com |
Worker Resources¶
Access to scarce resources like memory, GPUs, or special hardware may constrain how many of certain tasks can run on particular machines.
For example, we may have a cluster with ten computers, four of which have two GPUs each. We may have a thousand tasks, a hundred of which require a GPU and ten of which require two GPUs at once. In this case we want to balance tasks across the cluster with these resource constraints in mind, allocating GPU-constrained tasks to GPU-enabled workers. Additionally we need to be sure to constrain the number of GPU tasks that run concurrently on any given worker to ensure that we respect the provided limits.
This situation arises not only for GPUs but for many resources like tasks that require a large amount of memory at runtime, special disk access, or access to special hardware. Dask allows you to specify abstract arbitrary resources to constrain how your tasks run on your workers. Dask does not model these resources in any particular way (Dask does not know what a GPU is) and it is up to the user to specify resource availability on workers and resource demands on tasks.
Example¶
We consider a computation where we load data from many files, process each one with a function that requires a GPU, and then aggregate all of the intermediate results with a task that takes up 70GB of memory.
We operate on a three-node cluster that has two machines with two GPUs each and one machine with 100GB of RAM.
When we set up our cluster we define resources per worker:
dask-worker scheduler:8786 --resources "GPU=2"
dask-worker scheduler:8786 --resources "GPU=2"
dask-worker scheduler:8786 --resources "MEMORY=100e9"
When we submit tasks to the cluster we specify constraints per task:
from distributed import Client

client = Client('scheduler:8786')

data = [client.submit(load, fn) for fn in filenames]
processed = [client.submit(process, d, resources={'GPU': 1}) for d in data]
final = client.submit(aggregate, processed, resources={'MEMORY': 70e9})
Resources are Abstract¶
Resources listed in this way are just abstract quantities. We could equally well have used terms “mem”, “memory”, “bytes” etc. above because, from Dask’s perspective, this is just an abstract term. You can choose any term as long as you are consistent across workers and clients.
It’s worth noting that Dask separately track number of cores and available memory as actual resources and uses these in normal scheduling operation. | http://distributed.readthedocs.io/en/latest/resources.html | 2017-06-22T20:40:28 | CC-MAIN-2017-26 | 1498128319902.52 | [] | distributed.readthedocs.io |
This information helps you to create advanced reports in Alfresco Analytics and explains how the REST API relates to SSO (Single Sign-On).
Developing with Analytics
Install using the RPM file of the Fuel plugins catalog¶
To install the StackLight Collector Fuel plugin using the RPM file of the Fuel plugins catalog:
Go to the Fuel plugins catalog.
From the Filter drop-down menu, select the Mirantis OpenStack version you are using and the Monitoring category.
Download the RPM file.
Copy the RPM file to the Fuel Master node:
[root@home ~]# scp lma_collector-1.0.0-1.0.0-1.noarch.rpm \ root@<Fuel Master node IP address>:
Install the plugin using the Fuel Plugins CLI:
[root@fuel ~]# fuel plugins --install lma_collector-1.0.0-1.0.0-1.noarch.rpm
Verify that the plugin is installed correctly:
[root@fuel ~]# fuel plugins --list
id | name                 | version | package_version
---|----------------------|---------|----------------
1  | lma_collector        | 1.0.0   | 4.0.0
Install from source¶
Alternatively, you may want to build the plugin RPM file from source if, for example, you want to test the latest features of the master branch or customize the plugin.
Note
Running a Fuel plugin that you built yourself is at your own risk and will not be supported.
To install the StackLight Collector Plugin from source, first prepare an environment to build the RPM file. The recommended approach is to build the RPM file directly onto the Fuel Master node so that you will not have to copy that file later on.
To prepare an environment and build the plugin:
Install the standard Linux development tools:
[root@home ~] yum install createrepo rpm rpm-build dpkg-devel
Install the Fuel Plugin Builder. To do that, you should first get pip:
[root@home ~] easy_install pip
Then install the Fuel Plugin Builder (the fpb command line) with pip:
[root@home ~] pip install fuel-plugin-builder
Note
You may also need to build the Fuel Plugin Builder if the package version of the plugin is higher than the package version supported by the Fuel Plugin Builder you get from
pypi. For instructions on how to build the Fuel Plugin Builder, see the Install Fuel Plugin Builder section of the Fuel Plugin SDK Guide.
Clone the plugin repository:
[root@home ~] git clone
Verify that the plugin is valid:
[root@home ~] fpb --check ./fuel-plugin-lma-collector
Build the plugin:
[root@home ~] fpb --build ./fuel-plugin-lma-collector
To install the plugin:
Once you have created the RPM file, install the plugin:
[root@fuel ~] fuel plugins --install ./fuel-plugin-lma-collector/*.noarch.rpm
Verify that the plugin is installed correctly:
[root@fuel ~]# fuel plugins --list
id | name                 | version | package_version
---|----------------------|---------|----------------
1  | lma_collector        | 1.0.0   | 4.0.0
Using the Github UI to Make Pull Requests
This article helps you understanding and creating Pull Requests on Github, so you can contribute to a project like Joomla!.
Contents
What is a Pull Request?
A Pull Request is a request to pull some code to a repository (project) on GitHub.
In basic language you are asking if some code changes/additions you made can be used in the project. This changes can be the solution for a bug or a new feature. Github has a simple web interface that makes it very easy to propose a simple change to code. You don't need to install any software or do anything beside register for a git hub account.
Identify the change you like to made
First of all, map what changes you like to made.
For example, we like to add an icon for at the article info block, before the authors name. On the moment of writing, this icon is not displayed yet.
Find the file you want to modify on Github
If you don't have a GitHub account yet, sign up on GitHub. It is free and very easy to do. After that, go to the Joomla! CMS repository and find the file you like to edit. You can navigate through by clicking on the folder and file name.
This can be a hard step sometimes, because Joomla! counts more than 6000 files. In our example we have to edit the following file: /layouts/joomla/content/info_block/author.php.
Make your changes
Navigate to the file, and edit the file by clicking on the pencil icon on the right.
In our example, we add the following code to line 14 of the file: autor.php
<span class="icon-user"></span>
Note: you may have noticed the blue message above the page. This message is telling you that GitHub made a copy of the project for you, where you can make changes. This copy is called a Fork. The changes you made in this copy can be used in the original project. If you like to read more about how GitHub works, you can read this article for some background information.
Add a title and description
Below the editor, we can specify our Pull Request by adding a title and a description.
The title has to be short, and must tell what this pull request does.
The description contains more detailed information about the Pull request and some information how to test it. Make this information so complete and clear as possible. When you made a Pull Request of an issue on GitHub, it is very common to add also the issue ID in the description. You can do this by typing a # (hash tag) with directly following the issue ID. You can find this ID directly after the title of the issue, in the same notation.
Send the Pull Request
Click on the button "Propose file changes" and after that on the button "Create Pull Request". Your pull request is made now!
And now?
The only thing you have to do now is wait until someone will test this PR. When someone comments, you will be notified via an email. It can happen that someone requests addition information, so try to stay up-to-date with you Pull Request.
If a Pull Request is successfully tested twice, a moderator will add the label 'RTC' to it. RTC means: Ready To Commit'. In basic language it tells someone who has commit rights: This Pull request is successfully tested, and can be added to Joomla!. The admin will add (Merge) the project to the Joomla! CMS github repository. Your Pull Request is definitive now, and will be present in the next version of Joomla! if it was about fixing a bug or in the next minor version if the PR was about a new feature. | https://docs.joomla.org/Using_the_Github_UI_to_Make_Pull_Requests | 2017-06-22T20:33:54 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.joomla.org |
Use this information to install and configure Alfresco S3 Connector as an alternative content store.
Using an Alfresco Module Package, the connector supplies a new content store which replaces the default file system GB
Note:
Always install the S3 connector cleanly. Upgrades from a local content store to S3 are not supported, and will corrupt the Alfresco repository. | http://docs.alfresco.com/5.1/concepts/S3content-intro.html | 2017-06-22T20:44:40 | CC-MAIN-2017-26 | 1498128319902.52 | [] | docs.alfresco.com |
Message-ID: <192465965.213551.1560671912563.JavaMail.j2ee-conf@bmc1-rhel-confprod1> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_213550_1971252278.1560671912563" ------=_Part_213550_1971252278.1560671912563 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
High availability (HA) is a redundancy operation that automati= cally switches to a standby server if the primary server fails or is t= emporarily shut down for maintenance. HA enables the Presentation Server to= continuously function so that monitoring of your mission-critical systems = is continuously available. The Presentation Server supports out-of-the= box HA that eliminates the need for third-party software and reduces the m= anual steps required to deploy. HA utilizes a load balancer software compon= ent as an HA proxy to switch operations between the primary and standby ser= ver.
Performing the Presentation Server installation
2019-02-13_04-12-11_Load balancer req= uirements and guidelines for Atrium Single Sign-On high availability
High-availability deployment and best= practices for Infrastructure Management
Transferring the service between th= e active and standby Presentation Servers=20
Presentation Server= in HA mode
The Presentation Server in HA mode consists of two servers with ide= ntical configurations. The first server is referred to as the primary serve= r, and the standby server is referred to as the secondary server. The= primary server always takes on an active role and all the Presentatio= n Server processes that are running during this time. When the primary serv= er is down or in case of a failover, the secondary server takes on an activ= e role. Only one Presentation Server can be active at a time. In case = of a failover due to an event that triggers a server shutdown, the secondar= y server takes over the active role, and all of the processes change from s= tandby mode to operation mode on the secondary server.
The detection and management of a failover is built in to the Presentati= on Server. However, it does not manage the failback transfer back to = the primary server. You must issue CLI commands to restart the primary serv= er and re-establish its role as the active server.
the TrueSight Operations Management HA deployment comprises the followin= g systems:
A load balancer is a software component that routes the client requests = to the active server. In the context of the Presentation Server system= , the load balancer is more of a proxy server that accepts client requests = and directs these requests to the active server. The load balancer resides = on a separate computer and redirects requests to the active server.
In a successful HA deployment, the secondary server must take over when = the primary server is not working, or the primary server is ready to take o= ver a load balancer is required to direct the client requests to the active= server.
A load balancer:
There are two ways to deploy the TrueSight Presentation Server in HA mod= e:
You can choose to deploy the TrueSight Presentation Server in HA mo= de during installation by selecting the Enabled option, If= you choose to enable HA, you must specify which system is the primary serv= er and which system is the secondary server. For information about deployin= g in HA mode during installation, see Performing the Presentation S= erver installation.
You can choose to deploy the TrueSight Presentation Server in = HA Primary mode or HA Secondary mode post installation.
Navigate to the <Presentation Server installation direct= ory>\truesightpserver\bin directory, open a CLI command pr= ompt, and run the following commands:
tssh server stopcommand on the primary = and the secondary server.
tssh process start databasecommand on t= he primary server.
tssh ha configure mastercommand on the = primary server.
tssh process stop databaseon the primar= y server.
tssh server startcommand on the pr= imary server.
tssh ha configure standbycommand on the= secondary server.
tssh ha copysnapshoton the seco= ndary server.
tssh server startcommand on the seconda= ry server.
In the Presentation Server HA mode, the secondary server becomes th= e active server if the primary server stops operating, due to an event that= triggers a server shutdown. Once the primary server is up and running, it = does not become the active server by default. The primary server is still i= n a standby mode. The service can be transferred back to the primary server= , or the primary server can remain in standby mode.
During a failover, if you are accessing the TrueSight console using a lo=
ad balancer, you might view error messages such as
The Server is unab=
le to respond. Any operation using the TrueSight console might fail.=
This is a temporary condition and normalcy is restored when the active nod=
e becomes fully operational.
Note
Do not stop the active server immediately after restarting the standby s= erver. The cache synchronization takes time to complete and is dependent on= your environment.
If the primary and secondary= servers are both disconnected from the network, you must restart both serv= ers, and set one server to the active mode and the other server to standby = mode. | https://docs.bmc.com/docs/exportword?pageId=661342687 | 2019-06-16T07:58:32 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.bmc.com |
About the Extension Samples
Contents
Introducing Interaction Workspace Extension Samples
The Interaction Workspace Extension Samples provide developers with examples of various use cases. Recommended best practices to modify the out-of-the-box version of Interaction Workspace are used in these code samples. Genesys recommends that you examine the samples before making changes to Interaction Workspace.
Locating the Extension Samples
The Interaction Workspace Extension Samples are included in the Interaction Workspace API along with the Interaction Workspace API Reference documentation. You can also download the samples available on this wiki:
- InteractionWorkspaceExtensionSamples812.zip
- InteractionWorkspaceExtensionSamples811.zip
- InteractionWorkspaceExtensionSamples810.zip
This version might differ from the version available in the latest installation package of the Interaction Workspace.
The Interaction Workspace API contains everything that a software developer requires for customizing Interaction Workspace, including:
- A Bin directory that contains the Interaction Workspace API
- A Samples directory that contains code samples for developers that demonstrate Genesys' best practices recommendations
- An InteractionWorkspace directory that contains Interaction Workspace in the Wiki
The following use cases are included in the samples:
Deploying and Executing the Extension Samples
- Run the setup.exe program to use the wizard to install the Interaction Workspace.
- Click Next in the Welcome dialog box.
- Select Install Interaction Workspace Developer's Toolkit from the Select Options dialog.
- Click Next. The Ready to Install dialog box opens.
- Click Install.
- When installation completes, the Installation Complete window opens. Click Finished. For more information, see the online Interaction Workspace Deployment Guide.
- Verify that the following directories are installed:
- C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\ (This folder contains all of the required binaries).
- C:\Program Files\GCTI\Interaction Workspace\Samples\Genesyslab.Desktop.Modules.ExtensionSample\ (This folder contains the sample solution file).
- To build and debug your custom module in Interaction Workspace combined with "Interaction Workspace SIP Endpoint" or any other Interaction Workspace add-on, copy the add-on files into "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\".
For example:
- For Interaction Workspace SIP Endpoint, after you install this add-on copy the directory "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspaceSIPEndpoint" into the following location: "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\".
- For the Twitter plug-in, after you install the plug-in, copy the following files "C:\Program Files\GCTI\Interaction Workspace\Genesyslab.Desktop.Modules.Twitter.dll" and "C:\Program Files\GCTI\Interaction Workspace\Genesyslab.Desktop.Modules.Twitter.module-config" into the following location: "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\", and "C:\Program Files\GCTI\Interaction Workspace\Languages\Genesyslab.Desktop.Modules.Twitter.en-US.xml" into the following location: "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\Languages".
- To open the Extension Sample in Visual Studio 2008, click the Genesyslab.Desktop.Modules.ExtensionSample.sln solution file.
- Build the solution. Note: Building the solution also copies the content of "C:\Program Files\GCTI\Interaction Workspace\InteractionWorkspace\" to the following location: "C:\Program Files\GCTI\Interaction Workspace\Samples\Genesyslab.Desktop.Modules.ExtensionSample\bin\Debug".
- Open the project property dialog box, and click the Debug tab.
- In the Start Action section, select the Start external program option, and in the text field type: C:\Program Files\GCTI\Interaction Workspace.
Read Next
Write Custom Applications
This page was last modified on March 28, 2013, at 05:57.
Connect to Exchange Online Protection PowerShell
Exchange Online Protection PowerShell allows you to manage your Exchange Online Protection settings from the command line. You use Windows PowerShell on your local computer to create a remote PowerShell session to Exchange Online Protection. It's a simple three-step process where you enter your Office 365 credentials, provide the required connection settings, and then import the Exchange Online Protection cmdlets into your local Windows PowerShell session so that you can use them.
Tip
Having problems? Ask for help in the Exchange forums. Visit the forums at: Exchange Server, Exchange Online, or Exchange Online Protection.
Connect to Exchange Online Protection
For Exchange Online Protection in Office 365 Germany, use the ConnectionUri value:
For Exchange Online Protection subscriptions that are Exchange Enterprise CAL with Services (includes data loss prevention (DLP) and reporting using web services), use the ConnectionUri value:
The Exchange Online Protection cmdlets are imported into your local Windows PowerShell session and tracked by a progress bar. If you don't receive any errors, you connected successfully. A quick test is to run an Exchange Online Protection cmdlet, for example, Get-TransportRule, and confirm that it returns results from your Exchange Online Protection organization.
Remember to disconnect the session when you're finished so you don't tie up connections to the Exchange Online Protection PowerShell endpoint.
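The connection commands themselves did not survive in this copy of the page, so the following is only a sketch of the usual three-step sequence; replace the placeholder ConnectionUri with the value for your subscription type:
# 1. Capture your Office 365 credentials.
$UserCredential = Get-Credential

# 2. Open the remote session (the ConnectionUri below is a placeholder).
$Session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri "<ConnectionUri for your subscription>" `
    -Credential $UserCredential -Authentication Basic -AllowRedirection

# 3. Import the Exchange Online Protection cmdlets into your local session.
Import-PSSession $Session -DisableNameChecking

# When finished, release the session.
Remove-PSSession $Session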
See also
The cmdlets that you use in this topic are Windows PowerShell cmdlets. For more information about these cmdlets, see the following topics.
gos_i2c_device_t Struct Reference
Peripherals > I2C Master > Types
I2C peripheral context used by direct APIs.
Data Fields
gos_i2c_t port: The I2C peripheral port.
uint32_t speed: I2C clock speed.
uint16_t address: I2C slave address.
uint16_t retries: Number of times to retry a read/write.
uint16_t read_timeout: Max time in milliseconds to wait for each read byte (see note below).
uint8_t flags: Device flags.
gos_i2c_address_width_t address_width: Indicates the number of bits that the slave device uses for addressing.
Detailed Description
I2C peripheral context used by direct APIs.
Field Documentation
read_timeout
uint16_t gos_i2c_device_t::read_timeout
Max time in milliseconds to wait for each read byte.
Note: if set to 0 then defaults to 10ms.
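A hedged C sketch of filling in this context before calling the direct I2C APIs; the identifiers used for the port and the address width are assumptions, so check the Gecko OS headers for the real enum names:
gos_i2c_device_t device = {
    .port          = GOS_I2C_1,        // assumed name of the I2C peripheral port enum
    .speed         = 100000,           // 100 kHz standard-mode clock
    .address       = 0x48,             // 7-bit slave address of the target device
    .retries       = 3,                // retry each failed read/write up to 3 times
    .read_timeout  = 0,                // 0 selects the default 10 ms per-byte timeout
    .flags         = 0,                // no device flags
    .address_width = GOS_I2C_ADDRESS_WIDTH_7BIT   // assumed enum value for 7-bit addressing
};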
AWS Account - IAM Roles and Access Policies
The Amazon IAM service enables the creation and enforcement of access privilege policies. For the Velostrata deployment we leverage IAM Groups and Instance Roles. As a minimal setup we recommend the following configuration:
- Create an IAM Group (for example, VelosMgrGroup) for use by the Velostrata service user account. This group will enforce an access policy with the minimum privileges required by the Velostrata Manager VM on-prem, to allow provisioning and monitoring of both the Velostrata cloud-side components as well as the Velostrata Run-in-Cloud workload VMs. The Velostrata service account will be used by the Velostrata Manager VM on-prem.
- Create an IAM Role (for example, VelosEdgeRole) for use by Velostrata Cloud Edge instances. This role provides the minimum privileges required to access AWS services such as S3, without managing persistent credentials per instance.
- Create Access Policies associated with VelosMgrGroup and VelosEdgeRole with applicable minimum privileges required for the Velostrata service user and for Velostrata Cloud Edge instances.
Note: For more information on creating the AWS service user, see Creating the AWS Service User for Velostrata. | http://docs.velostrata.com/m/75846/l/470323-aws-account-iam-roles-and-access-policies | 2019-06-16T06:49:50 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.velostrata.com |
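For illustration only, the group and role from the example names above could be created with the AWS CLI roughly as follows; the file and policy names are hypothetical, and the actual policy documents must come from the Velostrata documentation:
# Group for the Velostrata service user account.
aws iam create-group --group-name VelosMgrGroup

# Instance role assumed by Velostrata Cloud Edge EC2 instances.
aws iam create-role --role-name VelosEdgeRole \
    --assume-role-policy-document file://ec2-trust-policy.json

# Attach the access policies (policy JSON not shown here).
aws iam put-group-policy --group-name VelosMgrGroup \
    --policy-name VelosMgrPolicy --policy-document file://velos-mgr-policy.json
aws iam put-role-policy --role-name VelosEdgeRole \
    --policy-name VelosEdgePolicy --policy-document file://velos-edge-policy.json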
The Contrast agent is designed to require little to no interaction from the user to set up instrumentation on a .NET Core application. Once the environment is set up through environment variables or application launch profile, the .NET Core agent automatically instruments the ASP.NET Core application. The agent performs analysis as users (or automated scripts or tests) exercise applications. You can view the results of the agent's analysis in the Contrast UI.
The Contrast .NET Core agent consists of two components that run within the same process as your application.
The .NET Profiler that instruments applications to weave in method calls out to agent sensors.
Sensors that gather security, architecture and library information.
These components are located in several DLL files that you may download from the Contrast UI. You can place them anywhere on disk (and they don't need to be placed in your application folder).
To update the agent, replace the agent files in the agent directory and restart your application. As the agent is running alongside your application, it can't update itself.
The agent automatically starts with your application as long as the environment is set up as described in .NET Core installation.
To stop the agent, stop the application and remove the agent from its environment. Alternatively, you may change the CORECLR_ENABLE_PROFILING setting to 0.
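For reference, a rough sketch of the environment variables involved; the GUID and library path below are placeholders rather than Contrast-specific values, so use the values from your agent's installation instructions. Setting CORECLR_ENABLE_PROFILING to 0, or removing these variables, disables the agent.
export CORECLR_ENABLE_PROFILING=1                                   # set to 0 to disable the agent
export CORECLR_PROFILER={00000000-0000-0000-0000-000000000000}      # placeholder profiler GUID
export CORECLR_PROFILER_PATH=/opt/contrast/placeholder-profiler.so  # placeholder path and file name
dotnet MyApp.dll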
Both C++test standalone and the C++test Eclipse plugin allow C++= test to be used with IAR Embedded Workbench-no special integration is requi= red.
C++test support in the IAR Embedded Workbench should not be confused wit= h the ability to fully integrate with the development environment; rather, = it is preconfigured to work with IAR in the following ways:
This chapter contains information common to all supported Embedded Workb= ench architectures and toolchains. Information about specific toolchains is= organized into subsections. The following toolchains are currently support= ed: | https://docs.parasoft.com/exportword?pageId=38636506 | 2019-06-16T06:44:28 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.parasoft.com |
.
cd ..; cp -r unit_testing functional_testing; cd functional_testing
Add
webtestto our project's dependencies in
setup.pyas a Setuptools "extra":
Install our project and its newly added dependency. Note that we use the extra specifier
[dev]to install testing requirements for development and surround it and the period with double quote marks.
$VENV/bin/pip install -e ".[dev]"
Let's extend
functional_testing/tutorial/tests.pyto include a functional test:
Be sure this file is not executable, or
pytestmay not include your tests.
Now run the tests:
$VENV/bin/pytest tutorial/tests.py -q .. 2 passed in 0.25 seconds
Analysis¶
We now have the end-to-end testing we were looking for. WebTest lets us simply
extend our existing
pytest-based test approach with functional tests that
are reported in the same output. These new tests not only cover our templating,
but they didn't dramatically increase the execution time of our tests. | https://docs.pylonsproject.org/projects/pyramid/en/latest/quick_tutorial/functional_testing.html | 2019-06-16T06:41:37 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.pylonsproject.org |
Contents Now Platform Administration Previous Topic Next Topic Global variables in business rules Subscribe Log in to subscribe to topics and get notified when content changes. ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Share Global variables in business rules Predefined global variables are available for use in business rules. Use the following predefined global variables to reference the system in a business rule script. Global variable Description current The current state of the record being referenced. Check for null before using this variable. previous The state of the referenced record prior to any updates made during the execution context, where the execution context begins with the first update or delete operation and ends after the script and any referenced business rules are executed. If multiple updates are made to the record within one execution context, previous will continue to hold the state of the record before the first update or delete operation. Available on update and delete operations only. Not available on asynch operations. Check for null before using this variable. g_scratchpad The scratchpad object is available on display rules, and is used to pass information to the client to be accessed from client scripts. gs References to GlideSystem functions. The variables current, previous, and g_scratchpad are global across all business rules that run for a transaction. Prevent null pointer exceptions In some cases, there may not be a current or previous state for the record when a business rule runs, which means that the variables will be null. To check for null before using a variable, add the following code to your business rule: if (current == null) // to prevent null pointer exceptions. return; Define variables User-defined variables are globally scoped by default. If a new variable is declared in an order 100 business rule, the business rule that runs next at order 200 also has access to the variable. This may introduce unexpected behavior. To prevent such unexpected behavior, always wrap your code in a function. This protects your variables from conflicting with system variables or global variables in other business rules that are not wrapped in a function. Additionally, variables such as current must be available when a function is invoked in order to be used. The following script } } On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/istanbul-platform-administration/page/script/business-rules/concept/c_UsingPredefinedGlobalVariables.html | 2019-06-16T07:10:58 | CC-MAIN-2019-26 | 1560627997801.20 | [] | docs.servicenow.com |
Description
The OSBREAD command reads data from a file starting at a specified byte location for a certain length of bytes, and assigns the data to a variable. It takes the general form:
OSBREAD var FROM file.var [AT byte.expr] LENGTH length.expr [ON ERROR statements]
Where:
- var specifies a variable to which to assign the data read,
- FROM file.var specifies a file from which to read the data,
- AT byte.expr specifies a location in the file from which to begin reading data. If byte.expr is 0, the read begins at the beginning of the file,
- LENGTH length.expr specifies a length of data to read from the file, starting at byte.expr. length.expr cannot be longer than the maximum string length determined by system configuration,
- ON ERROR statements specifies statements to execute if a fatal error occurs (if the file is not open, or if the file is a read-only file). If the ON ERROR clause is not specified, the program terminates under such fatal error conditions.
Note:
- Before using OSBREAD, a file must be opened using the OSOPEN or OPENSEQ command.
- The ASCII 0 character [CHAR (0)] is used as a string-end delimiter. Therefore, ASCII 0 cannot be used in any string variable within jBASE. OSBREAD converts CHAR(0) to CHAR(128) when reading a block of data.
- After execution of OSBREAD, the STATUS function returns either 0 or a failure code.
- OSBREAD performs an operating system block read on a UNIX or Windows file.
An example of use is a program statement that reads 10,000 bytes of the file MYPIPE starting from the beginning of the file. The program assigns the data it reads to the variable Data.
OSBREAD Data FROM MYPIPE AT 0 LENGTH 10000
Go back to jBASE BASIC. | https://docs.jbase.com/277546-osbread | 2019-12-06T03:11:30 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jbase.com |
Have a question about using AppDynamics or have you run into a problem? Try the following steps.
Search the Documentation
You can search for information from the following:
- The search field on the right side of the top menu bar initiates a search of the entire documentation set.
- The search field in the left navigation pane searches within the current version only.
You can search across AppDynamics information resources, including the community, knowledge base, support site, and more, from the AppDynamics Support Center. Help Center.Help Center.
Ask the Community
If you have questions about using AppDynamics, try asking the AppDynamics community.
Contact Support
If you need further assistance, contact your account representative or technical support.
For technical support, click the Help tab while logged in with your AppDynamics account in the AppDynamics Support Center.
When requesting support, attach relevant logs for your issue:
- For log files from the Controller, see Platform Log Files.
- For the heap, histogram, and thread dumps, see Controller Dump Files. | https://docs.appdynamics.com/display/PRO45/AppDynamics+Support | 2019-12-06T02:41:57 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.appdynamics.com |
All content with label as5+buddy_replication+dist+distribution+docbook+gridfs+import+infinispan+jta+loader+lock_striping+murmurhash2+notification+rebalance+write_through.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, recovery, transactionmanager, release, partitioning, query, deadlock, archetype, jbossas, nexus, guide, schema,
listener, state_transfer, cache, amazon, s3, grid, jcache, test, api, xsd, ehcache, maven, documentation, write_behind, 缓存, ec2, hibernate, aws, interface, custom_interceptor, setup, clustering, eviction, concurrency, out_of_memory, jboss_cache, index, events, configuration, hash_function, batch, colocation, xa, cloud, mvcc, tutorial, xml, jbosscache3x, read_committed, meeting, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, permission, transaction, interactive, xaresource, build, hinting, searchable, demo, installation, scala, client, non-blocking, migration, filesystem, jpa, tx, eventing, client_server, testng, murmurhash, infinispan_user_guide, standalone, repeatable_read, snapshot, webdav, hotrod, docs, batching, consistent_hash, store, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - as5, - buddy_replication, - dist, - distribution, - docbook, - gridfs, - import, - infinispan, - jta, - loader, - lock_striping, - murmurhash2, - notification, - rebalance, - write_through )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/label/as5+buddy_replication+dist+distribution+docbook+gridfs+import+infinispan+jta+loader+lock_striping+murmurhash2+notification+rebalance+write_through | 2019-12-06T03:40:37 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.jboss.org |
Summary of additions and changes to the Mobile Wallet.
31-10-2019
Bug fixes:
- Fixed a bug where the receive screen crashed when entering a note.
- Fixed a bug where values on the success screen would switch to 0.
Changes:
- Admins can now configure wallets for sending to only mobile numbers or only email addresses.
- Added file size limit information to file uploads.
- Changed text highlight colour to neutral grey instead of primary company color.
- Admins can now display a custom message on any confirm screen.
08.
- Issue where recipient was intermittantly was blank in transaction history has been fixed.. | https://docs.rehive.com/wallets/mobile-wallet/changelog/ | 2019-12-06T03:56:45 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.rehive.com |
...
In the examples we discuss here, we use an API named PhoneVerify
FindFeeds, which is based on the online search functionality provided by YouTube (). We use as the Production Url/Endpoint.
...
...
- Log in to API Publisher Web interface (), and go to Add API page. Create new API with following information.
- Name: FindFeedsPhoneVerify
- Context: /feedsphoneverify
- Version: 1.0.0
- Appropriate image as the thumbnail
- Endpoint:
- Tiers: Tier: Gold, Silver, Bronze (select all 3 – this field supports multiple values)
- Business owner: Bruce Wayne
- Business owner e-mail: [email protected]
- Technical owner: Peter Parker
- Technical owner e-mail: [email protected]: HTTP/HTTPS
- Production Endpoint:
- Define API resources for the operations you need to perform.
Specify None as the Auth Type of OPTIONS
For each resource that has HTTP verbs requiring Authentication (i.e., Auth Type is not NONE), enable OPTIONS with None Auth type. For example, as the following screen shot shows, resources with
/feedsphoneverify/1.0.0URL Pattern has HTTP verbs with Auth Type as
Application & Application User. Therefore, we must give None as the Auth Type of OPTIONS. This is to support CORS between the API Store and Gateway. But, if no authentication is needed for any of the HTTP verbs, you don't have to specify None Auth type to OPTIONS.
Publish the API to the API Store.
... | https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=35618879&selectedPageVersions=2&selectedPageVersions=3 | 2019-12-06T03:39:21 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.wso2.com |
TODO: Add the floating point ops
There's a bunch of prepackaged behavior that you could implement by analyzing the ASTs and composing sets of operations, but here's an easier way to do it:
You can chop a bitvector into a list of chunks of
n bits with
val.chop(n)
You can endian-reverse a bitvector with
x.reversed
You can get the width of a bitvector in bits with
val.length
You can test if an AST has any symbolic components with
val.symbolic
You can get a set of the names of all the symbolic variables implicated in the construction of an AST with
val.variables | https://docs.angr.io/appendix/ops | 2019-12-06T03:05:15 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.angr.io |
Net
Tcp Context Binding Class
Definition
Provides a context-enabled binding for the NetTcpContextBinding binding.
public ref class NetTcpContextBinding : System::ServiceModel::NetTcpBinding
public class NetTcpContextBinding : System.ServiceModel.NetTcpBinding
type NetTcpContextBinding = class inherit NetTcpBinding
Public Class NetTcpContextBinding Inherits NetTcpBinding
- Inheritance
- NetTcpContextBinding contextManagementEnabled attribute in the binding configuration. This attribute is not recognized by the .NET Framework 3.5 runtime and the application will thrown an ConfigurationErrorsException with the message "Unrecognized attribute 'contextManagementEnabled". To workaround this problem, remove the contextManagementEnabled attribute from the binding configuration. | https://docs.microsoft.com/en-us/dotnet/api/system.servicemodel.nettcpcontextbinding?view=netframework-4.8 | 2019-12-06T04:09:15 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
and Teams may not know the actual location of a caller making an Emergency Services call, which could result in the call being routed to the wrong Emergency Services call center and/or emergency services being dispatched to the wrong location; (ii) if the user's Teams client is offline, or if the user's device is unable to access the internet for any reason, such as a network outage or power outage, Emergency Services calls through Phone System in Office 365 are not supported and are not expected to work;
Emergency Calling disclaimer label
Feedback | https://docs.microsoft.com/en-us/microsoftteams/emergency-calling-terms-and-conditions | 2019-12-06T03:08:20 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.microsoft.com |
Install Northwind Traders database and apps
By following the steps in this series of topics, you can discover concepts about relational data as implemented in a sample database in Common Data Service. You can also explore sample business apps, both canvas and model-driven, for managing that data and earn practical experience by creating such an app.. This sample appeared with the first versions of Microsoft Access and is still available as an Access template.
Prerequisites
- A Power Apps license that supports Common Data Service. You can use a free trial license for 30 days.
- An environment with a Common Data Service database. You can create such an environment if you have appropriate permissions.
Download the solution
This solution file (.zip) contains the definitions of entities, option sets, and business processes; the canvas and model-driven apps; and any other pieces that are used together.
Install the solution
Sign in to Power Apps, and then ensure that you're working in an environment that contains a Common Data Service database.
In the left navigation pane, select Solutions, and then select Import in the toolbar near the top of the screen:
In the Select Solution Package page, select Browse.
Find the file that you downloaded, and then select Open.
Unless you selected a different location, the file will be in your Downloads folder.
If you have the correct file (the version number might vary), select Next:
In the Solution Information page, select Next if the name of the solution and the publisher are correct.
In the Import Options page, select Import to confirm SDK message handling, which the sample requires:
Another page appears and shows progress as the solution is installed over the next few minutes:
When the installation finishes, the original page shows the result:
Select Close.
Load the sample data
Select Apps, and then select Northwind Sample Data.
Wait a few minutes if the Northwind apps don't appear immediately after you install the solution:
When the app asks for permission to interact with Common Data Service, select Allow:
After the app loads and shows that the sample entities contain no records, select Load Data to populate the entities:
As the app loads the data, dots march across the top of the app, and the number of records increases.
Entities are loaded in a specific order so that relationships can be established between records. For example, the Order Details entity has a many-to-one relationship with the Orders and Order Products entities, which are loaded first.
You can cancel the process at any time by selecting Cancel, and you can remove the data at any time by selecting Remove Data:
When the data finishes loading, the last row (Many to Many Relationships) shows Done, and the Load Data and Remove Data buttons are enabled again:
Sample apps
The Northwind solution includes these apps for interacting with this data:
- Northwind Orders (Canvas)
- Northwind Orders (Model-driven)
You open these apps the same way that you opened the app in the previous procedure.
Canvas
This single-screen app offers a simple master-detail view of the Orders entity, where you can view and edit a summary of the order and each line item for an order. A list of orders appears near the left edge, and you can select an arrow in that list to show a summary and the details of that order. More information: Overview of the canvas app for Northwind Traders.
Model-driven
This app operates on the same data (in the Orders entity) as the canvas app. In the list of orders, show more information about an order by selecting its number:
A summary of the order appears on a separate form:
If you scroll down the form, it shows the same line items as the canvas app does:
Do it yourself
You can follow step-by-step instructions to create the canvas app shown earlier in this topic. The instructions are divided into three parts:
If you want to skip ahead, the solution contains a starting-point app for each part. In the list of apps, look for Northwind Orders (Canvas) - Begin Part 1 and so on.
This overview of the app explains the user interface, data sources, and how relationships are used.
Feedback | https://docs.microsoft.com/en-us/powerapps/maker/canvas-apps/northwind-install | 2019-12-06T03:34:28 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['media/northwind-install/orders-canvas.png',
'List of orders and details in Northwind canvas app'], dtype=object)
array(['media/northwind-install/orders-model.png',
'list of orders in Northwind model-driven app'], dtype=object)
array(['media/northwind-install/orders-model-2.png',
'order details in model-driven app'], dtype=object)
array(['media/northwind-install/orders-model-3.png',
'more order details in model-driven app'], dtype=object)] | docs.microsoft.com |
Splunk¶
Overview¶
The Morpheus Splunk Integration allows forwarding logs from managed Linux hosts and vm’s to a target Splunk listener by changing the rsyslogd config on linux vm’s to point to Splunk forwarders. The logs will be forwarded from the clients, not from the Morpheus Appliance.
Adding Splunk Integration¶
Add a syslog listener configuration in Splunk.
Navigate to
Administration -> Logs
Expand the Splunk section
Enable the integration
Fill in the following:
- Enabled
Enable the Splunk integration
- Host
IP or Hostname of the Splunk server.
- Port
Port configured to access the Splunk server.
SAVE
Once added, syslogs from managed Linux hosts and vm’s will be forwards from the clients to the target Splunk listener. | https://docs.morpheusdata.com/en/3.6.3/integration_guides/Logs/Splunk.html | 2019-12-06T02:55:36 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.morpheusdata.com |
Quick Reflection Environment Setup
Setting Up a Level to use the Reflection Environment
-
Reflection Capture Lightmap Mixing
-
Reflection capture shapes
Editing Reflection Probes
Performance Considerations
-:
Add a few lights to your level and build the lighting once as there must be some indirect diffuse lighting for the Reflection Environment to show up at all.
From the Modes panel under the Visual Effects section, select and drag a Sphere Reflection Capture Actor into the level.
If you fail to see reflections in your level or your reflections are not as strong as you require you can try the following:
Make sure that your Materials have a noticeable Specular and a low Roughness to better show reflections.
Use the Reflection Override view mode to visualize what is being captured to gain a better idea of what values in your Materials need to be adjusted.
Setting Up a Level to use the Reflection Environment
The first step toward having good reflections is setting up diffuse lighting including indirect lighting through the use of lightmaps. The Lightmass page contains more info on how to accomplish this if you are unfamiliar with using it. Common errors preventing Lightmass indirect lighting from working after a build include but are not limited to the following:
A shadow casting skybox.
Lack of a LightmassImportanceVolume.
Missing or incorrectly setup lightmap UVs
Having Force No Precomputed Lighting set to True in the World Properties.
Since the level's diffuse color is what will be reflected through the Reflection Environment you will need to do the following for the best results.
Ensure significant contrast between directly lit and shadowed areas.
Remember that the bright diffuse lit areas are what will show up clearly in reflections.
Darker shadowed areas are where the reflections will be most visible.
Use the Lit viewmode together with the Specular show flag disabled to see the level as the reflection captures see it.
It is also extremely important to setup your level's Materials to work well with the Reflection Environment by keeping the following in mind.
A flat, mirror surface will reveal the inaccuracies of combining cubemaps projected onto simple shapes.
Curvy geometry or rough surfaces can both obscure these artifacts and provide convincing results.
It is important to use detail Normal maps and specify some degree of roughness on Materials that will be used in flat areas as this will help them better show off reflections.
Place reflection captures in the areas that you want to have reflections. Try to place the sphere captures such that the part of the level you want to reflect is contained just inside their radius since the level will be reprojected onto that sphere shape. Try to avoid placing captures too close to any level geometry, as that nearby geometry will dominate and block important details behind it.
Glossy Indirect Specular
In technical terms, the Reflection Environment provides indirect specular. We get direct specular through analytical lights, but that only provides reflections in a few bright directions. We also get specular from the sky through a Sky Light, but that does not provide local reflections since the Sky Light cubemap is infinitely far away. Indirect specular allows all parts of the level to reflect on all other parts, which is especially important for Materials like metal which have no diffuse response.
Full lighting
.
Materials with varying glossiness are supported by generating blurry mipmaps from the captured cubemaps.
However, just using the cubemap reflections on a very rough surface results in an overly bright reflection that has significant leaking due to lack of local occlusion. This is solved by reusing the lightmap data generated by Lightmass. The cubemap reflection is mixed together with the lightmap indirect specular based on how rough the material is. A very rough material (fully diffuse) will converge on the lightmap result. This mixing is essentially combining the high detail part of one set of lighting data (cubemaps) with the low-frequency part of another set of lighting data (lightmaps). For this to work correctly, though, only indirect lighting can be in the lightmap. This means that only the indirect lighting from Stationary lights can improve the quality of reflections on rough surfaces. Static light types should not be used together with the Reflection Environment as they will put direct lighting in the lightmap. Note that this mixing with the lightmap also means that the map must have meaningful indirect diffuse lighting and that lighting must already be built to see results.
Reflection Capture Lightmap Mixing
When you use Reflection Capture Actors, UE4 mixes the indirect Specular from the Reflection Capture with the indirect Diffuse lighting from lightmaps. This helps to reduce leaking since the reflection cubemap was only captured at one point in space, but the lightmaps were computed on all the receiver surfaces and contain local shadowing information.
While lightmap mixing works well for rough surfaces, this method breaks down on smooth surfaces as reflections from Reflection Capture Actors no longer match reflections from other methods like Screen Space Reflections or Planar Reflections. Because of this, lightmap mixing is no longer applied to very smooth surfaces. A surface with Roughness value of 0.3 will get full lightmap mixing, fading out to no lightmap mixing by Roughness 0.1 and below. This also allows Reflection Captures and Screen Space Reflections to match better and make it harder to spot transitions between the two.
Lightmap Mixing and Existing Content
By default, lightmap mixing will be enabled which means it will affect existing content. In cases where you had reflections leaking on smooth surfaces, that leaking will be more apparent. To solve this, you can either place additional Reflection Capture Actors around the level to help reduce the leaking. Or you can revert to the old lightmap mixing behavior by going to Edit > Project Settings > Rendering > Reflections and then un-check the Reduce lightmap mixing on smooth surfaces.
You can fine tune the amount of lightmap mixing that will take place by adjusting the following commands via the UE4 console.
r.ReflectionEnvironmentBeginMixingRoughness (default = 0.1)
r.ReflectionEnvironmentEndMixingRoughness (default = 0.3)
r.ReflectionEnvironmentLightmapMixBasedOnRoughness (default = 1)
r.ReflectionEnvironmentLightmapMixLargestWeight (default = 1000)
High Quality Reflections
While the default reflection quality settings strike a good balance between performance and visual quality, there could be instances where you want to achieve even higher quality reflections. The following sections describe the available methods for achieving high quality reflections.
High Precision Static Mesh Vertex Normal and Tangent Encoding
An important factor in achieving high quality reflections is how accurately the vertex normal and tangent can be represented. Very high density meshes may lead to adjacent vertices quantizing to the same vertex normal and tangent values. This can lead to blocky jumps in normal orientation. We added the option to encode normals and tangents as 16 bits per channel vectors which enables developers to make the trade off between higher quality and how much additional memory is used encoding vertex buffers.
To enable High Precision Static Mesh Vertex Normal and Tangent Encoding:
In the Content Browser, double-click on a Static Mesh to open it up in the Static Mesh Editor.
In the Static Mesh Editor, go the Details panel and expand the LOD0 option.
At the bottom of the LOD0, there is a section called Build Settings. Click on the small triangle next to Build Settings to expand the Build Settings options.
Enable the Use High Precision Tangent Basis option by clicking on the check box next to its name and then press the Apply Changes button to apply the new settings.
The viewport will automatically update to reflect the changes.
The quality of the reflection that is viewed is directly related to how densely tessellated the Static Mesh is. Static Meshes that have less tessellation will have more stretching artifacts in the reflection than Static Meshes that have more tessellation.
High Precision GBuffer Normal Encoding
Enabling the High Precision GBuffer Normal Encoding option will allow the GBuffer to use a higher precision Normal encoding. This higher precision GBuffer Normal encoding encodes the Normal vector into three channels with each channel having 16 bits per. Using this higher precision encoding allows techniques like Screen Space Reflections (SSR) to rely on high precision normals.
To enable High Precision GBuffer Normal Encoding:
In the Main Toolbar, select Edit > Project Settings to open the Project Settings.
In the Project Settings under the Engine section, click on the Rendering option and under the Optimizations section change the Gbuffer Format from Default to High Precision Normals.
Keep in mind that this encoding requires increased GPU memory and enabling this will have a direct impact on your project's performance.
Since changing the GBuffer format does not require you to restart the Editor, you can quickly change between the different GBuffer formats to see the impact they will have on the reflection visuals. In the image below we can see how changing the GBuffer format from the Default to High Precision Normals changes the look and quality of the reflection.
Reflection capture shapes).
Sphere shape
The sphere shape is currently the most useful. It never matches the shape of the geometry being reflected but has no discontinuities or corners, therefore, the error is uniform.
The sphere shape has an orange influence radius that controls which pixels can be affected by that cubemap, and the sphere that the level will be reprojected onto.
Smaller captures will override larger ones, so you can provide refinement by placing smaller captures around the level.
Box shape
The box shape is very limited in usefulness and generally only works for hallways or rectangular rooms. The reason is that only pixels inside the box can see the reflection, and at the same time all geometry inside the box is projected onto the box shape, creating significant artifacts in many cases.
The box has an orange preview for the projection shape when selected. It only captures the level within Box Transition Distance outside this box. The influence of this capture fades in over the transition distance as well, inside the box.
Editing Reflection Probes
When making edits to Reflection Probes there are a number of different things that you must remember to do to ensure that you get the results you are after. In the following section we will cover what these things are and how you can make sure you are getting the best quality reflections in your projects.
Updating Reflection Probes
It is important to note that Reflection Probes are not automatically kept up to date. Only the following actions will automatically update the Reflection Probes place in a level.
Directly editing a Reflection Capture Actor properties.
Building the levels lighting.
If you make any other kind of edit to the level like change a light's brightness or move around level geometry, you will need to select a Reflection Capture Actor and click the Update Captures button to propagate the changes.
Using a Custom HDRI Cubemap in a Reflection Probe
Reflection Probes have the ability to not only specify which cubemap they should be using for reflection data but also what size that cubemap should be. Previously UE4 hard-coded the resolution of the cooked cubemaps reflection probes would use. Now developers can choose the resolution that best suits their needs based on performance, memory, and quality tradeoffs. Below is a comparison image that shows the difference between using the Captured Scene option versus the Specified Cubemap option.
To specify a custom HDRI Cubemap for your project's Reflection Probes to use you will need to do the following:
First, make sure that you have an HDRI Cubemap Texture available for use. If you do not have an HDRI Cubemap Texture in your project, one comes bundled with the Starter Content called HDRI_Epic_Courtyard_Daylight.
Select a Reflection Probe Actor that has been placed in the level and in the Details panel under the Reflection Capture section change the Reflection Source Type from Captured Scene to Specified Cubemap
With the Reflection Probe still selected in the level, go to the Content Browser and select the HDRI Texture you want to use. Then in the Reflection Capture Actor, under the Reflection Capture drag the HDRI Texture from the Content Browser to the Cubemap input.
Press the Update Capture button to refresh the Reflection Capture Actor to use the new HDRI Cubemap Texture that was just specified.
Adjusting Reflection Probe Resolution
You can globally adjust the resolution of the HDRI Cubemaps that are used for the Reflection Capture Actors by doing the following:
Open up your Project settings by going to the Main Toolbar and then selecting Edit > Project Settings.
From the Project Settings menu go to the Engine > Rendering section and then look for the Textures option.
By adjusting the Reflection Capture Resolution option you can increase or decrease the size of the HDRI Cubemap Texture that was specified..
The following images show what how the reflections will look when the Reflection Capture Resolution is set to 1, 4, 8, 16, 32, 64, 128, 256, 512 and 1024.
Drag the slider to see how the different resolutions affect the look of the reflection.
Adjusting Skylight Reflection Resolution
Like with the Reflection Probes, Skylights also have the ability to define and adjust the resolution of the HDRI cubemap that they use for reflections. To utlize this functionality in your UE4 project you will need to do the following:
From the Mode panel under the Lights section, select and then drag a Skylight into your level.
Select the Skylight and in the Details panel under the Light section, change the Source Type from SLS Captured Scene to SLS Specified Cubemap.
Click on the drop down box in the Cubemap section and select an HDRI cube map from the list.
After the cubemap has been selected you can adjust its resolution by changing the value in the Cubemap Resolution input..
Blending Multiple Reflection Probe Data
You can blend between multiple different cubemap reflections by providing the Reflection Capture Actors with different HDRI cubemaps. To accomplish this in your UE4 project all you need to is the following:
First, make sure that you have at least one Reflection Probe added to your level and that you have changed the Reflection Source Type to Specified Cubemap and input an HDRI Texture into the Cubemap input.
Duplicate or add a new Reflection Probe to the level and position / adjust its Influence Radius it so that part of it's yellow influence radius is intersecting with the first Reflection Probe.
Select the newly duplicated / created Reflection Probe Actor and in the Details panel under the Cubemap section change the HDRI cubemap to a different one.
With the Reflection Probe that was added / duplicated still selected, go to the Details panel in the Reflection Capture section and press the Update Captures button to update the reflection to use what was input in the Cubemap input.
If you select and move the Reflection Probe around the level you can get a better idea for how the two HDRI cubemaps will blend together.
Visualizing
The Reflection Override viewmode has been added to make it easier to see how well the reflections are set up. This viewmode overrides all normals to be the smooth vertex normal, and makes all surfaces fully specular and completely smooth (mirror like). Limitations and artifacts of the Reflection Environment are also clearly visible in this mode so it is important to switch to Lit periodically to see what the reflections look like in normal conditions (bumpy normals, varying glossiness, dim specular).
Some new show flags have been added which are useful for isolating down the components of the lighting:
Performance Considerations
The Reflection Environment cost is only dependent on how many captures influence the pixels on the screen. It is very similar to deferred lighting in this sense. Reflection captures are bounded by their influence radius and therefore they are culled very efficiently. Glossiness is implemented through the cubemap mipmaps so there is little performance difference between sharp or rough reflections.
Limitations
Reflections through this method are approximate. Specifically, the reflection of an object will rarely match up with the actual object in the level due to projection onto simple shapes. This tends to create multiple versions of that object in reflections as many cubemaps are being blended together. Flat, smooth surfaces that cause mirror reflections will show the error very noticeably. Use detail normal maps and roughness to help break up the reflection and these artifacts.
Capturing the level into cubemaps is a slow process which must be done outside of the game session. This means dynamic objects cannot be reflected, although they can receive reflections from the static level.
Only the level's diffuse is captured to reduce error. Purely specular surfaces (metals) will have their specular applied as if it were diffuse during the capture.
There can be significant leaking when there are different lighting conditions on both sides of a wall. One side can be setup to have correct reflections, but it will always leak into the other side.
Due to DX11 hardware limitations, the cubemaps used to capture the level are all 128 on each side, and the world can have at most 341 reflection captures enabled at one time. | https://docs.unrealengine.com/en-US/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/index.html | 2019-12-06T03:50:25 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/reflection_environment.jpg',
'Reflection Environment'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/DiffuseOnly.jpg',
'Diffuse Only'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/ReflectionOnly.jpg',
'Reflection Only'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/Complete.jpg',
'Full Scene'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/VaryingGlossiness.jpg',
'Varying Glossiness'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/2RoughWithNoShadowing.jpg',
'Reflections on a rough surface with no shadowing'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/2RoughWithShadowing.jpg',
'Rough with Shadowing'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/ReduceLightmapMixingOnSmoothSurfaces_Off.jpg',
'Reduce Lightmap Mixing On Smooth Surfaces Off'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/ReduceLightmapMixingOnSmoothSurfaces_On.jpg',
'Reduce Lightmap Mixing On Smooth Surfaces On.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/Enable_LM_Mixing.jpg',
'Enable_LM_Mixing.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/SM_HP_Vertex_Normals_Off.jpg',
'High Precision Vertex Normal Off'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/SM_HP_Vertex_Normals_On.jpg',
'High Precision Vertex Normal On'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/Default_GBuffer_Format.jpg',
'Default GBuffer Format'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/Hight_P_GBuffer_Format.jpg',
'High Precision GBuffer Format'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/SphereShape.jpg',
'Sphere Shape'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/BoxShape.jpg',
'Box Shape'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/CC_Capture_Scene.jpg',
'Captured Scene'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/CC_Specified_Cubemap.jpg',
'Specified Cubemap'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_1.jpg',
'RCR_1.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_2.jpg',
'RCR_2.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_3.jpg',
'RCR_3.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_4.jpg',
'RCR_4.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_5.jpg',
'RCR_5.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_6.jpg',
'RCR_6.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_7.jpg',
'RCR_7.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_8.jpg',
'RCR_8.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_9.jpg',
'RCR_9.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/RCR_10.jpg',
'RCR_10.png'], dtype=object)
array(['./../../../../../Images/Engine/Rendering/LightingAndShadows/ReflectionEnvironment/ReflectionOverride.jpg',
'Reflection Override'], dtype=object) ] | docs.unrealengine.com |
AWS Lambda Layers.
Layers let you keep your deployment package small, which makes development easier. You can avoid errors that can occur when you install and package dependencies with your function code. For Node.js, Python, and Ruby functions, you can develop your function code in the Lambda console as long as you keep your deployment package under 3 MB.
Note
A function can use up to 5 layers at a time. The total unzipped size of the function and all layers can't exceed the unzipped deployment package size limit of 250 MB. For more information, see AWS Lambda Limits.
You can create layers, or use layers published by AWS and other AWS customers. Layers support resource-based policies for granting layer usage permissions to specific AWS accounts, AWS Organizations, or all accounts.
Layers are extracted to the
/opt directory in the function execution environment. Each runtime
looks for libraries in a different location under
/opt, depending on the language. Structure your layer so that function code can access libraries without
additional configuration.
You can also use AWS Serverless Application Model (AWS SAM) to manage layers and your function's layer configuration. For instructions, see Declaring Serverless Resources in the AWS Serverless Application Model Developer Guide.
Sections
Configuring a Function to Use Layers
You can specify up to 5 layers in your function's configuration, during or after function creation. You choose a specific version of a layer to use. If you want to use a different version later, update your function's configuration.
To add layers to your function, use the
update-function-configuration command. The following
example adds two layers: one from the same account as the function, and one from a
different account.
$
aws lambda update-function-configuration --function-name my-function \ --layers{ "FunctionName": "test-layers", "FunctionArn": "arn:aws:lambda:us-east-2:123456789012:function:my-function", "Runtime": "nodejs12.x", "Role": "arn:aws:iam::123456789012:role/service-role/lambda-role", "Handler": "index.handler", "CodeSize": 402, "Description": "", "Timeout": 5, "MemorySize": 128, "LastModified": "2018-11-14T22:47:04.542+0000", "CodeSha256": "kDHAEY62Ni3OovMwVO8tNvgbRoRa6IOOKqShm7bSWF4=", "Version": "$LATEST", "TracingConfig": { "Mode": "Active" }, "RevisionId": "81cc64f5-5772-449a-b63e-12330476bcc4", "Layers": [ { "Arn": "arn:aws:lambda:us-east-2:123456789012:layer:my-layer:3", "CodeSize": 169 }, { "Arn": "arn:aws:lambda:us-east-2:210987654321:layer:their-layer:2", "CodeSize": 169 } ] }
arn:aws:lambda:us-east-2:123456789012:layer:my-layer:3\
arn:aws:lambda:us-east-2:210987654321:layer:their-layer:2
You must specify the version of each layer to use by providing the full ARN of the layer version. When you add layers to a function that already has layers, the previous list is overwritten by the new one. Include all layers every time you update the layer configuration. To remove all layers, specify an empty list.
$
aws lambda update-function-configuration --function-name my-function --layers []
Your function can access the content of the layer during execution in the
/opt directory. Layers
are applied in the order that's specified, merging any folders with the same name.
If the same file appears in
multiple layers, the version in the last applied layer is used.
The creator of a layer can delete the version of the layer that you're using. When this happens, your function continues to run as though the layer version still existed. However, when you update the layer configuration, you must remove the reference to the deleted version.
Managing Layers
To create a layer, use the
publish-layer-version command with a name, description, ZIP archive,
and a list of runtimes that are compatible with the layer. The list of
runtimes is optional, but it makes the layer easier to discover.
$
aws lambda publish-layer-version --layer-name my-layer --description "My layer" --license-info "MIT" \ --content S3Bucket=lambda-layers-us-east-2-123456789012,S3Key=layer.zip --compatible-runtimes python3.6 python3.7{ :1", "Description": "My layer", "CreatedDate": "2018-11-14T23:03:52.894+0000", "Version": 1, "LicenseInfo": "MIT", "CompatibleRuntimes": [ "python3.6", "python3.7", "python3.8" ] }
Each time you call
publish-layer-version, you create a new version. Functions that use the layer
refer directly to a layer version. You can configure
permissions on an existing layer version, but to make any other changes, you must create a new
version.
To find layers that are compatible with your function's runtime, use the
list-layers
command.
$
aws lambda list-layers --compatible-runtime python3.8{ "Layers": [ { "LayerName": "my-layer", "LayerArn": "arn:aws:lambda:us-east-2:123456789012:layer:my-layer", "LatestMatchingVersion": { "LayerVersionArn": "arn:aws:lambda:us-east-2:123456789012:layer:my-layer:2", "Version": 2, "Description": "My layer", "CreatedDate": "2018-11-15T00:37:46.592+0000", "CompatibleRuntimes": [ "python3.6", "python3.7", "python3.8", ] } } ] }
You can omit the runtime option to list all layers. The details in the response reflect
the latest version of
the layer. See all the versions of a layer with
list-layer-versions. To see more information about a
version, use
get-layer-version.
$
aws lambda get-layer-version --layer-name my-layer --version-number 2{ :2", "Description": "My layer", "CreatedDate": "2018-11-15T00:37:46.592+0000", "Version": 2, "CompatibleRuntimes": [ "python3.6", "python3.7", "python3.8" ] }
The link in the response lets you download the layer archive and is valid for 10 minutes.
To delete a layer
version, use the
delete-layer-version command.
$
aws lambda delete-layer-version --layer-name my-layer --version-number 1
When you delete a layer version, you can no longer configure functions to use it. However, any function that already uses the version continues to have access to it. Version numbers are never re-used for a layer name.
Including Library Dependencies in a Layer
You can move runtime dependencies out of your function code by placing them in a layer.
Lambda runtimes include
paths in the
/opt directory to ensure that your function code has access to libraries that are
included in layers.
To include libraries in a layer, place them in one of the folders supported by your runtime.
Node.js –
nodejs/node_modules,
nodejs/node8/node_modules(
NODE_PATH)
Example AWS X-Ray SDK for Node.js
xray-sdk.zip └ nodejs/node_modules/aws-xray-sdk
Python –
python,
python/lib/python3.8/site-packages(site directories)
Example Pillow
pillow.zip │ python/PIL └ python/Pillow-5.3.0.dist-info
Java –
java/lib(classpath)
Example Jackson
jackson.zip └ java/lib/jackson-core-2.2.3.jar
Ruby –
ruby/gems/2.5.0(
GEM_PATH),
ruby/lib(
RUBY_LIB)
Example JSON
json.zip └ ruby/gems/2.5.0/ | build_info | cache | doc | extensions | gems | └ json-2.1.0 └ specifications └ json-2.1.0.gemspec
All –
bin(
PATH),
lib(
LD_LIBRARY_PATH)
Example JQ
jq.zip └ bin/jq
For more information about path settings in the Lambda execution environment, see Environment Variables Available to Lambda Functions.
Layer Permissions
Layer usage permissions are managed on the resource. To configure a function with
a layer, you need permission
to call
GetLayerVersion on the layer version. For functions in your account, you can get this
permission from your user policy or from the function's resource-based policy. To use a layer in another account, you
need permission on your user policy, and the owner of the other account must grant
your account permission with a
resource-based policy.
To grant layer-usage permission to another account, add a statement to the layer version's
permissions policy
with the
add-layer-version-permission command. In each statement, you can grant permission to a
single account, all accounts, or an organization.
$
aws lambda add-layer-version-permission --layer-name xray-sdk-nodejs --statement-id xaccount \ --action lambda:GetLayerVersion --principal 210987654321 --version-number 1 --output texte210ffdc-e901-43b0-824b-5fcd0dd26d16 {"Sid":"xaccount","Effect":"Allow","Principal":{"AWS":"arn:aws:iam::210987654321:root"},"Action":"lambda:GetLayerVersion","Resource":"arn:aws:lambda:us-east-2:123456789012:layer:xray-sdk-nodejs:1"}
Permissions only apply to a single version of a layer. Repeat the procedure each time you create a new layer version.
For more examples, see Granting Layer Access to Other Accounts. | https://docs.aws.amazon.com/en_us/lambda/latest/dg/configuration-layers.html | 2019-12-06T02:39:36 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.aws.amazon.com |
.
J repeating units, there is also a flip(f) option that defines that the top and bottom crossing bonds are flipped during each connection. repeating groups with specified repetition ranges.
Substructure search is not yet prepared to handle the case when
Repeating unit drawing is described in the Marvin Sketch Help here, and ladder-type bracket drawing is described at the polymer drawing section.
Position here .
Limitations:.:
Homology translation.
number of enumerates does not exceed 100.
R-group queries of Markush targets are not supported with
undefinedRAtom:g/gh/ghe options when query structures contain undefined R-atom(s). They are supported only with
undefinedRAtom:a and
undefinedRAtom:u options..
molconvert mrv scaffold.mrv -R1 r1_definitions.mrv -R2 r2_definitions.mrv | https://docs.chemaxon.com/display/docs/Searching+in+Markush+targets+tables?reload=true | 2019-12-06T04:05:16 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.chemaxon.com |
Blank Page When Try to Activate or Deactivate a Plugin
Getting a blank page means that your website is throwing an php error but because of the sites settings this error is not shown to the frontend.
You have several options to check the exact error message and to get know what plugin is causing the issue:
- Install the free Query Monitor plugin. It will show you any error on your site. We use that plugin everyday!
- Enable the WordPress debug mode. Read here how to do so.
- Check your php or WordPress log files. | https://docs.mashshare.net/article/75-blank-page-when-i-try-to-activate-or-deactivate-a-plugin | 2019-12-06T03:26:48 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.mashshare.net |
This section provides a high-level example of how service model objects in BMC Atrium CMDB are published to the cell and how they are viewed and monitored in Infrastructure Management.
In this example, your BMC Atrium CMDB maintains an online ordering service model that has three services - online ordering, databases, and web servers.
The following illustration describes this relationship:
In BMC Atrium CMDB, you use the BMC Impact Model Designer to plot out the service model objects. It uses the Atrium - Publish Me and My Providers publication filter to publish the service model to Infrastructure Management. In a sandbox dataset, you specify how each component in the service model is published to the cell. Because the online ordering service is a top-level consumer component, you configure it to publish with its provider components.
Note
Do not change the default publication setting for the provider components; by default, their publication is determined by the setting of their consumer components.
After setting up the service model components, you promote the service model. Promotion reconciles objects from the sandbox dataset to the production dataset. By default, service model objects are automatically published to the cell.
The following figure shows an example of how the service model looks in the administrator console. The lock icon that is displayed by each component indicates that the component cannot be edited in the administrator console. You can only edit a service model object in its source environment.
Example of the published service model in the administrator console
After you publish components, you associate the required monitors with these components.
By default, the services that you publish from BMC Atrium CMDB do not contain any metrics. For an effective Probable Cause Analysis, you have to add those metrics that indicate the health/status of the services. | https://docs.bmc.com/docs/display/tsim107/How+service+model+objects+in+BMC+Atrium+CMDB+are+published+and+monitored+in+Infrastructure+Management | 2019-12-06T04:35:43 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.bmc.com |
This example runs using the
dapp-server-js repository to deploy a serverless API. It provides a default implementation of a Mobius DApp backend running on Webtask.io and exposes some generic endpoints like
/api/balance and
/api/charge. If your DApp frontend requires something different, you can fork the repo and extend the API up to your needs.
Cloning flappy-dapp Repository
All Flappy Bird examples are contained in a single repository. Clone the flappy-dapp repository to get started.
Generating Key Pairs
If this is the first time running one of the examples, you will need to generate testnet key pairs along with a
dev-wallet.html file.
Navigate to the root directory of the repository and use
mobius-cli create dev-wallet to generate a new
dev-wallet.html file and account key pairs.
Installing Mobius CLI
If Mobius CLI has not yet been installed on your machine run
gem install mobius-client. See Installation docs for more details on generating key pairs.
Install the backend dependencies, register a Webtask account, setup the server and run it.
Installing Dependencies
Navigate to
backend/serverless-quickstart and install dependencies.
Register Account
If you have never used Webtask.io before, quickly register an account with webtask.io by running:
$ yarn wt profile init
Installing Webtask CLI
If Webtask CLI has not been installed on your machine, see Webtask's official docs.
The setup will ask 4 quick questions:
What network will your DApp operate on?
Test SDF Network ; September 2015
What is your DApp secret key?
Open dev-wallet.html file and use application private key.
What is your DApp Name?
Flappy Bird
What is your DApp Domain?
flappy.mobius.network - used for setting JSON web token issuer.
Running the Server
Run the server locally and note the port that the server is running on, by default the port will be set to
8080.
Install the frontend dependencies, set the correct endpoints and run it.
Installing Dependencies
In a new terminal window navigate to the
frontend/ from the root directory and install dependencies.
Setting Endpoints
Set the API endpoints to connect to for functions such as
/charge or
/balance according to the address your server is running on.
- Open
main.jsfound at
frontend/public/js/main.js.
- On line #18, change the
DAPP_APIvariable to match your current servers address followed by
/api.
// Server running in on localhost:8080 const DAPP_API = ''; // or // Server running in production on webtask.io after using '$yarn deploy' const DAPP_API = '';
/api
Make sure the Flappy Bird DAPP_API variable ends with /api.
Running the Frontend
At the root of the frontend/ folder, run the project locally and take note of the localhost address it is running on.
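For example, assuming the frontend package also defines a standard start script:
$ yarn start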
Entering the Flappy Bird DApp requires authentication; this is the process of a user opening your DApp from the DApp Store. To simulate this in a development environment, open dev-wallet.html in your browser.
Set Auth Endpoint
The Auth Endpoint is where the authorization requests will be sent to; this needs to be adjusted either in the browser or in the HTML itself.
# Using the port the server is currently running on
http://localhost:8080/auth
# or
# Server running in production on webtask.io after using '$ yarn deploy'
https://<your-webtask-url>/auth
/auth
Make sure the Auth Endpoint value ends with /auth.
Set Redirect URI
The Redirect URI is where the user is redirected to with a Token after being successfully authorized. This is the localhost address that the frontend of the example is currently running on. If you are having trouble finding this address see Running the Frontend.
# Using the port the frontend is currently running on
http://localhost:<frontend-port>
Once the endpoints are set, the backend and frontend servers are running, and the dev-wallet.html file has been correctly set with the auth endpoint and redirect URI, the project can be run.
Use the "Open" button under "Normal Account" to use an authenticated and funded user account to play Flappy Bird using testnet MOBI!
dapp-server-js
This example is running using the dapp-server-js repository available on GitHub. Clone for quick project reuse or fork for customization and quickly deploy a serverless solution for your Mobius DApps.
What's Next
View the API endpoints built into this example. | https://docs.mobius.network/docs/serverless-quickstart-api | 2019-12-06T03:07:50 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.mobius.network |
Step options for Generic, PowerShell, and Python plugins
Important: Although the content in this topic is relevant for this version of XL Deploy, we recommend that you use the rules system for customizing deployment plans. For more information, see Getting started with XL Deploy rules.
If you create a plugin based on the Generic or PowerShell plugin, you can specify step options that control the data that is sent when performing a CREATE, MODIFY, DESTROY, or NOOP deployment step defined by a configuration item (CI) type. Step options also control the variables that are available in templates or scripts.
What is a step option?
A step option specifies the extra resources that are available when performing a deployment step. A step option is typically used when the step executes a script on a remote host. This script, or the action to be performed, may have zero or more of the following requirements:
- The artifact associated with this step, needed in the step's workdir.
- External file(s) in the workdir.
- Resolved FreeMarker template(s) in the workdir.
- Details of the previously deployed artifact in a variable in the script context.
- Details of the deployed application in a variable in the script context.
The type definition must specify the external files and templates involved by setting its classpathResources and templateClasspathResources properties. For example, see the shellScript delegate in the Generic plugin. Information on the previously deployed artifact and the deployed application is available when applicable.
When are step options needed?
For some types, especially types based on the Generic plugin, the default behavior is that all classpath resources are uploaded and all FreeMarker templates are resolved and uploaded, regardless of the deployment step type. These resources may result in a large amount of data, especially if the artifact is large. For some steps, you may not need to upload all resources.
For example, creating the deployed on the target machine may involve executing a complex script that needs the artifact and some external files, modifying it involves a template, but deleting the deployed is completed by removing a file from a fixed location. In this case, it is not necessary to upload everything each time, because it is not all needed.
Step options enable you to use the createOptions, modifyOptions, destroyOptions, and noopOptions properties on a type, and to specify the resources to upload before executing the step itself.
If you want a deployment script to refer to the previous deployed or to have information about the deployed application, you can make this information available by setting the step options.
Generic plugin and PowerShell plugin options
The following step options are available for the Generic plugin and PowerShell plugin:
none: Do not upload anything extra as part of this step. You can also use this option to unset step options from a supertype.
uploadArtifactData: Upload the artifact associated with this deployed to the working directory before executing this step.
uploadClasspathResources: Upload the classpath resources, as specified by the deployed type, to the working directory when executing this step.
Generic plugin options
The following additional step option is available in the Generic plugin:
uploadTemplateClasspathResources: Resolve the template classpath resources, as specified by the deployed type, then upload the result into the working directory when executing this step.
PowerShell plugin options
The following additional step option is available in the PowerShell plugin:
exposePreviousDeployed: Add the previousDeployed variable to the PowerShell context. This variable points to the previous version of the deployed CI, which must not be null.
exposeDeployedApplication: Add the deployedApplication variable to the PowerShell context, which describes the version, environment, and deployeds of the currently deployed application. Refer to the udm.DeployedApplication CI for more information.
When can my plugin CI types use step options?
Your plugin CI types can use step options when they inherit from one of the following Generic or PowerShell plugin deployed types:
- generic.AbstractDeployed
- generic.AbstractDeployedArtifact
- generic.CopiedArtifact
- generic.ExecutedFolder
- generic.ExecutedScript
- generic.ExecutedScriptWithDerivedArtifact
- generic.ManualProcess
- generic.ProcessedTemplate
- powershell.BasePowerShellDeployed
- powershell.BaseExtensiblePowerShellDeployed
- powershell.ExtensiblePowerShellDeployed
- powershell.ExtensiblePowerShellDeployedArtifact
These types provide the hidden SET_OF_STRING properties createOptions, modifyOptions, destroyOptions, and noopOptions that your type inherits.
What are the default step option settings for existing types?
XL Deploy comes with various predefined CI types based on the Generic and the PowerShell plugins. For the default settings of createOptions, modifyOptions, destroyOptions, and noopOptions, see the Generic Plugin Manual and the PowerShell Plugin Manual.
You can override the default type definitions in the synthetic.xml file, and you can change the defaults in the conf/deployit-defaults.properties file.
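For example, a custom type based on the Generic plugin could override these defaults in synthetic.xml as follows (the type name custom.DeployedConfigFile and the chosen option values are illustrative, not part of the standard plugins):
<type-modification type="custom.DeployedConfigFile">
  <property name="createOptions" kind="set_of_string" hidden="true" default="uploadArtifactData, uploadClasspathResources, uploadTemplateClasspathResources"/>
  <property name="modifyOptions" kind="set_of_string" hidden="true" default="uploadTemplateClasspathResources"/>
  <property name="destroyOptions" kind="set_of_string" hidden="true" default="none"/>
</type-modification>
The same defaults can also be changed in conf/deployit-defaults.properties, for example: custom.DeployedConfigFile.createOptions=uploadArtifactData, uploadClasspathResources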
Step options in the Python plugin
The Python plugin does not have step options. However, the python.PythonManagedDeployed CI has a property that is similar to one of the PowerShell step options:
exposeDeployedApplication: Add the deployedApplication object to the Python context (udm.DeployedApplication).
There are no additional classpath resources in the Python plugin, so only the current deployed is uploaded to a working directory when the Python script is executed. | https://docs.xebialabs.com/v.9.0/xl-deploy/concept/step-options-for-generic-powershell-and-python-plugins/ | 2019-12-06T04:23:04 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.xebialabs.com |
Within settings under Chat Widget, you can change the look and feel of the Chat Widget to match your branding and change any messaging that is relevant to you. You will see a preview of your changes on the right side of the page.
Default Language - Use this to set a default language to your chat widget.
Greeting - Set the default title you would like visitors to see when they see the Live Chat Widget.
Team Intro - Use this as a call-to-action message to prompt the visitor to ask a question.
Show agent avatar on widget home: This will show the agent's profile picture on the Live Chat Widget.
Choose Custom Color - You can choose a custom color for your widget using a hex color code, e.g., #0265ff.
Widget Dark Theme - Select your widget theme here and choose to darken the icons and text.
Initial Button - Select the button type to initiate the widget.
Initial Button Position - Choose which area of your site you would like the widget to be placed.
Side Spacing (in px) - Choose the amount of space that you want from the side of the widget by inputting the number of pixels.
Bottom Spacing (in px) - Choose the amount of space that you want from the bottom of the widget by inputting the number of pixels.
Apply This Theme - Select this to make sure the theme is applied.
Advanced Options - This leads to the Translated Messages page.
Translated Messages - This feature allows you to change any of the default messaging that Acquire provides. Search for the message you would like to change, input the text of the changed message, and click "Save." It may take some time for the changes to take effect. If you need assistance, please contact us through live chat or email us at [email protected].
To upgrade your plan, click on the Upgrade Now button. Once you click on it, you will see the package comparison screen.
Once the Agent clicks on the Upgrade Now button, it will show the Agent's current plan and display all available plans. You can select any plan and upgrade it as required.
Under Plan Comparison, select the plan you would like to purchase for your team. You can also add agents to your plan and click "Upgrade Plan." Your final payment details will then be displayed; enter the payment method you want to pay with and finalise the payment.
Retrieves the complete history of the last 10 upgrades that were performed on the domain.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-upgrade-history is a paginated operation. Multiple API calls may be issued in order to retrieve the entire data set of results. You can disable pagination by providing the --no-paginate argument. When using --output text and the --query argument on a paginated response, the --query argument must extract data from the results of the following query expressions: UpgradeHistories
get-upgrade-history --domain-name <value> [--cli-input-json <value>] [--starting-token <value>] [--page-size <value>] [--max-items <value>] [--generate-cli-skeleton <value>]
--domain-name (string)
The name of an Elasticsearch domain. Domain names are unique across the domains owned by an account within an AWS region. Domain names start with a letter or number and can contain the following characters: a-z (lowercase), 0-9, and - (hyphen).
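For example, to retrieve the upgrade history for a domain named my-es-domain (a placeholder domain name):
aws es get-upgrade-history --domain-name my-es-domain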
UpgradeHistories -> (list)
A list of `` UpgradeHistory `` objects corresponding to each Upgrade or Upgrade Eligibility Check performed on a domain returned as part of `` GetUpgradeHistoryResponse `` object.
(structure)
History of the last 10 Upgrades and Upgrade Eligibility Checks.
UpgradeName -> (string)A string that describes the update briefly
StartTimestamp -> (timestamp)UTC Timestamp at which the Upgrade API call was made in "yyyy-MM-ddTHH:mm:ssZ" format.
UpgradeStatus -> (string)
The overall status of the update. The status can take one of the following values:
- In Progress
- Succeeded
- Succeeded with Issues
- Failed
StepsList -> (list)
A list of `` UpgradeStepItem `` s representing information about each step performed as pard of a specific Upgrade or Upgrade Eligibility Check.
(structure)
Represents a single step of the Upgrade or Upgrade Eligibility Check workflow.
UpgradeStep -> (string)
Represents one of 3 steps that an Upgrade or Upgrade Eligibility Check does through:
- PreUpgradeCheck
- Snapshot
- Upgrade
UpgradeStepStatus -> (string)
The status of a particular step during an upgrade. The status can take one of the following values:
- In Progress
- Succeeded
- Succeeded with Issues
- Failed
Issues -> (list)
A list of strings containing detailed information about the errors encountered in a particular step.
(string)
ProgressPercent -> (double)The Floating point value representing progress percentage of a particular step.
NextToken -> (string)
Pagination token that needs to be supplied to the next call to get the next page of results | https://docs.aws.amazon.com/cli/latest/reference/es/get-upgrade-history.html | 2019-12-06T03:26:58 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.aws.amazon.com |
9.1.04.002: Patch 2 for version 9.1.04
This release consolidates the hotfixes delivered for BMC Atrium Core version 9.1.04 and later into a single patch.
You must apply this patch after you upgrade all the servers in a server group to version 9.1.04, because the BMC Remedy Deployment Application that deploys the patch on an individual server is a part of version 9.1.04.
Enhancements
This patch provides the following product enhancements:
- With this release, you do not need to run the installer to apply the patch. The patch is offered as a deployable package and uses the BMC Remedy Deployment Application. For more information, see Applying a hotfix or a patch.
- This patch includes a REST API implementation for the notification engine that is used for integrating external products, such as TrueSight Operations Management. For more information on the new REST APIs, see Using BMC Atrium Core functions in an external application with REST API.
Defect fixes
This patch includes fixes for some customer defects. For more information about the defects fixed in this patch, see Known and corrected issues.
Applying the patch
To download and apply the D2P_cmdbAtrium_9.1.04-P180221-180620180221.zip patch file from the Electronic Product Download, see Downloading and applying the patch .
At the end of 2019, Quickbooks Online is deprecating their OAuth1 service in favor of using OAuth2. After December 17th, 2019, Quickbooks will be revoking all OAuth1 tokens and no new tokens will be granted. To avoid connection failures, you will need to migrate your OAuth1 QBO element instances to OAuth2.
Cloud Elements is happy to provide the following migration scripts that will handle this for you at this Github Link. Essentially, there are two scripts:
- the first to find all the OAuth1 QBO instances that need to be migrated
- the second to migrate those instances to OAuth2
You will need to supply the following pieces of information to the first script:
- User Secret
- Organization Secret
- Your Cloud Elements Environment (Staging or Prod)
You will need to supply the same information to the second script, as well as:
- OAuth 2.0 Client Id associated with your OAuth 1.0 application
- OAuth 2.0 Client Secret associated with your OAuth 1.0 application
- List of instances procured by running the first script | https://docs.cloud-elements.com/home/aba0260 | 2019-12-06T04:11:06 | CC-MAIN-2019-51 | 1575540484477.5 | [] | docs.cloud-elements.com |
6.4.1
Splunk Enterprise 6.4.1 was released on May 18, 2016.
The following issues have been resolved in this release. For information about security fixes not related to authentication or authorization, refer to the Splunk Security Portal.
Saved search, alerting, scheduling, and job management issues
Data input issues
Splunk Web and interface issues
Distributed deployment, forwarder, deployment server issues
Data model and pivot issues
PDF issues
Admin and CLI issues
Security issues
For a list of security issues, please see the Security Advisory. A list of all recent advisories can be found in the Security Portal.
Distributed search and search head clustering issues
Indexer and indexer clustering issues
Charting, reporting, and visualization issues
Unsorted issues
Search issues
This documentation applies to the following versions of Splunk® Enterprise: 6.4.1, 6.4.2, 6.4.3, 6.4.4, 6.4.5, 6.4.6, 6.4.7, 6.4.8, 6.4.9, 6.4.10, 6.4.11
Using XL Deploy reports
XL Deploy contains information about your environments, infrastructure, and deployments. Using the reporting functionality, you can gain insight into the state of your environments and applications. Reports are available to all users of XL Deploy.
Reports dashboard
When opening the Reports section for the first time, XL Deploy will show a high-level overview of your deployment activity.
The dashboard consists of three sections that each give a different view of your deployment history:
The reports show graphs of your deployment history and can be filtered by application or environment. Select the appropriate filter, and then click the applications or environments to include them in the report.
Note: If you change the name of an application that was previously deployed, you will not be able to access detailed reports about that application.
Exporting to CSV format
If you want to reuse data from XL Deploy in your own reporting, you can download report data as a CSV file by clicking the export button.
'Reports Dashboard'], dtype=object)
array(['/static/deployment-report-html5-9f9876b45ef6e7bd17049d154586cf22.png',
'Deployment report in HTML'], dtype=object)
array(['/static/reports-deployments-filtered-049ec230d5766e1b71dfdb37bed39442.png',
'Deployments filtered report'], dtype=object) ] | docs.xebialabs.com |
Connect to XL Deploy servers
To configure connections between XL Release and XL Deploy servers, select Settings > Shared configuration from the top menu and go to the XL Deploy Server section.
The XL Deploy server configuration is only available to users who have the Admin global permission.
To add a server:
Click Add XL Deploy Server.
In the Title box, enter a name for the server. This is how the server will be identified in XL Release.
In the URL box, enter the address at which the server is reachable. This is the same address you use to access the XL Deploy user interface.
In the Username and Password boxes, enter the credentials of the XL Deploy user that XL Release will use to log in. XL Release uses this user to query XL Deploy for the applications and environments that are available.
It is recommended that you create an XL Deploy user with read-only rights for XL Release to use. To perform deployments, specify a user and password directly in the XL Deploy task. This provides fine-grained access control from XL Release to XL Deploy. If you do not specify a user in the XL Deploy task, then XL Release will use the user configured on the XL Deploy server to perform deployments (provided that user has deployment rights in XL Deploy).
Click Test to test if XL Release can log in to the XL Deploy server with the configured address and credentials.
Click Save to save the server.
| https://docs.xebialabs.com/v.9.0/xl-release/how-to/configure-xl-deploy-servers-in-xl-release/ | 2019-12-06T04:21:16 | CC-MAIN-2019-51 | 1575540484477.5 | [array(['/static/xl-deploy-servers-52f74ac26b01fbaf98068275390c3535.png',
'XL Deploy server configuration'], dtype=object)
array(['/static/xl-deploy-server-details-a5430a69426d1c9a32d3836e6395458c.png',
'XL Deploy server configuration details'], dtype=object) ] | docs.xebialabs.com |
Activity Control - Advanced
Enabling/Disabling Activities
By default, all activities are enabled at all levels in the CommCell. An activity that is enabled at the CommCell level can still be disabled at lower levels. When disabling activities, the CommCell level has the highest precedence, while a subclient has the lowest precedence. If you disable an activity at the CommCell level, that activity is disabled throughout the CommCell regardless of the corresponding settings of the individual entities.
You can enable/disable activities at any point in time using the following steps:
- From the CommCell Browser, right-click the entity and select Properties.
- Click the Activity Control tab.
- Select or clear check boxes to enable or disable activities.
- Click OK.
Activities can be enabled or disabled at different levels in the CommCell, from the CommServe down to individual subclients.
Preventing Job History for Disabled Activities
You can add the JMDontStartBkpsOnDisabledAgents additional setting to prevent job history from being created for disabled activities. Without this additional setting, job history is created for disabled activities even though the job does not run.
Example: The Enable Backup check box on the Activity Control tab is cleared at the client level. If the JMDontStartBkpsOnDisabledAgents additional setting is enabled, a "Failed to Start" job does not appear in the job history for the client after the backup job runs.
Note: The JMDontStartBkpsOnDisabledAgents additional setting applies to scheduled jobs.
Procedure
- From the CommCell Browser, right-click CommServe and then click Properties.
- In the CommCell Properties dialog box, click the Additional Settings tab and then click Add.
- In the Add Additional Settings dialog box, enter the details for the additional setting:
- In the Name box, type JMDontStartBkpsOnDisabledAgents.
- In the Category box, select CommservDB.GxGlobalParam.
- In the Type box, select INTEGER.
- In the Value box, select 1 to enable the additional setting.
- Click OK.
- In the CommCell Properties dialog box, click OK.
For example, you can enable all job activities at the CommCell level after a time interval by using the option to enable the activity after a delay, which is available on the Activity Control tab of the CommServe properties.
Queuing Jobs for Disabled Activities
When an activity is disabled, you can queue all the jobs for the disabled activity using the following steps. These jobs will remain in the Job Controller in a queued state until the activity is re-enabled.
Use the following steps to queue jobs if activity control for the job types is disabled:
- From the CommCell Console ribbon, click the Home tab, and then click Control Panel.
- In the System section, click Job Management.
The Job Management dialog box appears.
- Select the Queue jobs if activity is disabled check box.
- Click OK.
CommCell Browser Icons for Activity Control
The icons associated with the CommServe, Clients and Agents entities in the CommCell Browser include information on the activity control status for backup and restore activities. When one of these activities is disabled, a red directional arrow overlays the entity icon as follows:
- The out-bound arrow indicates that backup activities are disabled.
- The in-coming arrow indicates that restore activities are disabled.
When both backup and restore activities are disabled, an 'x' overlays the right corner of the entity icon. For example, a Windows client that has both backup and restore activities disabled shows this 'x' overlay on its icon.
For a comprehensive list of all icons in the CommCell Console, see CommCell Console Icons. | http://docs.snapprotect.com/netapp/v10/article?p=features/activity_control/advanced.htm | 2022-01-29T01:08:34 | CC-MAIN-2022-05 | 1642320299894.32 | [] | docs.snapprotect.com |