content
stringlengths 0
557k
| url
stringlengths 16
1.78k
| timestamp
timestamp[ms] | dump
stringlengths 9
15
| segment
stringlengths 13
17
| image_urls
stringlengths 2
55.5k
| netloc
stringlengths 7
77
|
---|---|---|---|---|---|---|
Injectors in Angular have rules that you can leverage to achieve the desired visibility of injectables in your apps. By understanding these rules, you can determine in which NgModule, Component or Directive you should declare a provider.:
@Injectable()
providedInproperty to refer to
@NgModule(), or
root.
in the
@Injectable()metadata. You can do this to configure a non-default provider of a service that is shared with multiple apps.
Here is an example of the case where the component router configuration includes a non-default location strategy by listing its provider in the
providerslist.
When resolving a token for a component/directive, Angular resolves it in two phases:
ElementInjectorhierarchy (its parents).
Angular's resolution behavior can be modified with
@Optional(),
@Self(),
@SkipSelf() and
@Host(). Import each of them from
@angular/core and use each in the component class constructor when you inject your service.
For a working app showcasing the resolution modifiers that this section covers, see the resolution modifiers example.
Resolution modifiers fall into three categories:
@Optional()
@SkipSelf()
🌼.
@Component({ selector: 'app-self', templateUrl: './self.component.html', styleUrls: ['./self.component.css'], providers: [{ provide: FlowerService, useValue: { emoji: '🌼' } }] }) export class SelfComponent { constructor(@Self() public flower: FlowerService) {} }
@SkipSelf()
@SkipSelf() is the opposite of
@Self(). With
@SkipSelf(), Angular starts its search for a service in the parent
ElementInjector, rather than in the current one. So if the parent
ElementInjector were using the value
🌿 (fern) for
emoji , but you had
🍁 (maple leaf) in the component's
providers array, Angular would ignore
🍁 (maple leaf) and use
🌿 (fern).
To see this in code, assume that the following value for
emoji is what the parent component were using, as in this service:
export class LeafService { emoji = '🌿'; }
Imagine that in the child component, you had a different value,
🍁 (maple leaf) but you wanted to use the parent's value instead. This is when you'd use
@SkipSelf():
@Component({ selector: 'app-skipself', templateUrl: './skipself.component.html', styleUrls: ['./skipself.component.css'], // Angular would ignore this LeafService instance providers: [{ provide: LeafService, useValue: { emoji: '🍁' } }] }) export class SkipselfComponent { // Use @SkipSelf() in the constructor constructor(@SkipSelf() public leaf: LeafService) { } }
In this case, the value you'd get for
emoji would be
🌿 (fern), not
🍁 (maple leaf).
@SkipSelf()with
@Optional()
Use
@SkipSelf() with
@Optional() to prevent an error if the value is
null. In the following example, the
Person service is injected in the constructor.
@SkipSelf() tells Angular to skip the current injector and
@Optional() will prevent an error should the
Person service be
null.
class Person { constructor(@Optional() @SkipSelf() parent?: Person) {} }
@Host()
@Host() lets you designate a component as the last stop in the injector tree when searching for providers. Even if there is a service instance further up the tree, Angular won't continue looking. Use
@Host() as follows:
@Component({ selector: 'app-host', templateUrl: './host.component.html', styleUrls: ['./host.component.css'], // provide the service providers: [{ provide: FlowerService, useValue: { emoji: '🌼' } }] }) export class HostComponent { // use @Host() in the constructor when injecting the service constructor(@Host() @Optional() public flower?: FlowerService) { } }
Since
HostComponent has
@Host() in its constructor, no matter what the parent of
HostComponent might have as a
flower.emoji value, the
HostComponent will use
🌼 (yellow flower).
When you provide services in the component class, services are visible within the
ElementInjector tree relative to where and how you provide those services.
Understanding the underlying logical structure of the Angular template will give you a foundation for configuring services and in turn control their visibility.
Components are used in your templates, as in the following example:
<app-root> <app-child></app-child> </app-root>
Note: Usually, you declare the components and their templates in separate files. For the purposes of understanding how the injection system works, it is useful to look at them from the point of view of a combined logical tree. The term logical distinguishes it from the render tree (your application DOM tree). To mark the locations of where the component templates are located, this guide uses the
<#VIEW>pseudo element, which doesn't actually exist in the render tree and is present for mental model purposes only.
The following is an example of how the
<app-root> and
<app-child> view trees are combined into a single logical tree:
<app-root> <#VIEW> <app-child> <#VIEW> ...content goes here... </#VIEW> </app-child> <#VIEW> </app-root>
Understanding the idea of the
<#VIEW> demarcation is especially significant when you configure services in the component class.
@Component()
How you provide services via an
@Component() (or
@Directive()) decorator determines their visibility. The following sections demonstrate
providers and
viewProviders along with ways to modify service visibility with
@SkipSelf() and
@Host().
A component class can provide services in two ways:
providersarray
@Component({ ... providers: [ {provide: FlowerService, useValue: {emoji: '🌺'}} ] })
viewProvidersarray
@Component({ ... viewProviders: [ {provide: AnimalService, useValue: {emoji: '🐶'}} ] })
To understand how the
providers and
viewProviders influence service visibility differently, the following sections build a step-by-step and compare the use of
providers and
viewProviders in code and a logical tree.
NOTE: In the logical tree, you'll see
@Provide,
@Inject, and
@NgModule, which are not real HTML attributes but are here to demonstrate what is going on under the hood.
-
@Inject(Token)=>Valuedemonstrates that if
Tokenis injected at this location in the logical tree its value would be
Value.
-
@Provide(Token=Value)demonstrates that there is a declaration of
Tokenprovider with value
Valueat this location in the logical tree.
-
@NgModule(Token)demonstrates that a fallback
NgModuleinjector should be used at this location.
The example app has a
FlowerService provided in
root with an
emoji value of
🌺 (red hibiscus).
@Injectable({ providedIn: 'root' }) export class FlowerService { emoji = '🌺'; }
Consider a simple app with only an
AppComponent and a
ChildComponent. The most basic rendered view would look like nested HTML elements such as the following:
<app-root> <!-- AppComponent selector --> <app-child> <!-- ChildComponent selector --> </app-child> </app-root>
However, behind the scenes, Angular uses a logical view representation as follows when resolving injection requests:
<app-root> <!-- AppComponent selector --> <#VIEW> <app-child> <!-- ChildComponent selector --> <#VIEW> </#VIEW> </app-child> </#VIEW> </app-root>
The
<#VIEW> here represents an instance of a template. Notice that each component has its own
<#VIEW>.
Knowledge of this structure can inform how you provide and inject your services, and give you complete control of service visibility.
Now, consider that
<app-root> simply injects the
FlowerService:
export class AppComponent { constructor(public flower: FlowerService) {} }
Add a binding to the
<app-root> template to visualize the result:
<p>Emoji from FlowerService: {{flower.emoji}}</p>
The output in the view would be:
Emoji from FlowerService: 🌺
In the logical tree, this would be represented as follows:
<app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <p>Emoji from FlowerService: {{flower.emoji}} (🌺)</p> <app-child> <#VIEW> </#VIEW> </app-child> </#VIEW> </app-root>
When
<app-root> requests the
FlowerService, it is the injector's job to resolve the
FlowerService token. The resolution of the token happens in two phases:
@NgModule()to delegate the request to.
In the example case, the constraints are:
<#VIEW>belonging to
<app-root>and end with
<app-root>.
<app-root>
@Components are special in that they also include their own
viewProviders, which is why the search starts at
<#VIEW>belonging to
<app-root>. (This would not be the case for a directive matched at the same location).
AppModuleacts as the fallback injector when the injection token can't be found in the
ElementInjectors.
providersarray
Now, in the
ChildComponent class, add a provider for
FlowerService to demonstrate more complex resolution rules in the upcoming sections:
@Component({ selector: 'app-child', templateUrl: './child.component.html', styleUrls: ['./child.component.css'], // use the providers array to provide a service providers: [{ provide: FlowerService, useValue: { emoji: '🌻' } }] }) export class ChildComponent { // inject the service constructor( public flower: FlowerService) { } }
Now that the
FlowerService is provided in the
@Component() decorator, when the
<app-child> requests the service, the injector has only to look as far as the
<app-child>'s own
ElementInjector. It won't have to continue the search any further through the injector tree.
The next step is to add a binding to the
ChildComponent template.
<p>Emoji from FlowerService: {{flower.emoji}}</p>
To render the new values, add
<app-child> to the bottom of the
AppComponent template so the view also displays the sunflower:
Child Component Emoji from FlowerService: 🌻
In the logical tree, this would be represented as follows:
<app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <p>Emoji from FlowerService: {{flower.emoji}} (🌺)</p> <app-child @Provide( <!-- search ends here --> <#VIEW> <!-- search starts here --> <h2>Parent Component</h2> <p>Emoji from FlowerService: {{flower.emoji}} (🌻)</p> </#VIEW> </app-child> </#VIEW> </app-root>
When
<app-child> requests the
FlowerService, the injector begins its search at the
<#VIEW> belonging to
<app-child> (
<#VIEW> is included because it is injected from
@Component()) and ends with
<app-child>. In this case, the
FlowerService is resolved in the
<app-child>'s
providers array with sunflower 🌻. The injector doesn't have to look any further in the injector tree. It stops as soon as it finds the
FlowerService and never sees the 🌺 (red hibiscus).
viewProvidersarray
Use the
viewProviders array as another way to provide services in the
@Component() decorator. Using
viewProviders makes services visible in the
<#VIEW>.
The steps are the same as using the
providersarray, with the exception of using the
viewProvidersarray instead.
For step-by-step instructions, continue with this section. If you can set it up on your own, skip ahead to Modifying service availability.
The example app features a second service, the
AnimalService to demonstrate
viewProviders.
First, create an
AnimalService with an
emoji property of 🐳 (whale):
import { Injectable } from '@angular/core'; @Injectable({ providedIn: 'root' }) export class AnimalService { emoji = '🐳'; }
Following the same pattern as with the
FlowerService, inject the
AnimalService in the
AppComponent class:
export class AppComponent { constructor(public flower: FlowerService, public animal: AnimalService) {} }
Note: You can leave all the
FlowerServicerelated code in place as it will allow a comparison with the
AnimalService.
Add a
viewProviders array and inject the
AnimalService in the
<app-child> class, too, but give
emoji a different value. Here, it has a value of 🐶 (puppy).
@Component({ selector: 'app-child', templateUrl: './child.component.html', styleUrls: ['./child.component.css'], // provide services providers: [{ provide: FlowerService, useValue: { emoji: '🌻' } }], viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] }) export class ChildComponent { // inject service constructor( public flower: FlowerService, public animal: AnimalService) { } }
Add bindings to the
ChildComponent and the
AppComponent templates. In the
ChildComponent template, add the following binding:
<p>Emoji from AnimalService: {{animal.emoji}}</p>
Additionally, add the same to the
AppComponent template:
<p>Emoji from AnimalService: {{animal.emoji}}</p>
Now you should see both values in the browser:
AppComponent Emoji from AnimalService: 🐳 Child Component Emoji from AnimalService: 🐶
The logic tree for this example of
viewProviders is as follows:
> </#VIEW> </app-child> </#VIEW> </app-root>
Just as with the
FlowerService example, the
AnimalService is provided in the
<app-child>
@Component() decorator. This means that since the injector first looks in the
ElementInjector of the component, it finds the
AnimalService value of 🐶 (puppy). It doesn't need to continue searching the
ElementInjector tree, nor does it need to search the
ModuleInjector.
providersvs.
viewProviders
To see the difference between using
providers and
viewProviders, add another component to the example and call it
InspectorComponent.
InspectorComponent will be a child of the
ChildComponent. In
inspector.component.ts, inject the
FlowerService and
AnimalService in the constructor:
export class InspectorComponent { constructor(public flower: FlowerService, public animal: AnimalService) { } }
You do not need a
providers or
viewProviders array. Next, in
inspector.component.html, add the same markup from previous components:
<p>Emoji from FlowerService: {{flower.emoji}}</p> <p>Emoji from AnimalService: {{animal.emoji}}</p>
Remember to add the
InspectorComponent to the
AppModule
declarations array.
@NgModule({ imports: [ BrowserModule, FormsModule ], declarations: [ AppComponent, ChildComponent, InspectorComponent ], bootstrap: [ AppComponent ], providers: [] }) export class AppModule { }
Next, make sure your
child.component.html contains the following:
<p>Emoji from FlowerService: {{flower.emoji}}</p> <p>Emoji from AnimalService: {{animal.emoji}}</p> <div class="container"> <h3>Content projection</h3> <ng-content></ng-content> </div> <h3>Inside the view</h3> <app-inspector></app-inspector>
The first two lines, with the bindings, are there from previous steps. The new parts are
<ng-content> and
<app-inspector>.
<ng-content> allows you to project content, and
<app-inspector> inside the
ChildComponent template makes the
InspectorComponent a child component of
ChildComponent.
Next, add the following to
app.component.html to take advantage of content projection.
<app-child><app-inspector></app-inspector></app-child>
The browser now renders the following, omitting the previous examples for brevity:
//...Omitting previous examples. The following applies to this section. Content projection: This is coming from content. Doesn't get to see puppy because the puppy is declared inside the view only. Emoji from FlowerService: 🌻 Emoji from AnimalService: 🐳 Emoji from FlowerService: 🌻 Emoji from AnimalService: 🐶
These four bindings demonstrate the difference between
providers and
viewProviders. Since the 🐶 (puppy) is declared inside the <#VIEW>, it isn't visible to the projected content. Instead, the projected content sees the 🐳 (whale).
The next section though, where
InspectorComponent is a child component of
ChildComponent,
InspectorComponent is inside the
<#VIEW>, so when it asks for the
AnimalService, it sees the 🐶 (puppy).
The
AnimalService in the logical tree would look like this:
> <app-inspector> <p>Emoji from AnimalService: {{animal.emoji}} (🐶)</p> </app-inspector> </#VIEW> <app-inspector> <#VIEW> <p>Emoji from AnimalService: {{animal.emoji}} (🐳)</p> </#VIEW> </app-inspector> </app-child> </#VIEW> </app-root>
The projected content of
<app-inspector> sees the 🐳 (whale), not the 🐶 (puppy), because the 🐶 (puppy) is inside the
<app-child>
<#VIEW>. The
<app-inspector> can only see the 🐶 (puppy) if it is also within the
<#VIEW>.
This section describes how to limit the scope of the beginning and ending
ElementInjector using the visibility decorators
@Host(),
@Self(), and
@SkipSelf().
Visibility decorators influence where the search for the injection token begins and ends in the logic tree. To do this, place visibility decorators at the point of injection, that is, the
constructor(), rather than at a point of declaration.
To alter where the injector starts looking for
FlowerService, add
@SkipSelf() to the
<app-child>
@Inject declaration for the
FlowerService. This declaration is in the
<app-child> constructor as shown in
child.component.ts:
constructor(@SkipSelf() public flower : FlowerService) { }
With
@SkipSelf(), the
<app-child> injector doesn't look to itself for the
FlowerService. Instead, the injector starts looking for the
FlowerService at the
<app-root>'s
ElementInjector, where it finds nothing. Then, it goes back to the
<app-child>
ModuleInjector and finds the 🌺 (red hibiscus) value, which is available because the
<app-child>
ModuleInjector and the
<app-root>
ModuleInjector are flattened into one
ModuleInjector. Thus, the UI renders the following:
Emoji from FlowerService: 🌺
In a logical tree, this same idea might look like this:
<app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <app-child @Provide( <!-- With SkipSelf, the injector looks to the next injector up the tree --> </#VIEW> </app-child> </#VIEW> </app-root>
Though
<app-child> provides the 🌻 (sunflower), the app renders the 🌺 (red hibiscus) because
@SkipSelf() causes the current injector to skip itself and look to its parent.
If you now add
@Host() (in addition to the
@SkipSelf()) to the
@Inject of the
FlowerService, the result will be
null. This is because
@Host() limits the upper bound of the search to the
<#VIEW>. Here's the idea in the logical tree:
<app-root @NgModule(AppModule) @Inject(FlowerService) flower=>"🌺"> <#VIEW> <!-- end search here with null--> <app-child @Provide(FlowerService="🌻")> <!-- start search here --> <#VIEW @Inject(FlowerService, @SkipSelf, @Host, @Optional)=>null> </#VIEW> </app-parent> </#VIEW> </app-root>
Here, the services and their values are the same, but
@Host() stops the injector from looking any further than the
<#VIEW> for
FlowerService, so it doesn't find it and returns
null.
Note: The example app uses
@Optional()so the app does not throw an error, but the principles are the same.
@SkipSelf()and
viewProviders
The
<app-child> currently provides the
AnimalService in the
viewProviders array with the value of 🐶 (puppy). Because the injector has only to look at the
<app-child>'s
ElementInjector for the
AnimalService, it never sees the 🐳 (whale).
Just as in the
FlowerService example, if you add
@SkipSelf() to the constructor for the
AnimalService, the injector won't look in the current
<app-child>'s
ElementInjector for the
AnimalService.
export class ChildComponent { // add @SkipSelf() constructor(@SkipSelf() public animal : AnimalService) { } }
Instead, the injector will begin at the
<app-root>
ElementInjector. Remember that the
<app-child> class provides the
AnimalService in the
viewProviders array with a value of 🐶 (puppy):
@Component({ selector: 'app-child', ... viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] })
The logical tree looks like this with
@SkipSelf() in
<app-child>:
<app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW><!-- search begins here --> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService, SkipSelf=>"🐳")> <!--Add @SkipSelf --> </#VIEW> </app-child> </#VIEW> </app-root>
With
@SkipSelf() in the
<app-child>, the injector begins its search for the
AnimalService in the
<app-root>
ElementInjector and finds 🐳 (whale).
@Host()and
viewProviders
If you add
@Host() to the constructor for
AnimalService, the result is 🐶 (puppy) because the injector finds the
AnimalService in the
<app-child>
<#VIEW>. Here is the
viewProviders array in the
<app-child> class and
@Host() in the constructor:
@Component({ selector: 'app-child', ... viewProviders: [{ provide: AnimalService, useValue: { emoji: '🐶' } }] }) export class ChildComponent { constructor(@Host() public animal : AnimalService) { } }
@Host() causes the injector to look until it encounters the edge of the
<#VIEW>.
<app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW> <app-child> <#VIEW @Provide(AnimalService="🐶") @Inject(AnimalService, @Host=>"🐶")> <!-- @Host stops search here --> </#VIEW> </app-child> </#VIEW> </app-root>
Add a
viewProviders array with a third animal, 🦔 (hedgehog), to the
app.component.ts
@Component() metadata:
@Component({ selector: 'app-root', templateUrl: './app.component.html', styleUrls: [ './app.component.css' ], viewProviders: [{ provide: AnimalService, useValue: { emoji: '🦔' } }] })
Next, add
@SkipSelf() along with
@Host() to the constructor for the
Animal Service in
child.component.ts. Here are
@Host() and
@SkipSelf() in the
<app-child> constructor :
export class ChildComponent { constructor( @Host() @SkipSelf() public animal : AnimalService) { } }
When
@Host() and
SkipSelf() were applied to the
FlowerService, which is in the
providers array, the result was
null because
@SkipSelf() starts its search in the
<app-child> injector, but
@Host() stops searching at
<#VIEW>—where there is no
FlowerService. In the logical tree, you can see that the
FlowerService is visible in
<app-child>, not its
<#VIEW>.
However, the
AnimalService, which is provided in the
AppComponent
viewProviders array, is visible.
The logical tree representation shows why this is:
<app-root @NgModule(AppModule) @Inject(AnimalService=>"🐳")> <#VIEW @Provide( <!-- ^^@SkipSelf() starts here, @Host() stops here^^ --> <app-child> <#VIEW @Provide( <!-- Add @SkipSelf ^^--> </#VIEW> </app-child> </#VIEW> </app-root>
@SkipSelf(), causes the injector to start its search for the
AnimalService at the
<app-root>, not the
<app-child>, where the request originates, and
@Host() stops the search at the
<app-root>
<#VIEW>. Since
AnimalService is provided via the
viewProviders array, the injector finds 🦔 (hedgehog) in the
<#VIEW>.
ElementInjectoruse case examples
The ability to configure one or more providers at different levels opens up useful possibilities. For a look at the following scenarios in a working app, see the heroes use case examples.
Architectural reasons may lead you to restrict access to a service to the application domain where it belongs. For example, the guide sample includes a
VillainsListComponent that displays a list of villains. It gets those villains from a
VillainsService.
If you provided
VillainsService in the root
AppModule (where you registered the
HeroesService), that would make the
VillainsService visible everywhere in the application, including the Hero workflows. If you later modified the
VillainsService, you could break something in a hero component somewhere..
VillainService is a singleton with respect to
VillainsListComponent because that is where it is declared. As long as
VillainsListComponent does not get destroyed it will be the same instance of
VillainService but if there are multilple instances of
VillainsListComponent, then each instance of
VillainsListComponent will have its own instance of
VillainService.
Many applications allow users to work on several open tasks at the same time. For example, in a tax preparation application,:
Suppose that the
HeroTaxReturnComponent had logic to manage and restore changes. That would be a pretty easy task for a simple hero tax return. In the real world, with a rich tax return data model, the change management would be tricky. You could delegate that management to a helper service, as this example does.
The
HeroTaxReturnService
HeroTaxReturnService.); } }.
This won't work if the service is an application-wide singleton. Every component would share the same service instance, and each component would overwrite the tax return that belonged to another hero.
To prevent. capabilities suitable for whatever is going on in component (B).
Component (B) is the parent of another component (C) that defines its own, even more specialized provider for
CarServ).
For more information on Angular dependency injection, see the DI Providers and DI in Action guides.
© 2010–2020 Google, Inc.
Licensed under the Creative Commons Attribution License 4.0. | https://docs.w3cub.com/angular/guide/hierarchical-dependency-injection | 2021-04-10T14:33:59 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.w3cub.com |
Deploying in a testing or production environment
These instructions will guide the user through the UAT or production configuration to deploy the following components. They are intended for firms deploying Corda Enterprise.
- Corda Node
- Corda Vault
- Corda Bridge and Float components of the Corda Firewall
- Load Balancer (presenting 1 public IP to CorDapp)
- Optional Zookeeper Cluster (manage Corda Firewall component availability)
- HTTPS Proxy Server (Node registration, Network Map download, includes CRL checking)
- SOCKS4/5 Proxy Server (AMPQ over TLS Messaging)
There are alternative approaches to how these components are deployed. For the purposes of this document, the following diagrams represent the topology used.
Deployment scenarios for testing and production environments
When deploying Corda Enterprise in a testing environment the Node, Bridge, and Float components should be deployed in a non-HA configuration as shown in the following diagram.
When deploying Corda Enterprise in a production environment, the Node, Bridge, and Float components should be deployed in a high-availability configuration.
Deployment details
- Corda Nodes run in a Hot/Cold Setup.
- The Corda Node communicates with the Doorman (authentication) and Network Map (peer addresses) over HTTPS typically through an HTTP Proxy Server.
- The Corda Node communicates with peers on the Corda network communicating with Corda Firewall which has 2 parts, the Bridge and the Float. The Float and Bridge components also check the certificate revocation list.
- The Float’s job is to act as an inbound socket listener, capturing messages from peers and send them to the Bridge. The Float prevents the Node and Artemis server from being exposed to Peer Nodes.
- The Bridge captures the inbound messages and sends them to the shared Artemis queue. The Bridge is typically configured to route through a SOCKS5 Proxy Server and also manages outgoing messages from the Node to Peers on the Network. The firewall between the Bridge and the DMZ is configured as an outbound only firewall. When communicating to the float, the bridge initiates the connection.
- In an HA configuration Node A and Node B use a shared Artemis queue configured on an NFS mountpoint shared between VM A and VM B.
- R3 have tested Zookeeper to provide an election mechanism to determine which Bridge is up and chooses a Bridge to send messages to the shared Artemis queue.
- The Bridge can select an Active and Alternative Artemis queue for incoming messages and an Active and Alternative Float for outgoing messages based on a configuration file setting.
- R3 customers have tested F5 Load Balancer presenting 1 Float IP to the external world for incoming Peer to Peer Traffic.
- R3 customers have also deployed Azure/F5 Load Balancers presenting 1 Node IP for RPC client connections.
- Customers could use a solution like VCS cluster for Node A to Node B failover, though this configuration has not been tested by R3.
Installation steps
Installing Java 8 on the VM
Java 8 JDK should be installed on your virtual machine. Refer to your internal processes for installation procedures.
These are the configuration files that will be created during the process:
- Corda Node - node.conf: This configuration file contains settings for the following components and functions:
- Doorman
- Network Map
- Corda Bridge
- Vault Database
- RPC Port settings for client API
- P2P address for advertising Corda Node to Peers
- Crash shell port, user, and password
- Corda Firewall Bridge - firewall.conf: This configuration file contains specifies the location of:
- Artemis broker IP address and port
- Corda Float listening address and port
- Location of local JKS PKI authentication keys
- Corda Firewall Float - firewall.conf: This configuration file contains specifies the location of:
- Corda Float tunnel listening address and port (Bridge connecting address and port)
- Corda Float public endpoint
- Location of local JKS PKI authentication keys
You can find examples of firewall configuration files in Configuring the Corda Enterprise Firewall.
Installing the Corda Node
- Upload the appropriate
corda-<version>.jarfile to the Node root directory.
- In the root of your Node directory, create a folder called
/certificates.
- The network operator will provide you with a
network-root-truststore.jkswhich will be used for authentication during initial registration.
- Upload the
network-root-truststore.jksfile to this directory.
- In the root of your Node directory, create a folder called
cordapps. Upload your CorDapps to this folder.
Once your Node has been started it will contain the following files and directories:
additional-node-infos/ artemis/ brokers/ certificates/ cordapps/ drivers/ logs/ plugins -> drivers/ corda-<version>.jar network-parameters node.conf nodeInfo-XXXXXXXXX
This is a sample
node.conf which details a configuration connecting to the Corda UAT Network.
{ " }, }
Implementing the Corda Firewall PKI
In a bank environment there will typically be several layers of security protecting the firms data.
- The Corda Node may be deployed behind the inner DMZ (no access to the Internet)
- The Bridge Server may reside on a VM in front of the inner DMZ (not addressable from the Internet)
- The Corda Float may reside on a VM in the Outer DMZ (directly addressable from the Internet)
PKI Authentication
- Corda PKI Authentication issued by Corda Network can link the Node and Bridge i.e. the red keys indicated below truststore and sslkeystore
- Local PKI Authentication issued by separate CA will link the Bridge and Float i.e the purple keys indicated below trust and Bridge.
- Corda PKI Authentication will link the Node and Bridge and authenticate to Corda Network in the outside world. In other words, this permits mutual authentication between a Corda Node and its Peer Corda Nodes.
- Local PKI Authentication will link the Bridge and Float and allow a secure tunnel into the Float from the outside world. In other words, this permits mutual authentication between two software components, the Bridge and the Float.
Explanation of PKI Keys
Node Authentication
truststore.jks - this is the same trust store that the Node is bootstrapped with during initial registration. It contains the
cordarootca certificate - this is the public, root certificate of the entire network. It needs to be copied to the Bridge when it is set up. Note that the truststore is also dynamically copied from the Bridge to the Float at runtime (and is held in memory only on the Float). The truststore is used for authenticating Nodes that connect to the Bridge and Float.
Node to Bridge Connection
sslkeystore.jks is issued by the Node and contains just the Node’s TLS certificate. It needs to be installed on the Node and the Bridge. The Node-to-Bridge connection is mutually authenticated TLS, with sslkeystore used both sides to establish the secure tunnel and truststore.jks is required on each side to authenticate the connection.
Bridge to Float Connection
bridge.jks and
float.jks contain TLS certificates and their associated private keys. By convention they should be referred to as keystores. These TLS certificates are unrelated to any of the certificates issued by the Node. In our example documentation the Bridge & Float keys are issued by a stand-alone root certificate. This root certificate is stored in trust.jks. This is required for the Bridge and Float to authenticate each other
Generate Bridge and Float keystores
For Float and Bridge to communicate a tunnel keystore must be created. To create a tunnel keystore, run the following command:
java -jar corda-tools-ha-utilities-4.1.jar generate-internal-tunnel-ssl-keystores -p tunnelStorePass -e tunnelPrivateKeyPassword -t tunnelTrustpass
Bridge Installation
- Upload the
corda-firewall-4.1.jarto the /opt/cordabridge directory.
- In the /opt/cordabridge directory, create a softlink called
certificateslinked to /opt/corda/certificates
- In the /opt/cordabridge directory, make a directory called bridgecerts
- In the /opt/cordabridge directory, copy /opt/corda/network-parameters back to /opt/cordabridge
- In the /opt/cordabridge directory, create a file called firewall.conf
- Copy the files /opt/corda/temp/bridge.jks and /opt/corda/temp/trust.jks into the /opt/cordabridge/bridgecerts directory
This is a sample firewall.conf: = "<node-machine-address>:11005" // NB: for vmInfra2 swap artemisBrokerAddress and alternateArtemisBrokerAddresses. alternateArtemisBrokerAddresses = ["<node-machine-backup-address>:11005"] socksProxyConfig { version = SOCKS5 proxyAddress = "<socks-server>:1080" username = "proxyuser" password = "password" } } bridgeInnerConfig { floatAddresses = ["<float-machine-address>:12005", "<float-machine-backup-address>:12005"] // NB: for vmInfra2 change the ordering." crlCheckSoftFail = true } } haConfig { haConnectionString = "bully://localhost" // Magic URL enabling master via Artemis messaging, not Zookeeper } networkParametersPath = network-parameters // The network-parameters file is expected to be copied from the node registration phase and here is expected in the workspace folder.
Float Installation
- Create an /opt/cordafloat directory on your VM
- Upload the
corda-firewall-4.1.jarto the /opt/cordafloat directory.
- In the /opt/cordafloat directory, make a directory called floatcerts.
- In the /opt/cordafloat directory, create a file called float.conf.
- The keys were created in the Node VM so sftp from the Node VM to the Float VM and copy the files NodeVM:/opt/corda/temp/float.jks and /opt/corda/temp/trust.jks into the FloatVM:/opt/cordafloat/floatcerts directory.
- You now should have the correct non Corda PKI CA authentication in place between Bridge and Float.
This is a sample float.conf:
firewallMode = FloatOuter inboundConfig { listeningAddress = "<float-external-facing-address>:10002" } floatOuterConfig { floatAddress = "<float-bridge-facing-address>:12005"" crlCheckSoftFail = true } } networkParametersPath = network-parameters // The network-parameters file is expected to be copied from the node registration phase and here is expected in the workspace folder.
A full list of the parameters that can be utilized in these configuration files can be found in Configuring the Corda Enterprise Firewall.
Corda 3.x vs Corda 4.x Firewall Upgrade
In Corda 4.x it is possible to for multiple Nodes representing multiple identities to reside behind the same Corda Firewall. Details on setup can be found in Firewall upgrade.
Port Policy and Network Configuration
Connections with the Corda Network Doorman and Network Map services (inbound and outbound traffic) will be over HTTP/HTTPS on ports 80 and 443.
Connections with peer Corda Nodes (including Notaries) will happen on a peer-to-peer connection using AMQP/TLS typically in a port range of 10000 - 10099, though port use is determined by the Node owner.
Connections with local applications connecting with the CorDapp via the Corda Node happen over RPC.
Administrative logins with the Corda Node happen via ssh whose port is configured in the node.conf file, typically port 2222.
Suggested Work flow for Corda Node & Corda Firewall Installation
- Run ifconfig on Node VM.
- Run ifconfig on Bridge VM.
- Run ifconfig on Float VM.
- Ask your Infrastructure team to tell you public IP of load balancer/firewall.
- In node.conf p2pAddress put IP from question 4.
- In node.conf messagingServerAddress put local IP address of Node from question 1, or 0.0.0.0 for all interfaces.
- In Bridge.conf outboundconfig put IP address of Node from question 1.
- In Bridge.conf bridgeInnerConfig put IP address of 3, or ask infrastructure team what address is presented by firewall between Bridge and Float.
- In Float.conf floatOuterConfig put IP address from 3 which will be routed to from Node. If machine has one NIC use that address, if it has two then use the card that has permission for access from Bridge network.
- In Float.conf inboundConfig use IP address from 3 which faces the internet. If there is only one NIC use that value, if there are two check with Infrastructure which one is accessed from the load balancer.
- In Float.conf floatOuterConfig put IP address from 3 which will be routed to from Node. If machine has one NIC use that address, if it has two then use the card that has permission for access from Bridge network.
The following image may be helpful in ensuring alignment between the Node, Bridge and Float configuration files.
Proxy Configurations
You will likely need to establish proxy servers, one for HTTP connection to the Doorman and Network Map services, and Socks proxy to be used with the Corda Firewall for P2P communication Corda Nodes. Please note the examples below are for demonstration purposes only, it is assumed most financial institutions will already have Enterprise Proxy Server deployments in place and available for use by the Corda Firewall.
Using HTTP Proxy with Corda
Many financial institutions will use an HTTP Proxy Server to monitor connections going out to the Internet. Corda facilitates the use of an HTTP Proxy to access the Doorman & Network map via HTTPS GET requests.
The following is an example of how to set up a Squid Proxy Server and start the Corda Node to point to it as a “tunnel” to connect to Doorman and Network Map.
- Prerequisite is a VM 2 CPU Core & 2 GB RAM running Ubuntu 18.x.
- ssh into the VM where you want to install the Proxy Server and run the following:
sudo apt update sudo apt -y install squid
- Edit
/etc/squid/squid.confand add the following entries:
acl SSL_ports port 443 acl Safe_ports port 8080 acl CONNECT method CONNECT http_access allow all http_port 8080 refresh_pattern ^ftp: 1440 20% 10080 refresh_pattern ^gopher: 1440 0% 1440 refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 refresh_pattern (Release|Packages(.gz)*)$ 0 20% 2880 refresh_pattern . 0 20% 4320 debug_options ALL,3
- Once Squid is successfully installed run:
sudo systemctl start squid sudo systemctl enable squid sudo systemctl status squid
- If Squid starts successfully you will see an output similar to this:
cordaadmin@corda-firewall-proxies:~$ sudo systemctl status squid ● squid.service - LSB: Squid HTTP Proxy version 3.x Loaded: loaded (/etc/init.d/squid; generated) Active: active (running) since Wed 2019-03-13 18:44:10 UTC; 14min ago Docs: man:systemd-sysv-generator(8) Process: 14135 ExecStop=/etc/init.d/squid stop (code=exited, status=0/SUCCESS) Process: 14197 ExecStart=/etc/init.d/squid start (code=exited, status=0/SUCCESS) Tasks: 4 (limit: 4915) CGroup: /system.slice/squid.service ├─14261 /usr/sbin/squid -YC -f /etc/squid/squid.conf ├─14263 (squid-1) -YC -f /etc/squid/squid.conf ├─14265 (logfile-daemon) /var/log/squid/access.log └─14267 (pinger) Mar 13 18:44:10 corda-firewall-proxies systemd[1]: Starting LSB: Squid HTTP Proxy version 3. Mar 13 18:44:10 corda-firewall-proxies squid[14197]: * Starting Squid HTTP Proxy squid Mar 13 18:44:10 corda-firewall-proxies squid[14261]: Squid Parent: will start 1 kids Mar 13 18:44:10 corda-firewall-proxies squid[14197]: ...done. Mar 13 18:44:10 corda-firewall-proxies systemd[1]: Started LSB: Squid HTTP Proxy version 3.x Mar 13 18:44:10 corda-firewall-proxies squid[14261]: Squid Parent: (squid-1) process 14263
- At this point you can ssh to the VM where the Corda Node is installed and run the following command:
java -Dhttps.proxyHost=your-firewall-proxy -Dhttps.proxyPort=8080 -jar corda.jar
- If the Corda Node starts up successfully you can then check
/var/log/squid/access.logand you should see output as follows:
1552502594.525 70615 10.1.0.30 TCP_TUNNEL/200 30087 CONNECT netmap.uat.corda.network:443 - HIER_DIRECT/51.140.164.141 -
Using Socks Proxy with Corda Bridge
R3 strongly recommend the use of a SOCKS Proxy in conjunction with the Corda Firewall to access peers on the network for P2P communication.
SOCKS. By contrast an HTTP Proxy only understands HTTP traffic.
SOCKS works by establishing a TCP/IP connection with another server on the behalf of your client machine. Through this connection, traffic is routed between the client and the server, essentially anonymizing and encrypting your data and your information along the way.
SOCKS proxies provide an improvement over HTTP proxy in terms of speed of data delivery & by preventing data packets being mis-routed or mislabeled. This provides an overall improvement in terms of stability and avoiding data transfer errors that could otherwise happen.
The additional benefit of utilizing a SOCKS server is that it facilitates organizations enforce security policy and allow applications to reach legitimate external hosts through simple, centrally controlled rule-based settings.
socksProxyConfig { version = SOCKS5 proxyAddress = "PROXYSERVER:1080" userName = "user" password = "password" } | https://docs.corda.net/docs/corda-enterprise/4.6/node/deploy/env-prod-test.html | 2021-04-10T15:18:22 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../../resources/nonha.png', 'nonha'], dtype=object)
array(['../../resources/ha.png', 'ha'], dtype=object)
array(['../../resources/firewallpki.png', 'firewallpki'], dtype=object)] | docs.corda.net |
Configure maximum index size
Note: This topic is not relevant to SmartStore indexes. See Configure data retention for SmartStore indexes.
You can configure maximum index size in a number of ways:
To configure index storage size, you set attributes in indexes.conf. For more information on the attributes mentioned in this topic, read "Configure index storage".
Caution: While processing indexes, the indexer settings:
#. To control bucket storage across groups of indexes, use the
maxVolumeDataSizeMB attribute, described below. that concern.
# global settings # Inheritable by all indexes: No hot/warm directory (homePath)! | https://docs.splunk.com/Documentation/Splunk/8.1.2/Indexer/Configureindexstoragesize | 2021-04-10T15:26:09 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Weblate is a copylefted libre software web-based continuous localization system, used by over 1150 libre projects and companies in more than 115 countries.
Install it, or use the Hosted Weblate service at weblate.org.
Support¶
Weblate is a libre software with optional professional support and cloud hosting offerings. Check out for more information.
Documentation¶
To be found in the
docs directory of the source code, or
viewed online on
Installation¶
Setup instructions:
Bugs¶
Please report feature requests and problems
- Glossary
- Checks and fixups
- Searching
- Translation workflows
- Frequently Asked Questions
- Supported file formats
- Version control integration
- Weblate’s REST API
- Weblate Client
- Weblate’s Python API
Administrator docs
- Configuration instructions
- Weblate deployments
- Upgrading Weblate
- Backing up and moving Weblate
- Authentication
- Access control
- Translation projects
- Language definitions
- Continuous localization
- Licensing translations
- Translation process
- Checks and fixups
- Machine translation
- Addons
- Translation Memory
- Configuration
- Sample configuration
- Management commands
- Announcements
- Component Lists
- Optional Weblate modules
- Customizing Weblate
- Management interface
- Getting support for Weblate
- Legal documents
Application developer guide
- Starting with internationalization
- Integrating with Weblate
- Translating software using GNU Gettext
- Translating documentation using Sphinx
- Translating HTML and JavaScript using Weblate CDN
- Translation component alerts
- Building translators community
- Managing translations
- Reviewing strings
- Promoting the translation
- Translation progress reporting
Contributor docs
Change history
- Weblate 4.5.1
- Weblate 4.5
- Weblate 4.4.2
- Weblate 4.4.1
- Weblate 4.4
- Weblate 4.3.2
- Weblate 4.3.1
- Weblate 4.3
- Weblate 4.2.2
- Weblate 4.2.1
- Weblate 4.2
- Weblate 4.1.1
- Weblate 4.1
- Weblate 4.0.4
- Weblate 4.0.3
- Weblate 4.0.2
- Weblate 4.0.1
- Weblate 4.0
- Weblate 3.x series
- Weblate 2.x series
- Weblate 1.x series
- Weblate 0.x series | https://docs.weblate.org/en/weblate-4.5.1/index.html | 2021-04-10T14:12:27 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['https://s.weblate.org/cdn/Logo-Darktext-borders.png', 'Weblate'],
dtype=object) ] | docs.weblate.org |
Developer Documentation¶
This manual is meant for developers intending to develop apps for FreedomBox. It provides an API reference and a step-by-step tutorial for developing apps.
Note: If you are looking for documentation on using FreedomBox, please visit the FreedomBox Manual. You can also find a copy of the user manual in the help section of your FreedomBox. | https://docs.freedombox.org/ | 2021-09-16T16:05:44 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.freedombox.org |
SMTP Filters
Following are the settings that can be specified in the filters section of the X-SMTPAPI header. All filters and setting names must be lowercase.
- If you're enabling a Setting, also called a filter, via SMTPAPI, you are required to define all of the parameters for that Setting.
- If you enable a disabled setting, our system will not pull your settings for the disabled setting. You will need to define the settings in your X-SMTPAPI header Example: If you have a footer designed but disabled, you can't just enable it via the API; you need to define the footer in the API call itself.
- All filter names and setting names must be lowercase.
Some Settings are not listed here because they cannot be defined on a per-message basis. To update these other Settings, please refer to the Web API Filter Settings commands.
Sends a BCC copy of the email created in this transaction to the address specified.
{ "filters": { "bcc": { "settings": { "enable": 1, "email": "[email protected]" } } } }
Filter: bypass_list_management
This setting is very powerful, and can only be used on a per-message basis. Use with extreme caution. To learn more about the more granular bypass settings available in the v3 Mail Send API, see our Suppressions Overview documentation.
Some emails are too important to do normal list management checks, such as password resets or critical alerts. Enabling this filter will bypass the normal unsubscribe / bounce / spam report checks and queue the email for delivery.
{ "filters": { "bypass_list_management": { "settings": { "enable": 1 } } } }
Filter: clicktrack
Rewrites links in email text and html bodies to go through our webservers, allowing for tracking when a link is clicked on.
Example X-SMTPAPI Header Value
{ "filters": { "clicktrack": { "settings": { "enable": 1, "enable_text": true } } } }
Filter: dkim
Allows you to specify the domain to use to sign messages with DKIM certification. This domain should match the domain in the From address of your email.
Example X-SMTPAPI Header Value
{ "filters": { "dkim": { "settings": { "domain": "example.com", "use_from": false } } } }
Filter: footer
Inserts a footer at the bottom of the text and HTML bodies.
Example X-SMTPAPI Header Value
{ "filters": { "footer": { "settings": { "enable": 1, "text/html": "<p>Thanks,<br />The SendGrid Team<p>", "text/plain": "Thanks,\n The SendGrid Team" } } } }
Filter: ganalytics
Re-writes links to integrate with Google Analytics.
Example X-SMTPAPI Header Value
{ "filters": { "ganalytics": { "settings": { "enable": 1, "utm_source": "Transactional Email", "utm_medium": "email", "utm_content": "Reset Your Password", "utm_campaign": "Redesigned Transactional Messaging" } } } }
Filter: opentrack
If you don't use 'replace' this will insert an
<img> tag at the bottom of the html section of an email which will be used to track if an email is opened. If you choose to use 'replace', you can put the tracking pixel wherever you would like in the email and SendGrid will replace it at send time.
Example X-SMTPAPI Header Value
{ "filters": { "opentrack": { "settings": { "enable": 1, "replace": "%opentrack%" } } } }
Filter: spamcheck
Tests message with SpamAssassin to determine if it is spam, and drop it if it is.
Example X-SMTPAPI Header Value
{ "filters": { "spamcheck": { "settings": { "enable": 1, "maxscore": 3.5, "url": "" } } } }
Filter: subscriptiontrack
Inserts a subscription management link at the bottom of the text and html bodies or insert the link anywhere in the email.
If you wish to append an unsubscription link, use the
text/html and
text/plain parameters. However, if you wish to have the link replace a tag (such as
[unsubscribe]), use the
replace parameter.
The
landing argument cannot be used in SMTPAPI. It can only be setup via the UI or WebAPI, as an account-level setting.
Example X-SMTPAPI Header Value
{ "filters": { "subscriptiontrack": { "settings": { "text/html": "If you would like to unsubscribe and stop receiving these emails <% click here %>.", "text/plain": "If you would like to unsubscribe and stop receiving these emails click here: <% %>.", "enable": 1 } } } }
Filter: templates
This setting refers to SendGrid's transactional templates. SendGrid supports versioning, and the ability to create multiple transactional templates. Previously, we had a Template App, which is now referred to as the Legacy Template App.
Uses a transactional template when sending an email.
Example X-SMTPAPI Header Value
{ "filters": { "templates": { "settings": { "enable": 1, "template_id": "5997fcf6-2b9f-484d-acd5-7e9a99f0dc1f" } } } }
Filter: template
This setting refers to our original Email Template app. We now support more fully featured transactional templates. You may create multiple transactional templates that allow for versioning, in addition to several other features.
Wraps a template around your email content. Useful for sending out marketing email and other nicely formatted messages.
Example X-SMTPAPI Header Value
{ "filters": { "template": { "settings": { "enable": 1, "text/html": "<html><head></head><body bgcolor='pink'><div style='width:200px' bgcolor='#FFF'><% body %></div></body></html>" } } } }
Need some help?
We all do sometimes; code is hard. Get help now from our support team, or lean on the wisdom of the crowd browsing the SendGrid tag on Stack Overflow. | https://docs.sendgrid.com/for-developers/sending-email/smtp-filters | 2021-09-16T15:02:53 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.sendgrid.com |
to do when the contents to be rendered is too large to fit within the area given.
using UnityEngine;
public class Example : MonoBehaviour { // Prints how is managed the text when the contents rendered // are too large to fir in the area given.
void OnGUI() { Debug.Log(GUI.skin.button.clipping); } }
Did you find this page useful? Please give it a rating: | https://docs.unity3d.com/2019.1/Documentation/ScriptReference/GUIStyle-clipping.html | 2021-09-16T16:22:36 | CC-MAIN-2021-39 | 1631780053657.29 | [] | docs.unity3d.com |
The Microsoft Azure Security Technologies exam enables the candidate to implement security controls and threat protection as per the standard operating procedure. For a better understanding of the skills that are measured with the Microsoft AZ-500 exam, check out the following skills measured points.
How to manage identity and access 20-25%
How to implement platform protection 35-40%
How to manage different security operations 15-20%
How to secure data and applications 30-35%
These are the skills measured domains that are cited above.
If any skills not mention plz add it and also explain the recommended learning study material to pass this challenging exam.
thanks | https://docs.microsoft.com/en-us/answers/questions/37409/what-skills-are-measured-with-microsoft-az-500-exa.html | 2021-04-10T16:15:07 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.microsoft.com |
.
Old and New Methods.
Group Command Helper
MongoDB\Collection does not yet have a helper method for the
group command; however, it is planned in
PHPLIB-177. The following example demonstrates how to execute a group
command using the
MongoDB\Database::command() method:
MapReduce Command Helper
MongoDB\Collection does not yet have a helper method for the
mapReduce command; however, that is
planned in PHPLIB-53. The following example demonstrates how to execute
a mapReduce command using the
MongoDB\Database::command() method:
DBRef Helpers
MongoDB\Collection does not yet have helper methods for working
with DBRef objects; however, that is
planned in PHPLIB-24.). | https://docs.mongodb.com/php-library/v1.2/upgrade/ | 2021-04-10T15:06:07 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.mongodb.com |
You can define custom goals to share an overview of your financial plan with your community and to track your progress.
Collective Goals will be sent in the emails sent to your contributors, and you can adjust their visibility on your Collective page.
To create your own Collective goals, go to your Collective profile, click on Settings and then on Collective Goals.
To create a goal, give it a title, categorize it as either a goal balance or yearly budget, specify the amount you need and add a description. Don't forget to save your changes when you're done!
You can also control contributions and tiers by accessing the Tiers section.
You can define whether you can to allow custom contributions or not by manipulating the Enable custom contributions checkbox.
You can also create custom tiers on the same page. Click on add another tier to activate the following form:
Generic, membership, service, product, donation.
e.g. Gold Sponsor, Member, Supporter. This is the only required field to add a new Tier with the default values; all others are optional.
Important information for Financial Contributors such as what does this tier mean or which rewards are included.
Fixed or flexible. Flexible amounts can be adjusted by users as they wish.
Suggestions for contributors to pick from.
A preset starting amount.
The minimum amount allowed for this contribution, if any.
The frequency in which such contribution will be charged. It can be a one time contribution, a monthly one, or even yearly.
For limited edition contributions (such as special edition items, one-time events, etc).
The action word on the Tier card users click (such as donate, join, contribute). You can also add emojis if you wish!
Target amount you're trying to raise in this Tier.
If you want to force the creation of a page for this tier. Here's what it looks like:
You can access it from your Collective page by clicking on Read more.
To add more Tiers, click the blue "add another tier" button.
Don't forget to save your changes so they show up on your Collective page.
A great way of having more companies supporting your project is by offering them a support tier, where you offer access to support in exchange for financial contributions. Companies may or may not use it but it gives them the peace of mind to have someone who knows the code base very available to help them within a reasonable time frame.
An example is Babel's Base Support Sponsor tier.
Here is a template for a basic support tier for your collective:
Become a Base Support Sponsor with a yearly donation of $24,000 ($2k/month paid in full upfront). Get your logo on our README on Github and on the front page of our website, as well as access to up to 2 hours of support per month. Support will be remote with the option of a share screen or via private chat.
With the above tier example, your responsibilities as a Collective are:
Adding the sponsor's logo on your website
Merging the Open Collective's PR on your readme to show their logo (or doing one yourself)
Engaging with them to provide the support you've agreed to via the channels specified
The Sponsor's responsibilities are:
Contacting you with via the specified channels
Limiting their support requirements to the time slots specified
Scheduling the time slots at least a week in advance
It depends on a lot of things but mainly on how critical is your open source project to the organization (which is arguably quite subjective). We recommend to start at $500/month with a yearly commitment ($6k up front) for access to two hours of support a month. As you grow, you can start increasing the price.
It's easier for companies to budget services for the year. It also ensures that they contribute to the maintenance of the project.
By subscribing to a support tier, you not only get private time for your team to discuss the open source project that you are depending on, but you are also helping the collective to maintain it. It's a great way to balance your interests with the interests of the community.
Yes, you will receive an invoice emitted by the host of the collective (Open Source Collective 501c6).
Yes, you can pay with a credit card or we can work with you to go through your Purchase Order process. Just reach out to us: [email protected]
Please reach out to us and we will work it out ([email protected]). | https://docs.opencollective.com/help/collectives/tiers-goals | 2021-04-10T14:38:43 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.opencollective.com |
Welcome to Splunk Enterprise 8.1
If you are new to Splunk Enterprise, read the Splunk Enterprise Overview..
What's New in 8.1.0.1
Splunk Enterprise 8.1.0.1 was released on November 20, 2020. It resolves the issue described in Fixed issues.
What's New in 8.1.1
Splunk Enterprise 8.1.1 was released on December 8, 2020. It introduces the following enhancements and resolves the issues described in Fixed issues..1
Feedback submitted, thanks! | https://docs.splunk.com/Documentation/Splunk/8.1.1/ReleaseNotes/MeetSplunk | 2021-04-10T14:08:54 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
New or Recently Updated Tutorials¶
New for QuantumATK Q-2019.12¶
New for QuantumATK P-2019.03¶
New for QuantumATK O-2018.06¶
- Dynamical Matrix study object: Phonons in bulk silicon
- Formation energies and transition levels of charged defects
- Relaxation of devices using the OptimizeDeviceConfiguration study object
- Electrical characteristics of devices using the IVCharacteristics study object
- Photocurrent in a silicon p-n junction
New for QuantumATK 2017.0¶
- DFT-1/2 and DFT-PPS density functional methods for electronic structure calculations
- Introducing the QuantumATK plane-wave DFT calculator
- Metadynamics Simulation of Cu Vacancy Diffusion on Cu(111) - Using PLUMED
- Determination of low strain interfaces via geometric matching
- Open-circuit voltage profile of a Li-S battery: ReaxFF molecular dynamics | http://docs.quantumatk.com/tutorials/new_or_updated_tutorials.html | 2021-04-10T14:05:31 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.quantumatk.com |
Trace: • GPRS series module topic
GPRS series module topic
GPRS series module topic
A9 is the latest GSM/GPRS module launched by Essence..
Selection table
Please poke the full range of modules to purchase:Anxinke official Taobao shop
Main specifications
A9/A9G Module
-);
Debugging related
AT command debugging
- A9/A9G AT command operation examples (online documentation, will continue to update and debug DEMO), refer to the following documents)
- A9/A9G 10-minute docking with Gizwits to create your own product tutorial:
Secondary development
- CSDK_Github open source information: Open Source Information
- C_SDK development kit: development kit
-
Firmware related
- GPRS series module upgrade guide: GPRS 系列模组升级指南
- GPS series module upgrade guide: GPS 系列模组升级指南
Resource summary
Anxinke Information
- User Manual (user_manual): A9/A9G User Manual (A9/A9G hardware design can refer to the A9G development board schematic diagram)
- AT instruction set (at_instruction_set): A9/A9G AT instruction set A9/A9G AT_Command_series
- Hardware information (hardware_info): A9/A9G_hardware_infoCA-01 hardware package
- Debugging tool (serial_tool): Serial debugging tool
- FAQ document: 安信可gps_gprs_a9g常见问题faq.pdf
- Precautions for mass production: GPRS mass production precautions
- Module certificate: GPRS系列模组证书
Guokewei original factory information
- GK9501 related information: gk9501_doc_tool.7z
Pudding Series Development Board
- A9G development board: Pudding 系列开发板-A9G开发板资料
- A9 development board: Pudding 系列开发板-A9开发板资料
Other resources
Business: 18022036575 (same number on WeChat) | https://docs.ai-thinker.com/en/gprs%E6%A8%A1%E7%BB%84%E4%B8%93%E9%A2%982 | 2021-04-10T14:58:54 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.ai-thinker.com |
DHCP vendor deployment options configure options specific to a pre-defined vendor profile.
Vendor profiles and option definitions are created as described in DHCP vendor profiles and options, but they are not defined with any values. Values are assigned when the vendor deployment options are added..
-. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Adding-DHCP-vendor-deployment-options/8.3.1 | 2021-04-10T14:49:12 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bluecatnetworks.com |
Using the BMC Event Adapters
This topic discusses about how to enable, disable, start, and stop BMC Event Adapters.
The following event adapters are available:
- LogFile Adapter - The LogFile Adapter is a file reader that can be used with any text file containing records that can be recognized by Perl regular expressions that describe the record and the record separator. For more information, see LogFile Adapter.
- SNMP Adapter - The SNMP (Trap) Adapter consists of a UDP SNMP server listening for SNMP traps. It includes the SNMP Adapter Configuration Manager to convert information from Management Information Base (MIB) files into cell classes, and map and enumeration configuration files used to format traps into cell events. For more information, see SNMP Adapter.
- Perl EventLog Adapter for Windows - The Perl EventLog Adapter for Windows is a Windows-only adapter written in Perl that runs in the mcxa process. It monitors the system, security, and application events generated by a Windows operating system, translates the events, and forwards the events to a cell. For more information, see Perl EventLog Adapter for Windows.
- IP Adapters - The IP Adapters use the various protocols of the IP protocol suite to establish connections with programs from which you might want to collect cell event data. For more information, see IP Adapters.
- BMC Event Log Adapter for Windows - The BMC Event Log Adapter for Windows (BMC ELA) monitors the system, security, and application events generated by the Windows operating system, translates the events, and forwards the events to a cell. For more information, see BMC Event Log Adapter for Windows.
- User-defined adapters. For more information, see User-defined adapters.
For more information about BMC event adapters, see the following:
Was this page helpful? Yes No Submitting... Thank you | https://docs.bmc.com/docs/tsim105/using-the-bmc-event-adapters-616455733.html | 2021-04-10T15:24:49 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bmc.com |
This feature is part of our Machine Learning APIs..
Requests for similarity search are asynchronous.
You can read more about the feature implementation in theSetSelector object specifiesKey}
..."productSetSelectors": [{ "projectKey": "{projectKey}","productIds": ["{product_id}"] },{ "projectKey": "sunrise" }]...
Compare every product with type
{product_type_id_1} with products with type
{product_type_id_2}
..."productSetSelectors": [{ "projectKey": "{projectKey}","productTypeIds": ["{product_type_id_1}"] },{ "projectKey": "{projectKey}","productTypeIds": ["{product_type_id_2}"] }]...
Compare the staged version of product
{product_id_1} with the current version of products
{product_id_2} and
{product_id_3}
..."productSetSelectors": [{ "projectKey": "{projectKey}","productIds": ["{product_id_1}"],"staged": true },{ "projectKey": "{projectKey}","productIds": ["{product_id_2}", "{product_id_3}"],"staged": false }]...
Product Comparisons
The maximum number of allowed product comparisons is 20,000,000. The calculation of the number of comparisons depends on the type of the request.
If the two specified
productSetSelectors are identical, each product in the first ProductSet will be compared to all the products in the second ProductSet
excluding itself resulting in (N choose 2) comparisons, where N is the total number of products in one ProductSet.
If the two specified
productSetSelectors are different, every product in one ProductSet is compared to every product in the other ProductSet, resulting in (N1 * N2) total comparisons.}...
Representations
ProductSet(that means use current ProductData).
includeVariants- Boolean - Optional
Specifies use of product variants. If set to
true, all product variants are compared, not just the master variant. Default:
false.
productSetLimit- Integer - Optional
Maximum number of products to check (if unspecified, all products are considered). Note that the maximum number of product comparisons between two productSets is 20,000,000. This limit cannot be exceeded. If you need a higher limit, contact Support Portal.
Default ProductSetSelector
Compare all products within a project.
[{ "projectKey": "{projectKey}" }, { "projectKey": "{projectKey}" }]
offset- Number - Optional
language- String Default:
enIETF language tag language tag used to prioritize language for text comparisons.
currencyCode- String - Default:
EUR
The three-digit ISO 4217 currency code to compare prices in. When a product has multiple prices, all prices for the product are converted to the currency provided by the currency attribute and the median price is calculated for comparison. Currencies are converted using the European Central Bank (ECB) currency exchange rates at the time the request is made. Of theSetSelector - Optional
Default: Default ProductSet- The SimilarityMeasures used in this search.
Endpoints
Initiation Endpoint
Host:-{region}.europe-west1.gcp.commercetools.com
Endpoint:
/{projectKey}/similarities/products
Method:
OAuth 2.0 Scopes:
view_products:{projectKey}
Response Representation: TaskToken
Request Representation: SimilarProductSearchRequest
Status Endpoint
Host:-{region}.europe-west1.gcp.commercetools.com
Endpoint:
/{projectKey}/similarities/products/status/{task_id}
Method:
GET
OAuth 2.0 Scopes:
view_products:{projectKey}
Response Representation: TaskStatus of a PagedQueryResult with
results containing an array of SimilarProductPairs, sorted by confidence scores in descending order and the meta information of SimilarProductSearchRequestMeta.
Examples
curl -X POST-{mlRegion}.europe-west1.gcp.commercetools.com/{projectKey}/similarities/products \-H "Content-Type: application/json" \-H 'Authorization: Bearer {access_token}' \-d \'{"limit" : 3,"similarityMeasures" : {"name": 1},"productSetSelectors" : [{"projectKey": "{projectKey}","productTypeIds": [ "8b50b0b0-8091-8e32-4601-948a8b504606" ],"staged": true},{"projectKey": "{projectKey}","productTypeIds": [ "46068292-4a41-4601-948a-948a8b508b50" ],"staged": true}]}'
{"taskId": "078b4eb3-8e29-1276-45b1-8964cf118707","location": "/{projectKey}/similarities/products/078b4eb3-8e29-1276-45b1-8964cf118707"}
curl -sH 'Authorization: Bearer {access_token}'-{mlRegion}.europe-west1.gcp.commercetools.com/{projectKey}}}}} | https://docs.commercetools.com/api/projects/similarProducts | 2021-04-10T14:53:46 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.commercetools.com |
Cannot Run QuantumATK P-2019.03¶
If you receive the license error below or in general have any issue stating QuantumATK P-2019.03 please verify the following:
- You must use the Synopsys Common Licensing (SCL) software, version 2018.06 or later,
- You must have the license key files issued later than December 10, 2018
you can retrieve both SCL installers and licence key from the SolvNet portal
License errors:
FlexNet Licensing checkout error: SIGN= keyword required but missing from the license certificate. This is probably because the license is older than the application You need to obtain a SIGN= version of this license from your vendor
For more information please visit the License Installation Guide. | https://docs.quantumatk.com/faq/faq_licensing_norun.html | 2021-04-10T14:59:38 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.quantumatk.com |
Overview
Open the Builder and try it out!
Take a look at our video tutorials
Or read the documentation
The SDK
Follow the SDK 101 tutorial for a quick crash course.
Take a look at the escape room video tutorial series.
Or read the documentation
Shortcuts
Several libraries are built upon the Decentraland SDK to help you build faster, see the full list in the Awesome Repository
SDK Scene examples
See scene examples for more scene examples.
Also see tutorials for detailed instructions for building scenes like these. | https://docs.decentraland.org/development-guide/content-intro/ | 2021-04-10T15:18:39 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/images/home/1.png', None], dtype=object)
array(['/images/home/2.png', None], dtype=object)
array(['/images/home/3.png', None], dtype=object)] | docs.decentraland.org |
idf DICTFILE1 [DICTFILE2 ...] --out IDFILEbuild IDF file from one or several dictionary dumps. Additional parameter
-skip-uniqwill skip unique (df=1) words.
-
manticore.conffile format.
--dumpheader INDEXNAMEdumps index header by index name with looking up the header path in the configuration file.
--dumpdict INDEXNAMEdumps dictionary. Additional
-statsswitch will dump to dictionary the total number of documents. It is required for dictionary files that are used for creation of IDF files.
--dumpdocids INDEXNAMEdumps document IDs by index name.
- manticore.conf, and not the index header.
--mergeidf NODE1.idf [NODE2.idf ...] --out GLOBAL.idfmerge several .idf files into a single one. Additional parameter
-skip-uniqwill skip unique (df=1) words.
-.
--rotateworks only with
--checkand defines whether to check index waiting for rotation, i.e. with .new extension. This is useful when you want to check your index before actually using it.
--apply-killlistsloads and applies kill-lists for all indexes listed in the config file. Changes are saved in .SPM files. Kill-list files (.SPK) are deleted. This can be useful if you want to move applying indexes from daemon startup to indexing stage. | https://docs.manticoresearch.com/3.2.2/html/command_line_tools_reference/indextool_command_reference.html | 2021-04-10T14:27:16 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.manticoresearch.com |
Proper alignment of business and IT stakeholders helps to overcome migration roadblocks and accelerate migration efforts. This article provides recommended steps for:
- Stakeholder alignment
- Migration planning
- Deploying a landing zone
- Migrating your first 10 workloads
It also helps you implement proper governance and management processes.
Use this guide to streamline the processes and materials required for aligning an overall migration effort. The guide uses the methodologies of the Cloud Adoption Framework that are highlighted in this illustration.
If your migration scenario is atypical, you can get a personalized assessment of your organization's migration readiness by using the strategic migration and readiness tool (SMART) assessment. Use it to identify the guidance that best aligns to your current needs.
The technical effort and process required to migrate workloads is relatively straightforward. It's important to complete the migration process efficiently. Strategic migration readiness has an even bigger impact on the timelines and successful completion of the overall migration.
To accelerate adoption, you must take steps to support the cloud adoption team during migration. This guide outlines these iterative tasks to help customers start on the right path toward any cloud migration. To show the importance of the supporting steps, migration is listed as step 10 in this article. In practice, the cloud adoption team is likely to begin their first pilot migration in parallel with steps 4 or 5.
Step 1: Align stakeholders
To avoid common migration blockers, create a clear and concise business strategy for migration. Stakeholder alignment on motivations and expected business outcomes shapes decisions made by the cloud adoption team.
- Motivations: The first step to strategic alignment is to gain agreement on the motivations that drive the migration effort. Start by understanding and categorizing motivations and common themes from various stakeholders across business and IT.
- Business outcomes: After motivations are aligned, it's possible to capture the desired business outcomes. This information provides clear metrics you can use to measure the overall transformation.
Deliverables:
- Use the strategy and plan template to record motivations and desired business outcomes.
Step 2: Align partner support
Partners, Microsoft Services, or various Microsoft programs are available to support you throughout the migration process.
- Understand partnership options to find the right level of partnership and support.
Deliverables:
- Establish terms and conditions or other contractual agreements before you engage supporting partners.
- Identify approved partners in the strategy and plan template.
Step 3: Gather data and analyze assets and workloads
Use discovery and assessment to improve technical alignment and create an action plan for executing your strategy. During this step, validate the business case using data about the current state environment. Then perform quantitative analysis and a deep qualitative assessment of the highest priority workloads.
- Inventory existing systems: Use a programmatic data-driven approach to understand the current state. Discover and gather data to enable all assessment activities.
- Incremental rationalization: Streamline assessment efforts to focus on a qualitative analysis of all assets, possibly even to support the business case. Then add a deep qualitative analysis for the first 10 workloads to be migrated.
Deliverables:
- Raw data on existing inventory.
- Quantitative analysis on existing inventory to refine the business justification.
- Qualitative analysis of the first 10 workloads.
- Business justification documented in the strategy and plan template.
Step 4: Make a business case
Making the business case for migration is likely to be an iterative conversation among stakeholders. In this first pass at building the business case, evaluate the initial high-level return from a potential cloud migration. The goal of this step is to ensure that all stakeholders align around one simple question: based on the available data, is the overall adoption of the cloud a wise business decision?
- Building a cloud migration business case is a good starting point for developing a migration business case. Clarity on formulas and tools can aid in business justification.
Deliverables:
- Use the strategy and plan template to record business justification.
Step 5: Create a migration plan
A cloud adoption plan provides an accelerated approach to developing a project backlog. The backlog can then be modified to reflect discovery results, rationalization, needed skills, and partner contracting.
- Cloud adoption plan: Define your cloud adoption plan using the basic template.
- Workload alignment: Define workloads in the backlog.
- Effort alignment: Align assets and workloads in the backlog to clearly define effort for prioritized workloads.
- People and time alignment: Establish iteration, velocity (people's time), and releases for the migrated workloads.
Deliverables:
- Deploy the backlog template.
- Update the template to reflect the first 10 workloads to be migrated.
- Update people and velocity to estimate release timing.
- Timeline risks:
- Lack of familiarity with Azure DevOps can slow the deployment process.
- Complexity and data available for each workload can also affect timelines.
Step 6: Build a skills readiness plan
Existing employees can play a hands-on role in the migration effort, but additional skills might be required. In this step, find ways to develop those skills or use partners to add to those skills.
- Build a skills-readiness plan. Quickly evaluate your existing skills to identify what other skills the team should develop.
Deliverables:
- Add a skills-readiness plan to the strategy and plan template.
Step 7: Deploy and align a landing zone
All migrated assets are deployed to a landing zone. The landing zone start simple to support smaller workloads, then scales to address more complex workloads over time.
- Choose a landing zone: Find the right approach to deploying a landing zone based on your adoption pattern. Then deploy that standardized code base.
- Expand your landing zone: Whatever your starting point, identify gaps in the deployed landing zone and add required components for resource organization, security, governance, compliance, and operations.
Deliverables:
- Deploy your first landing zone for deploying initial low-risk migrations.
- Develop a refactoring plan with the cloud center of excellence or the central IT team.
- Timeline risks:
- Governance, operations, and security requirements for the first 10 workloads can slow this process.
- Refactoring the first landing zone and subsequent landing zones takes longer, but it should happen in parallel with migration efforts.
Step 8: Migrate your first 10 workloads
The technical effort required to migrate your first 10 workloads is relatively straightforward. It's also an iterative process that you repeat as you migrate more assets. In this process, you assess your workloads, deploy your workloads, and then release them to your production environment.
Cloud migration tools enable migrating all virtual machines in a datacenter in one pass or iteration. It's more common to migrate a smaller number of workloads during each iteration. Breaking up the migration into smaller increments requires more planning, but it reduces technical risks and the impact of organizational change management.
With each iteration, the cloud adoption team gets better at migrating workloads. These steps help the technical team mature their capabilities:
- Migrate your first workload in a pure infrastructure as a service (IaaS) approach by using the tools outlined in the Azure migration guide.
- Expand tooling options to use migration and modernization by using the migration examples.
- Develop your technical strategy by using broader approaches outlined in Azure cloud migration best practices.
- Improve consistency, reliability, and performance through an efficient migration-factory approach as outlined in Migration process improvements.
Deliverables:
Continuous improvement of the adoption team's ability to migrate workloads.
Step 9: Hand off production workloads to cloud governance
Governance is a key factor to the long-term success of any migration effort. Speed to migration and business impact is important. But speed without governance can be dangerous. Your organization needs to make decisions about governance that align to your adoption patterns and your governance and compliance needs.
- Governance approach: This methodology outlines a process for thinking about your corporate policy and processes. After determining your approach, you can build the disciplines required to enable governance across your enterprise cloud adoption efforts.
- Initial governance foundation: Understand the disciplines needed to create a governance minimum viable product (MVP) that serves as the foundation for all adoption.
- Governance benchmark: Identify gaps in your organization's current state of governance. Get a personalized benchmark report and curated guidance on how to get started.
Deliverables:
- Deploy an initial governance foundation.
- Complete a governance benchmark to plan for future improvements.
- Timeline risk:
- Improvement of policies and governance implementation can add one to four weeks per discipline.
Step 10: Hand off production workloads to cloud operations
Operations management is another requirement to reach migration success. Migrating individual workloads to the cloud without an understanding of ongoing enterprise operations is a risky decision. In parallel with migration, you should start planning for longer-term operations.
- Establish a management baseline
- Define business commitments
- Expand the management baseline
- Get specific with advanced operations
Deliverables:
- Deploy a management baseline.
- Complete the operations management workbook.
- Identify any workloads that require an Microsoft Azure Well-Architected Review assessment.
- Timeline risks:
- Review the workbook: Estimate one hour per application owner.
- Complete the Microsoft Azure Well-Architected Review assessment: Estimate one hour per application.
Value statement
These steps help teams accelerate their migration efforts through better change management and stakeholder alignment. These steps also remove common blockers and realize business value more quickly.
Next steps
The Cloud Adoption Framework is a lifecycle solution that helps you begin a migration journey. It also helps mature the teams that support migration efforts. The following teams can use these next steps to continue to mature their capabilities. These parallel processes aren't linear and shouldn't considered blockers. Instead, each is a parallel value stream to help improve your organization's overall cloud readiness.
If your migration scenario is atypical, you can get a personalized assessment of your organization's migration readiness by using the strategic migration and readiness tool (SMART) assessment. The answers you provide help identify which guidance aligns best with your current needs. | https://docs.microsoft.com/en-in/azure/cloud-adoption-framework/get-started/migrate | 2021-04-10T14:30:26 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../_images/get-started/migration-map.png',
'Get started with migration in Azure'], dtype=object)
array(['../_images/migrate/methodology-effort-only.png',
'Phases of iterative migration efforts: assess, deploy, release'],
dtype=object) ] | docs.microsoft.com |
How to execute SQL scripts using DatabaseBuilder webjob¶
In order to easily deploy a Sidra Data Platform installation in a new environment and be able to replicate the deployment every time, all the information about Data Factory pipelines that is stored in the metadata database must be also included in the code base as SQL scripts. This is a way to achieve the DevOps practice of Infrastructure as code.
To easy that task, Sidra provides the DatabaseBuilder webjob to automatize the management and execution of those scripts. It can also be used to execute the scripts to populate the metadata of providers, entities, attributes, etc.
This webjob can be added to both Core and Client app solutions.
Project structure¶
The functionality of the WebJob is included in the Sidra package
PlainConcepts.SIDRA.DotNetCore.Webjob.DatabaseBuilder. So, in the Visual Studio solution, the DatabaseBuilder project will be just a host that references the Sidra package. It can also contains the scripts to be executed.
The location of the scripts can be configured in the WebJob, they can be stored:
- In an Azure Storage account by setting up the Application Setting DatabaseBuilder.StoragePath to the path to the Azure Storage account.
- Locally within the WebJob by setting up the Application Setting DatabaseBuilder.LocalFolderName to the local folder name.
In case any of the previous settings is configured, the scripts will be retrieved locally from the folder "Scripts". That is the default configuration used for both Core and Client apps.
Databases and the scripts folder structure¶
Wherever the scripts were located, they have to comply with the following folder structure so they can be handled by the DatabaseBuilder. It must be a folder for each database that will be built up. Each of these folders will contain the scripts that will be executed in the respective database. Some folders have specific names because they refer to well known databases (Core, DW, Log, Client) but the DatabaseBuilder allows to add any number of additional folders to manage the scripts of any number of additional databases.
- CoreContext folder for the Core database scripts.
- DWContext folder for the OperationalWarehouse database scripts.
- LogContext folder for the Log database scripts.
- ClientContext folder for the Client app database scripts.
- AdditionalDatabaseContext folder for an additional database (a folder for each additional database).
Additionally to the folder structure, the WebJob must be configured to select which folders must manage. This configuration is performed in the
Main method of the
Program class in the DatabaseBuilder host project by using the
Options class.
Scripts naming convention¶
Inside of the previous folders, the scripts will be files containing the SQL sentences to execute in the corresponding database. The files must comply with this naming convention:
where:
The executionOrder must be a number that will be used to sort the files before executing them. So the file with the minor execution order will be the first in being executed.
The environment is an optional part of the naming that is used to filter the scripts to be executed. There is an Application Setting named Environment that can be configured for that purpose. Only the files with an environment matching the value of that Application Setting will be executed. If the filename does not include an environment, it will always be executed.
- The name is a word used to describe the script. It is usual to use PBI + the number of the user story in the backlog, but it is just a good practice and it is not mandatory. It is important to highlight that the final .sql of the naming convention must be lower case.
Using the previous configuration, only the first and third scripts will be executed:
Sidra databases seed execution¶
Sidra requires to populate the databases with some base information to allow the system to function. That seed for the databases is included in the Sidra persistence packages (e.g.
PlainConcepts.SIDRA.DotNetCore.Persistence for the Core database) and it is executed by the DatabaseBuilder.
App settings¶
The DatabaseBuilder can be configured using the
appsettings.json file. This is a summary of all the configuration parameters:
How to test locally¶
The DatabaseBuilder project is deployed as a WebJob but it is a Console Application, that means that it can be executed locally so it is possible to test the execution of the scripts in a local database before executing those changes in the actual database infrastructure.
In order to test those scripts locally, it is necessary to configure the
appsettings.json file of the DatabaseBuilder project with the following parameters: | https://docs.sidra.dev/Sidra-Data-Platform/Tutorials/Execute-sql-scripts/ | 2021-04-10T14:17:44 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../../attachments/tutorial-databasebuilder-project.png',
'DatabaseBuilder WebJob project in a Core solution tutorial-databasebuilder-project'],
dtype=object) ] | docs.sidra.dev |
Column Menu
The TreeList provides a built-in option for triggering column operations through a menu.
To enable the column-menu implementation, set
.ColumnMenu(). As a result, the column headers of the TreeList will render a column menu which allows the user to sort, filter, or change the visibility of the column. The column menu also detects when a specific column operation is disabled through the column definition and excludes the corresponding UI from its rendering. For a runnable example, refer to the demo on implementing a column menu in the TreeList. | https://docs.telerik.com/aspnet-core/html-helpers/data-management/treelist/columns/column-menu | 2021-04-10T15:22:38 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.telerik.com |
Welcome to Django Report Scaffold’s documentation!¶
Contents:
What is Django Report Scaffold?¶
You have an app and it probably needs reports. Scaffold reports will help you streamline the process and present users with a consistent interface. Think of it like Django admin for reports. It features:
- Stock report filtering
- Highly extentable - Keep with stock filters or make your own.
- Export to xlsx, admin change list page, django-report-builder, or appy POD templates. Extend and add your own exports!
- Ready to go user interface allows users to quickly filter reports.
- Filter system encourages you to keep your report logic organized. Filters can change a queryset or change report contextthat might affect other filters. | https://django-report-scaffold.readthedocs.io/ | 2021-04-10T15:05:42 | CC-MAIN-2021-17 | 1618038057142.4 | [] | django-report-scaffold.readthedocs.io |
View the history of DHCPv4 leases provided by your managed servers.
You can view the DHCPv4 lease history for a specific IPv4 network or for all networks within an IPv4 block.
To view IPv4 DHCP lease history:
- From the configuration drop-down menu, select a configuration.
- Select the IP Space tab. Tabs remember the page you last worked on, so select the tab again to ensure you're on the Configuration information page.
- Under IPv4 Blocks, click an IPv4 block.
- Click the DHCP Leases History tab. To view the DHCP lease history for a specific network, navigate to an IPv4 network and click the DHCP Leases History tab.The DHCP.
- Parameter Request List—displays the list of parameters the device requested from the DHCP server.
- Vendor Class Identifier—displays an identifier sent by the DHCP client software running on a device.
- Circuit ID—shows information specific to the circuit from which the request came.
- Name—the name you created for the IP address.
- Remote ID—shows information that identifies the relay agent.
To view the details for an address, click an address in the Active IP Address column.
To view the details for a server, click a server in the Primary Server or Secondary Server columns. | https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Viewing-DHCPv4-lease-history/8.2.0 | 2021-04-10T14:41:36 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.bluecatnetworks.com |
The “NetSuite Kit Inventory to Shopify Inventory Add/Update” data flow exports Kit inventory (quantity) from NetSuite to Shopify. This use case is only applicable to customers who have modeled their Kit Items in NetSuite as simple items in Shopify. Whenever a Kit inventory item is created or updated in NetSuite, the Integration App export the details of that item as a product on Shopify. The Integration App also provides options to set up the Integration App before running the pre-built integration the kit inventory item
NetSuite Kit items are a special type of items that Shopify.
Working with the Kit Inventory data flow
Set up the kit item in NetSuite and then open the Integration App. The Integration App 'Flows > Inventory' section displays the data flow groups/data flow. To run the data flow from this section, click the 'Run' button.
The dashboard shows the status of the data flow. Once the flow is successful (takes a few minutes to complete), validate that the NetSuite record is exported to Shopify.
Inventory Settings
- NetSuite Saved Search for syncing inventory for kit Items and item groups: Select appropriate saved search. Refresh to fetch the latest values for the saved search. The following screen displays the saved search and its associated criteria in NetSuite:
- items across all the locations, and then will calculate inventory for a kit.
- NetSuite locations to pick inventory from: Let's you choose the inventory location(s) in NetSuite from which the inventory quantities should be read and exported to Shopify.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/115001487411-Sync-kit-inventory-from-NetSuite-to-Shopify | 2021-04-10T14:33:31 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/hc/article_attachments/360085188992/mceclip0.png',
'mceclip0.png'], dtype=object) ] | docs.celigo.com |
Important: This connector requires an on-premise agent and is only for SQL (not SODA).
Oracle DB (SQL) links: API guide, Authentication
A. Set up an Oracle DB (SQL) connection
Start establishing a connection to Oracle DB (SQL) Oracle DB (SQL).
The Create connection pane opens with required and advanced settings.
B. Describe the Oracle DB connection
At this point, you’re presented with general settings about the Oracle DB connection:
Name (required): Provide a clear and distinguishable name. Throughout integrator.io imports and exports, you will have the option to choose this new connection, and a unique identifier will prove helpful later when selecting among a list of connections that you’ve created.
Application (required, not-editable): A reminder of the app you’re editing.
Agent (required): To connect to an on-premise application integrator.io requires that an agent be installed on a computer that has network access to the on-premise application. Once the agent has been installed you can reference the agent with the drop-down menu here. A single agent can be used by multiple different connections.
C. Edit required Oracle DB (SQL) settings
At this point, you’re presented with a series of options for establishing Oracle DB (SQL) authentication.
Host (required): Enter the hostname or IP address of the server you are connecting to.
Username (required): Enter the username to authenticate with the server.
Password (required): Enter the password to authenticate with the server.
Instance name (optional): This field specifies the instance name to connect to. The SQL Server Browser service must be running on the database server, and UDP port 1434 on the database server must be reachable. If you set this field you cannot also set the port field because "instanceName" and "port" are mutually exclusive connection options.
Port (optional): The server port to connect to. The default port is 1521.
D. Edit advanced Oracle DB settings
Before continuing, you have the opportunity to provide additional configuration information, if needed, for the Oracle DB connection.
E. Test and save the connection
Once you have configured the Oracle DB_4<<
The new connection is now successfully added to your account. It will be applied to the current source or destination app, if you created it within a flow. Otherwise, you may proceed to register the connection with an integration.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360050360312-Set-up-a-connection-to-Oracle-DB-SQL- | 2021-04-10T14:22:03 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/hc/article_attachments/360074493031/oracle-db.png', None],
dtype=object)
array(['/hc/article_attachments/360074492991/oracle-db-1.png', None],
dtype=object)
array(['/hc/article_attachments/360076198492/odb1.png', 'odb1.png'],
dtype=object)
array(['/hc/article_attachments/360076426571/odb2.png', 'odb2.png'],
dtype=object)
array(['/hc/article_attachments/360074309672/amazon-redshift-confirm.png',
None], dtype=object) ] | docs.celigo.com |
Posting transactions to observer nodes
This tutorial will take you through the steps involved in adding support for observer nodes to your CorDapp.
Introduction.
How observer nodes operate
By default, vault queries do not differentiate between states you recorded as a participant/owner, and states you recorded as an observer. You will have to write custom vault queries that only return states for which you are a participant/owner. See the Example usage section of the API: Vault Query page for information on how to do this. This also means that
Cash.generateSpendshould not be used when recording
Cash.Statestates as an observer.
When an observer node is sent a transaction with the
ALL_VISIBLEflag set, any transactions in the transaction history that have not already been received will also have
ALL_VISIBLEstatesrecording flag were processed in this way.
Nodes may re-record transactions if they have previously recorded them as a participant and wish to record them as an observer. However, if this is done, the node cannot resolve a forward chain of transactions. This means that if you wish to re-record a chain of transactions and get the new output states to be correctly marked as consumed, the full chain must be sent to the node in order. | https://docs.corda.net/docs/corda-os/4.6/tutorial-observer-nodes.html | 2021-04-10T14:37:29 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.corda.net |
How do I work around "Empty choice is unsatisfiable if minOccurs not equal to 0. An error occured at file://[some path to XSD]"?
You might have received this error message if you followed these steps:
- Install BizTalk 2004 on a clean machine with Visual Studio .NET 2003,
- Start Visual Studio and Create a new C# project (Windows Application for instance),
- Save the solution/Project,
- Right click on the project name in the solution explorer and select "Add-> New Item ...",
- Click "Data Set" and press "OK",
You see the following error message (the path under the white box is the path to the XSD that has been created when you inserted the DataSet):
When you install BizTalk, there is a component called "BizTalk developer tools". This component contains all the logic to integrate with Visual Studio .NET 2003: pipeline designer, BizTalk Explorer and among many other things, a schema editor.
The error you are seeing is because the BizTalk Editor is trying to edit a DataSet schema and got confused. To workaround this, you have to ensure that DataSet schemas are opened with the Visual Studio Schema Editor and not the BizTalk Schema Editor. Here are two different workarounds:
- Bypass the BizTalk Schema Editor when editing the schema (works around the problem once)
- Reset the default editor to the Visual Studio Schema Editor instead of the BizTalk Schema editor (puts the Visual Studio Schema Editor back as the default XSD editor)
Bypass the BizTalk Schema Editor when editing the DataSet:
- Open the solution in VS.NET 2003
- Insert the DataSet as explained above (steps 4 and 5). You see the error, but you also see that the DataSet item was created and added to the solution explorer
- Click "OK" to close the error box
- In the solution explorer, locate the schema and right click on it
- Select "Open With ..."
- In the dialog box, select "XML Schema editor" and click "Open"
Set the Visual Studio Schema Editor back to default editor:
- Follow the steps 1 to 5 above
- In the "Select XML Schema Editor" dialog box, select "XML Schema Editor" and click "Set as default".
- Press OK to open the DataSet
Ravi, I hope this page helped you in case you missed my answer in another group . | https://docs.microsoft.com/en-us/archive/blogs/gzunino/how-do-i-work-around-empty-choice-is-unsatisfiable-if-minoccurs-not-equal-to-0-an-error-occured-at-filesome-path-to-xsd | 2021-04-10T16:04:17 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.microsoft.com |
How to Integrate your own autoresponder code into the Surefire Wealth squeeze page
To add your own autoresponder code to the squeeze page requires a modicum of HTML knowledge and the Raw HTML form code from your autoresponder.
You can download your squeeze page from your Reseller Tools Area:
Go to Section 3 to get your promotional tools:
You will be taken to another window where you can download the files. Expand the Reseller Files and click the "Click here to download" link.
Extract the files, look for the Squeeze page folder and open the index file.
You need to edit the file using an editor like Notepad++ or even Notepad. You can do this by right clicking the file icon, choose Open with... and select Notepad++. Once the file is open, look for the line <!-- Form Starts Here Put your AR Code here-->, shown on the screen shot below:
Remove the line and replace it with your HTML form code. Save the file and upload it to your own server. | https://docs.surefirewealth.com/article/964-integrate-your-own-autoresponder-code-into-the-surefire-wealth-squeeze-page | 2021-04-10T14:39:33 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5e8bd6df2c7d3a7e9aea7b68/file-YH9Sx0p8bn.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5e8bd9e404286364bc97eab8/file-pYt1wOvtPZ.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5e8bdab32c7d3a7e9aea7b88/file-hkwq8jZ3lG.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/568c4563c69791436155bbd4/images/5e8bdc112c7d3a7e9aea7b8d/file-Ol6pabtRXM.png',
None], dtype=object) ] | docs.surefirewealth.com |
The
onoffline property of the
WorkerGlobalScope interface represents an
EventHandler to be called when the
offline event occurs and bubbles through the
Worker.
self.onoffline = function() { ... };
The following code snippet shows an
onoffline handler set inside a worker:
self.onoffline = function() { console.log('Your worker is now offline'); }
The
WorkerGlobalScope interface it belongs to.
© 2005–2018 Mozilla Developer Network and individual contributors.
Licensed under the Creative Commons Attribution-ShareAlike License v2.5 or later. | https://docs.w3cub.com/dom/workerglobalscope/onoffline | 2021-04-10T14:59:12 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.w3cub.com |
[−][src]Crate cntrlr
A library for simple asynchronous embedded programming
use cntrlr::prelude::*; use core::futures::pending; #[entry] async fn main() -> ! { serial_1().enable(9600); writeln!(serial_1(), "Hello, World").await.expect("Failed to message"); pending().await }
For an API overview, check the
prelude module. This is the
core set of functionality provided by Cntrlr, and provides
functionality for most applications.
For hardware-specific functionality, each supported board and
microcontroller has its own module under
hw. Note that there
are currently both safety and ergonomics issues with these
lower-level APIs, and they don't provide much more functionality
than what is needed to implement the main Cntrlr API. They will be
expanded as time goes on, and will be improved for correcntess and
usability. | https://docs.rs/cntrlr/0.1.0/cntrlr/ | 2021-04-10T14:20:08 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.rs |
Using an External Script Editor
T-HSCP-001-004
If you are more comfortable editing your scripts using another text editor or syntax highlighter, Harmony allows you to set an external script editor and use that one instead.
- Do one of the following:
- In the Script Editor toolbar, click the
Set External Editor button.
- In the top-left corner of the Script Editor view, click on the
Menu button and select Editor > Set External Editor.
The Set External Editor window opens.
In the external editor, type the complete and absolute path to the text editing application you want to use, including its name and extension. For example, if you want to use notepad++ on Windows, you could type:
C:\Program Files\Notepad++\notepad++.exe
On macOS, if you want to use an application that supports the AppleScript Open Document protocol, simply typing the name of the application might work, for example:
TextWrangler
Typing the name of the application you want to use will also work if that application has its path in the PATH environment variable, such as notepad on Windows
or gedit on GNU/Linux.
- Click on OK.
- In the File list, select the script you want to edit.
Do one of the following:
- In the Script Editor toolbar, click the
Open with External Editor button.
- In the top-left corner of the Script Editor view, click on the
Menu button and select Editor > External Editor.
- In the external editor that opens, make the desired changes to your script, save the script and close the external editor.
- In Harmony, the changes you made to your script in the external editor will not appear immediately. You can load those changes by selecting another script in the File list, then selecting the script you changed again. | https://docs.toonboom.com/help/harmony-20/advanced/scripting/use-external-script-editor.html | 2021-04-10T15:12:21 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
wradlib.classify.trapezoid¶
wradlib.classify.
trapezoid(msf, obs)¶
Calculates membership of obs using trapezoidal membership functions
- Parameters
msf (
numpy.ndarray) – Array which is of size (obs.shape, 4), containing the trapezoidal membership function values for every obs point for one particular hydrometeor class.
obs (
numpy.ndarray) – Array of arbitrary size and dimensions containing the data from which the membership shall be calculated.
- Returns
out (
numpy.ndarray) – Array which is of (obs.shape) containing calculated membership probabilities. | https://docs.wradlib.org/en/stable/generated/wradlib.classify.trapezoid.html | 2021-04-10T15:17:48 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.wradlib.org |
In Salesforce, partners can be added to an opportunity. When the sync is done, the partner associated with the Salesforce opportunity is synced as a customer in NetSuite and set as an end user on a sales order. You can have more than one partner associated with an opportunity.
As channel sales is not supported in Salesforce - NetSuite Integration App, but it can be achieved by adding manual configurations to salesforce opportunities to NetSuite sales order flow.
Prerequisite: Partner accounts in Salesforce must be in sync with NetSuite as a customer inorder to successfully sync the Salesforce opportunity as NetSuite sales order with the channel sales details.
Configure flow level settings
- Sign in to your integrator.io app, and click on the Salesforce - NetSuite Integration App.
- Navigate to Flows > Opportunity.
- Go to ‘Salesforce opportunity to NetSuite sales order’ flow and click settings.
- Go to Additional data to export from Lookup fields and add the below fields to the existing reference fields.
Account.Name,Account.ShippingCity,Account.ShippingCountry,Account.ShippingState,Account.ShippingStreet
- Go to Additional data to export from Related Lists or Sublists and add the related list with the below fields.
- Select Child SObject Type as Partners and add a reference field AccountTo.Id.
- Add a filter as IsPrimary = true AND Role = 'System Integrator'.
Note: Role is not mandatory
Configuring Field Mappings
- Sign in to your integrator.io app, and click on the Salesforce - NetSuite Integration App.
- Navigate to Flows > Opportunity.
- Go to ‘Salesforce opportunity to NetSuite sales order’ flow and click edit mappings and add the below mappings.
- Change the dynamic lookup of customer(internalid) field mapping as below.
- For customer (internalid), click settings and change the dynamic lookup.
- Add the following expressions in the dynamic mapping the Save mappings.
- Search Record Type: Customer
- Refresh search filters : Remove {{{Account.Id}}} and add {{#if Partners}}{{#eachPartners}}{{AccountTo.Id}}{{/each}}{{else}}{{Account.Id}}{{/if}}
- Value Field: InternalId
- Click Save
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360057306992-Channel-sales-sync | 2021-04-10T15:44:06 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/hc/article_attachments/360088268752/filter_exp.png',
'filter_exp.png'], dtype=object) ] | docs.celigo.com |
Automatic deactivation of indexing of Product information for non-production Projects
Indexing of Product information for non-production Projects will automatically be deactivated if there have been no calls against the following API endpoints within the last 30 days:
- Product Projection Search
- Product Suggestions
- Project update with the action Change Product Search Indexing Enabled
The deactivation of Product information indexing will always happen on the day after a 30 day observation period.
The first deactivations will happen on 12 April 2021 and will include Projects that had no calls against the aforementioned API endpoints between 13 March 2021 and 11 April 2021. Please see the example below:
- Start date 30 days observation period: 13 March 2021
- End date 30 days observation period: 11 April 2021
- Last API call or day of activation (search indexing activated): 12 March 2021
- Day of automatic deactivation: 12 April 2021
Deactivation of Product information indexing means that calls to either of the above Product Projection Search or Product Suggestions endpoint will return a status code of 400 unless Product information indexing is explicitly re-activated. Please refer to API documentation for more details.
At the moment, this change does not apply to Projects that are marked as production Projects or any Projects hosted on AWS Hosts. | https://docs.commercetools.com/api/releases/2021-04-07-automatic-deactivation-of-indexing-of-product-information-for-non-production-projects | 2021-04-10T15:11:35 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.commercetools.com |
Customize the Merchant Center with Custom Applications
The Merchant Center can now be extended with Custom Applications to cater to specific business needs that are not supported out of the box.
A Custom Application is a web application that is developed and self-hosted by the customer but made accessible to users in the Merchant Center like commercetools’ standard applications. Custom Applications can use the commercetools platform APIs as well as integrate with external systems via APIs exposed by these.
The Custom Applications documentation describes everything you need to get started developing and deploying Custom Applications for the Merchant Center. In addition, we provide a starter application to get up and running in a few minutes.
We are offering one-day trainings in building Custom Applications, targeted to developers with experience developing in React. For more information, see our training offerings.
If you plan to use Custom Applications, please make sure to also read the Support Policy. | https://docs.commercetools.com/merchant-center/releases/2020-02-03-customize-the-merchant-center-with-custom-applications | 2021-04-10T14:16:46 | CC-MAIN-2021-17 | 1618038057142.4 | [array(['/merchant-center/static/ac7d6d108a1a58999037cbb2e4333536/7527b/mc-dev-starter-main.png',
'Custom Applications - Starter App Custom Applications - Starter App'],
dtype=object) ] | docs.commercetools.com |
Hi Team,
We have done O365 authentication in our small application. After some months we have found that now O365 authentication is not working when browse cache is cleared, Sometimes either that login will redirect and to same page again and again and not hits the redirect URL or sometimes ClaimsIdentity is returns Null. I am using MVC please suggest where we are going wrong or why the application is working abnormally now. | https://docs.microsoft.com/en-us/answers/questions/7823/o365-login-is-not-working-when-browser-cache-is-cl.html | 2021-04-10T14:06:10 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.microsoft.com |
Firmware updates are supported for USB 3.0 devices only. The USB 2.0 device firmware has been stable for several years and no updates are expected for the lifecycle of that firmware.
Firmware updates require an internet connection to the host computer performing the update. If one is not available, the device will need to be sent back to Opal Kelly for update. We charge $25 per device for this service.
Firmware updates are currently only supported on Windows.
Overview
Content Tools
ThemeBuilder | https://docs.opalkelly.com/display/FPSDK/Firmware+Updates | 2021-04-10T15:03:46 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.opalkelly.com |
What’s New In FormEncode 0 to 1.2.4¶
Contents
1.2.3¶
- Code repository moved to BitBucket at ianb/formencode.
- Allowed
formencode.validators.UnicodeStringto use different encoding of input and output, or no encoding/decoding at all.
- Fixes #2666139: DateValidator bug happening only in March under Windows in Germany :)
- Don’t let
formencode.compound.Anyshortcut validation when it gets an empty value (this same change had already been made to
formencode.compound.All).
- Really updated German translation
- Fixes #2799713: force_defaults=False and <select> fields, regarding
formencode.htmlfill.render(). Thanks to w31rd0 for the bug report and patch to fix it.
1.2.2¶
- Added keyword argument
force_defaultsto
formencode.htmlfill.render(); when this is True (the default) this will uncheck checkboxes, unselect select boxes, etc., when a value is missing from the default dictionary.
- Updated German translation
1.2.1¶
- Be more careful about
unicode(Invalid(...)), to make sure it always returns unicode.
- Fix broken
formencode.nationalzip code validators.
- In
formencode.nationalonly warn about the pycountry or TG requirement when creating validators that require them.
- Fix another
formencode.htmlfillerror due to a field with no explicit value.
1.2¶
- Added
formencode.validators.IPAddress, validating IP addresses, from Leandro Lucarella.
- Added
formencode.api.Invalid.__unicode__()
- In
formencode.htmlfilluse a default encoding of utf8 when handling mixed
str/
unicodecontent. Also do not modify
<input type="image">tags (previously
srcwould be overwritten, for no good reason).
- In
formencode.validators.Emailallow single-character domain names (like x.com).
- Make
formencode.validators.FieldsMatchgive a normal
Invalidexception if you pass it a non-dictionary. Also treat all missing keys as the empty string (previously the first key was required and would raise KeyError).
formencode.validators.Numberworks with
inffloat values (before it would raise a OverflowError).
- The
twlocale has been renamed to the more standard
zh_TW.
- Added Japanese and Turkish translations.
- Fixed some outdated translations and errors in Spanish and Greek translations. Translations now managed with Babel.
1.1¶
- Fixed the
is_empty()method in
formencode.validators.FieldStorageUploadConverter; previously it returned the opposite of the intended result.
- Added a parameter to
htmlfill.render():
prefix_error. If this parameter is true (the default) then errors automatically go before the input field; if false then they go after the input field.
- Remove deprecated modules:
fields,
formgen,
htmlform,
sqlformgen, and
sqlschema.
- Added
formencode.htmlrename, which renames HTML inputs.
- In
formencode.htmlfill, non-string values are compared usefully (e.g., a select box with integer values).
- The validators
Intand
Numberboth take min/max arguments (from Shannon Behrens).
- Validators based on
formencode.validators.FormValidatorwill not treat
{}as an empty (unvalidated) value.
- Some adjustments to the URL validator.
formencode.compound.Alldoes not handle empty values, instead relying on sub-validators to check for emptiness.
- Fixed the
if_missingattribute in
formencode.foreach.ForEach; previously it would be the same list instance, so if you modified it then it would effect future
if_missingvalues (reported by Felix Schwarz).
- Added formatter to
formencode.htmlfill, so you can use
<form:error– this will cause the error to be swallowed, not shown to the user.
- Added
formencode.validators.XRIfor validation i-names, i-numbers, URLs, etc (as used in OpenID).
- Look in
/usr/share/localefor locale files, in addition to the normal locations.
- Quiet Python 2.6 deprecation warnings.
- Fix
formencode.validators.URL, which was accepting illegal characters (like newlines) and did not accept
1.0.1¶
chained_validatorswere removed from Schema somehow; now replaced and working.
- Put in missing
htmlfill.render(error_class=...)parameter (was documented and implemented, but
render()did not pass it through).
1.0¶
- Added
formencode.schema.SimpleFormValidator, which wraps a simple function to make it a validator.
- Changed the use of
chained_validatorsin Schemas, so that all chained validators get run even when there are previous errors (to detect all the errors).
- While something like
Int.to_python()worked, other methods like
Int.message(...)didn’t work. Now it does.
- Added Italian, Finnish, and Norwegian translations.
0.9¶
Backward incompatible changes¶
- The notion of “empty” has changed to include empty lists, dictionaries, and tuples. If you get one of these values passed into (or generated by) a validator with
not_empty=Trueyou can get exceptions where you didn’t previously.
Enhancements¶
- Added support for Paste’s MultiDict dictionary as input to Schema.to_python, by converting it to a normal dict via MultiDict.mixed. Previously MultiDicts wouldn’t work with CompoundValidators (like ForEach)
- Added encoding parameter to htmlfill, which will handle cases when mixed str and unicode objects are used (turning all str objects into unicode)
- Include
formencode.validators.InternationalPhoneNumberfrom W-Mark Kubacki.
validators.Inttakes
minand
maxoptions (from Felix Schwarz).
- You can control the missing message (which by default is just “Missing Value”) using the message
"missing"in a validator (also from James Gardner).
- Added
validators.CADR(for IP addresses with an optional range) and
validators.MACAddress(from Christoph Haas).
Bug Fixes¶
- Be friendlier when loaded from a zip file (as with py2exe); previously only egg zip files would work.
- Fixed bug in htmlfill when a document ends with no trailing text after the last tag.
- Fix problem with HTMLParser’s default unescaping routing, which only understood a very limited number of entities in attribute values.
- Fix problem with looking up A records for email addresses.
validators.Stringnow always returns strings. It also converts lists to comma-separated strings (no
[...]), and can encode unicode if an
encodingparameter is given. Empty values are handled better.
validators.UnicodeStringproperly handles non-Unicode inputs.
- Make
validators.DateConverterserialize dates properly (from James Gardner).
- Minor fix to setup.py to make FormEncode more friendly with zc.buildout.
0.7.1¶
- Set
if_missing=()on
validators.Set, as a missing value usually means empty for this value.
- Fix for
- Fixes for the
eslocale.
0.7¶
- Backward compatibility issue: Due to the addition of i18n (internationalization) to FormEncode, Invalid exceptions now have unicode messages. You may encounter unicode-related errors if you are mixing these messages with non-ASCII
strstrings.
- gettext-enabled branch merged in
- Fixes #1457145: Fails on URLs with port numbers
- Fixes #1559918 Schema fails to accept unicode errors
from formencode.validators import *will import the
Invalidexception now.
Invalid().unpack_errors(encode_variables=True)now filters out None values (which
ForEachcan produce even for keys with no errors).
0.6¶
String(min=1)implies
not_empty(which seems more intuitive)
- Added
list_charand
dict_chararguments to
Invalid.unpack_errors(passed through to
variable_encode)
- Added a
use_datetimeoption to
TimeValidator, which will cause it to use
datetime.timeobjects instead of tuples. It was previously able to consume but not produce these objects.
- Added
<form:iferrorwhen you want to include text only when a field has no errors.
- There was a problem installing 0.5.1 on Windows with Python 2.5, now resolved.
0.5¶
- Added
htmlfill.default_formatter_dict, and you can poke new formatters in there to effective register them.
- Added an
escapenlformatter (nl=newline) that escapes HTML and turns newlines into
<br>.
- When
not_empty=False, empty is assumed to be allowed. Thus
Int().to_python(None)will now return
None.
0.4¶
- Fixed up all the documentation.
- Validator
__doc__attributes will include some automatically-appended information about all the message strings that validator uses.
- Deprecated
formencode.htmlformmodule, because it is dumb.
- Added an
.all_messages()method to all validators, primarily intended to be used for documentation purposes.
- Changed preferred name of
StringBooleanto
StringBool(to go with
booland
validators.Bool). Old alias still available.
- Added
today_or_afteroption to
validators.DateValidator.
- Added a
validators.FileUploadKeepervalidator for helping with file uploads in failed forms. It still requires some annoying fiddling to make work, though, since file upload fields are so weird.
- Added
text_as_defaultoption to htmlfill. This treats all
<input type="something-weird">elements as text fields. WHAT-WG adds weird input types, which can usually be usefully treated as text fields.
- Make all validators accept empty values if
not_emptyis False (the default). “Empty” means
""or
None, and will generally be converted None.
- Added
accept_pythonboolean to all
FancyValidatorvalidators (which is most validators). This is a fixed version of the broken
validate_pythonboolean added in 0.3. Also, it defaults to true, which means that all validators will not validate during
.from_python()calls by default.
- Added
htmlfill.render(form, defaults, errors)for easier rendering of forms.
- Errors automatically inserted by
htmlfillwill go at the top of the form if there’s no field associated with the error (raised an error in 0.3).
- Added
formencode.sqlschemafor wrapping SQLObject classes/instances. See the docstring for more.
- Added
ignore_key_missingto
Schemaobjects, which ignore missing keys (where fields are present) when no
if_missingis provided for the field.
- Renamed
validators.StateProvince.extraStatesto
extra_states, to normalize style.
Bugfixes¶
- When checking destinations,
validators.URLnow allows redirect codes, and catches socket errors and turns them into proper errors.
- Fix typo in
htmlfill
- Made URL and email regular expressions a little more lax/correct.
- A bunch of fixes to
validators.SignedString, which apparently was completely broken.
0.3¶
- Allow errors to be inserted automatically into a form when using
formencode.htmlfill, when a
<form:error>tag isn’t found for an error.
- Added
if_key_missingattribute to
schema.Schema, which will fill in any keys that are missing and pass them to the validator.
FancyValidatorhas changed, adding
if_invalid_pythonand
validate_pythonoptions (which also apply to all subclasses). Also
if_emptyonly applies to
to_pythonconversions.
FancyValidatornow has a
stripoption, which if true and if input is a string, will strip whitespace from the string.
- Allow chained validators to validate otherwise-invalid forms, if they define a
validate_partialmethod. The credit card validator does this.
- Handle
FieldStorageinput (from file uploads); added a
formencode.fieldstoragemodule to wrap those instances in something a bit nicer. Added
validators.FieldStorageUploadConverterto make this conversion.
- Added
StringBooleanconverter, which converts strings like
"true"to Python booleans.
Bugfixes¶
- A couple fixes to
DateConverter,
FieldsMatch,
StringBoolean,
CreditCardValidator.
- Added missing
Validator.assert_stringmethod.
formencode.htmlfill_schemabuilderhandles checkboxes better.
- Be a little more careful about how
Invalidexceptions are created (catch some errors sooner).
- Improved handling of non-string input in
htmlfill.
Experiments¶
- Some experimental work in
formencode.formgen. Experimental, I say!
- Added an experimental
formencode.contextmodule for dynamically-scoped variables. | https://formencode.readthedocs.io/en/stable/whatsnew-0-to-1.2.4.html | 2021-04-10T13:57:41 | CC-MAIN-2021-17 | 1618038057142.4 | [] | formencode.readthedocs.io |
College Route
At college you will put all your attention and energy into one subject, or study programme as we like to call it. This one study programme will give you a qualification that can be equivalent to up to three A levels (Level 3). No option is better than the other, it entirely depends on you and your goals.
What it involves
College courses focus on vocational training - essentially learning through doing. You'll be required to complete work experience outside of college, you'll work in industry-standard facilities and focus on your next steps.
What's next?
Once you've completed your college course, the sky's the limit! Whether you want to go to university, into an apprenticeship, or straight into the workplace, the skills and experience you've gained will look awesome on any UCAS or job application.
Employment Route
Whether it's an apprenticeship or a traineeship, this is a great option for someone who knows what they want to do and just wants to get stuck in. They're also a good option for people who excel outside of traditional education and like to learn from others and experience things for themselves.
What it involves
Depending on which route you go down, you would be spending a large portion of your time employed by a company, working and being treated just like any other member of staff. The other portion of your time will be spent learning in college - as you're still studying for a qualification you know!
What's next?
These qualifications are designed to get you ready to continue life in the workplace. Whether that's with the current employer, or you progress into another role - if you want to get to work sooner rather than later, this is the pathway for you.
Sixth Form Route
A levels are a good choice for people who have a few different ideas of what they want to do, or maybe the job they want requires qualifications in a variety of subjects.
What it involves
When you study A levels, you will study three to four different subjects and will be required to complete a combination of exams and coursework. You could stay on at your current school to study A levels, or go to a dedicated Sixth Form.
What's next?
Just like college study programmes, what you do next is up to you, and all of it will be open to you. Whether you've got your sights on university, an apprenticeship, or going straight into employment, it all depends on your goals and ambitions. You may need to acquire additional work experience if you wish to go straight into employment.
What's the right choice for me?
This is completely up to you! No one can tell you what to do, but if you're struggling to decide which pathway would suit you best, we've put together a short quiz to suggest what could be a good option for you. | http://docs.mkcollege.ac.uk/school-leaver-guide/what-are-my-options-quiz | 2021-04-10T14:35:49 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.mkcollege.ac.uk |
BootstrapDateEditCalendarProperties.ShowWeekNumbers Property
Gets or sets a value that specifies whether the week number section is displayed within the calendar.
Namespace: DevExpress.Web.Bootstrap
Assembly: DevExpress.Web.Bootstrap.v20.2.dll
Declaration
[DefaultValue(true)] public bool ShowWeekNumbers { get; set; }
<DefaultValue(True)> Public Property ShowWeekNumbers As Boolean
Property Value
Remarks
Use the ShowWeekNumbers property to specify whether week number markers are displayed within the calendar.
Note that the calendar's week numbers are represented as ISO week numbers. According to the ISO (International Standards Organization) in document ISO 8601, an ISO week starts on a Monday (which is counted as day 1 of the week), and week 1 for a given year is the week that contains the first Thursday of the Gregorian year.. | https://docs.devexpress.com/AspNetBootstrap/DevExpress.Web.Bootstrap.BootstrapDateEditCalendarProperties.ShowWeekNumbers | 2021-04-10T15:58:00 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.devexpress.com |
Script Tracing and Logging.
Logging the capacity of a single event,, one way to attack this infrastructure is to flood the log with spurious events to hide earlier evidence. To protect yourself from this attack, ensure that you have some form of event log collection set up Windows Event Forwarding. For more information, see Spotting the Adversary with Windows Event Log Monitoring. | https://docs.microsoft.com/en-us/powershell/scripting/windows-powershell/wmf/whats-new/script-logging?view=powershell-7 | 2021-04-10T15:50:55 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.microsoft.com |
Warning: this is an internal module, and does not have a stable API or name. Functions in this module may not check or enforce preconditions expected by public modules. Use at your own risk!
Core stream fusion functionality for text.
Specialised tuple for case conversion.
Strict pair.
An intermediate result in a scan.
Restreaming state.
Intermediate result in a processing pipeline.
The empty stream.
© The University of Glasgow and others
Licensed under a BSD-style license (see top of the page). | https://docs.w3cub.com/haskell~8/libraries/text-1.2.4.0/data-text-internal-fusion-types | 2021-04-10T14:35:39 | CC-MAIN-2021-17 | 1618038057142.4 | [] | docs.w3cub.com |
You're viewing Apigee Edge documentation.
View Apigee X documentation.
Virtual host configuration requests are meant for creating new TLS virtual hosts or performing changes on existing TLS virtual hosts. These are secure virtual hosts configured with a TLS keystore and/or truststore, depending on whether one-way or two-way TLS is used. Apigee does not create or modify non-TLS virtual hosts, including the default virtualhost on port 80. For more information, refer to Virtual Hosts FAQ. | https://docs.apigee.com/api-platform/troubleshoot/service-requests/vhost-req?hl=nl-NL | 2022-05-16T14:54:36 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.apigee.com |
buildschema() (aggregation function)
Returns the minimal schema that admits all values of DynamicExpr.
Syntax
buildschema
(DynamicExpr
)
Arguments
- DynamicExpr: Expression that is used for the aggregation calculation. The parameter column type must be
dynamic.
Returns
The maximum value of
Expr across the group.
Tip
If
buildschema(json_column) gives a syntax error:
Is your
json_columna string rather than a dynamic object?
then use
buildschema(parsejson(json_column)).
Example
Assume the input column has three dynamic values.
{"x":1, "y":3.5}
{"x":"somevalue", "z":[1, 2, 3]}
{"y":{"w":"zzz"}, "t":["aa", "bb"], "z":["foo"]}
The resulting schema would be:
{ "x":["int", "string"], "y":["double", {"w": "string"}], "z":{"`indexer`": ["int", "string"]}, "t":{"`indexer`": "string"} }
The schema tells us that:
- The root object is a container with four properties named x, y, z, and t.
- The property called "x" that could be of type "int" or of type "string".
- The property called "y" that could be of type "double", or another container with a property called "w" of type "string".
- The
indexerkeyword indicates that "z" and "t" are arrays.
- Each item in the array "z" is of type "int" or of type "string".
- "t" is an array of strings.
- Every property is implicitly optional, and any array may be empty.
Schema model
The syntax of the returned schema is:
Container ::= '{' Named-type* '}'; Named-type ::= (name | '"`indexer`"') ':' Type; Type ::= Primitive-type | Union-type | Container; Union-type ::= '[' Type* ']'; Primitive-type ::= "int" | "string" | ...;
The values are equivalent to a subset of the TypeScript type annotations, encoded as a Kusto dynamic value. In Typescript, the example schema would be:
var someobject: { x?: (number | string), y?: (number | { w?: string}), z?: { [n:number] : (int | string)}, t?: { [n:number]: string } } | https://docs.azure.cn/en-us/data-explorer/kusto/query/buildschema-aggfunction | 2022-05-16T15:36:25 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.azure.cn |
This page displays project components, including procedures, workflow specifications, jobs, schedules, any credentials, properties, and reports.
Links and actions at the top of the page
Edit —Opens the Edit Project page.
Access Control —Opens he Access Control page for this project.
"star" icon—Saves this job information to your Home page.
"curved arrow" icon—Returns you to the Projects page.
Pagination—Use the "previous" and "next" arrow icons to view the previous or next project.
The numbers between the arrow icons display the number of projects you can view and the first number indicates which project [in the list] you are viewing.
The tabbed sections
The tabs at the top of the table allow you to select the type of information you want to see.
The Project Details page opens with the Procedures tab highlighted, so the first table you see is the Procedures table.
You can use the DSL Export () button to download the objects to a DSL file.
Procedures tab
The following links and actions are available in the Procedures table:
Create Procedure (at the top of the table)—Opens the New Procedure page to create another procedure.
Click on a procedure name (first column) to go to the Procedure Details page for that procedure.
Action column—Click any of the links ( Run, Edit, Copy, or Delete ) to perform that function for the procedure listed in that row.
Run Immediately —Runs the procedure "as-is" immediately.
Run… —Opens the Run Procedure web page where you can modify any existing parameters for this procedure or set an existing credential.
Workflow Definitions tab
This tab provides a table listing all defined workflow definitions and includes the following functionality:
Click the Create Workflow Definition link to go to the New Workflow Definition page to define additional definitions.
Name column—Click a workflow definition name to go to the Workflow Definition Details page (for that workflow definition.
Action column
Run —Use the down arrow to select Run… or Run Immediately
Selecting Run… takes you to the Run Workflow page where you can set the starting state for the workflow. Selecting Run Immediately runs the workflow "as-is" immediately.
Copy —Make an exact copy of an existing workflow definition.
Edit —Edit an existing workflow definition or just to change the name of the copy you created.
Delete —Deletes an existing workflow definition on the same row.
You can use the DSL Export ( ) button to download the objects as a DSL file.
Jobs tab
This tab provides a project-specific job table. Selecting the Jobs tab on the Project Details page displays only those jobs for the project you selected.
This table is a subset of the main Jobs tab that displays all jobs, regardless of the project of origin, and all Jobs page functionality is basically the same on either Jobs tab.
The following links and actions are available on this Jobs page:
Job—Sort the column alphabetically or click on a job name to go to that job’s Job Details page.
Status—Sort this column by Running, Success, Warning, Error, or Aborted.
Priority—Display the job’s priority set by a "run procedure" command or by the job’s schedule.
Procedure—Click a Procedure name to go to the Procedure Details page for that procedure.
Launched By—Click a name in this column to go to the page for the schedule (Edit Schedule page) or the user that ran the job.
Elapsed Time—Sort the time values from longest to shortest time, or the reverse.
Start Time—Sort this column from the start time of the first job to the start time of the most recent job, or the reverse.
Actions—Use this column to Abort a running job or Delete a completed job.
If your job has finished running, you will not see the Abort link. Aborting a job requires the execute privilege on the job—not just the modify privilege.
Select either OK or Force to abort the job, or Cancel if you change your mind.
Note: Deleting a job on this page causes removal of job information from the CloudBees CD database, but information in the job’s on-disk workspace area is not affected. You must delete workspace information manually.
Workflows tab
The Workflows tab provides the following functionality:
Name column—Click on a workflow name to go to its Workflow Details page.
State column—Click on a state name to go to its State Details page.
Modify Time—This time value represents the most recent workflow activity.
Workflow Definition column—Click on a workflow definition name to go to the Workflow Definition Details page.
Completed column—A check mark in this column identifies this workflow as completed.
Actions column—Click the Delete link to remove the workflow in that row.
Schedules tab
The Schedules tab provides the following functionality:
To create a new schedule for this project, click either the Schedule link to create a new standard schedule to run at specific and/or days or the CI Configuration link to setup a new continuous integration schedule.
Choosing "Schedule" displays the New Schedule page.
Choosing "CI Configuration" displays the New CI Configuration page.
Table column descriptions
To edit an existing schedule, click on the schedule name (first column).
Select the check box in the Enabled column to enable or disable the schedule in that row.
The Priority column displays the job’s priority you selected while creating the schedule for this procedure.
Action column links
Run —Runs the schedule with the permissions of the logged in user. For example, if this schedule is timed to run every day at 11:00 pm, you might decide to run the schedule at 6:00 pm on a particular Thursday, without changing the regular schedule settings. However, if you do not have the appropriate permissions to run this schedule, it will fail.
Copy —Makes a copy of the schedule on that row.
Delete —Deletes the schedule in that row.
For information on creating a standard schedule, see the Schedule - create new or edit existing schedule Help topic.
For information about CI Manager and adding a configuration, click the Home tab, the Continuous Integration subtab, then click the Help link in the upper-right corner of the page.
Credentials tab
The Credentials tab provides the following functionality:
To create a new credential for this project, click the Create Credential link at the top of the table.
To edit an existing credential, click on the credential name (first column).
Action column—The Delete link deletes the credential on that row.
For information about credentials and access control, see the Credentials and User Impersonation or Access Control Help topics.
Properties tab
The Properties tab provides the following functionality:
To create a new property for this project, click the Create Property or Create Nested Sheet link.
To view or change privileges on the property sheet, click the Access Control link.
To edit an existing property, click on the property name (first column).
Action column—The Delete link deletes the property on that row.
For more information about properties, see the Properties Help topic. | https://docs.cloudbees.com/docs/cloudbees-cd/10.0/automation-platform/help-projectdetails | 2022-05-16T14:27:49 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.cloudbees.com |
What is an MXL file?
An MXL file is the compressed form of MusicXML file format that is an open standard format for exchange of digital sheet music. Plain text MusicXML files are large in size and the use of such files as a sheet distribution format was affected with the large file size. This issues was treated with MusicXML 2.0 by introducing the MXL file format that compresses the files enough to reduce the file size similar to that of original MIDI files. Recommended media type for MXL files is application/vnd.recordare.musicxml.
MXL File Format
MXL files are stored as ZIP compressed XML files with .mxl file extension. MXL files are compressed with the DEFLATE algorithm as specified in the RFC 1951.
MXL File Structure
Each MXL file has a ZIP-based XML format that must have a META-INF/container.xml file which describes the starting point of the MusicXML version of the file. There is no corresponding .xsd file defined for the MXL file format.
A simple container.xml file has contents as follow. This example is taken from Dichterliebe01.mxl file available on the MakeMusic web site.
<?xml version="1.0" encoding="UTF-8"> <container> <rootfiles> <rootfile full- </rootfiles> </container>
In this example, the element is the document element. The element can contain one or more elements, with the first element describing the MusicXML root. A MusicXML file used as a may have , , or as its document element. | https://docs.fileformat.com/audio/mxl/ | 2022-05-16T15:48:25 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.fileformat.com |
Webhooks are HTTP callbacks processed and sent by Loop to a pre-defined URL. They allow your application to receive information from Loop without having to reach out to Loop via an API call. Each webhook has a topic, which defines the information payload, & a trigger, which determines the event which triggers the request to be sent from Loop.
Loop offers 3 webhook topics:
- Return
- Label
- Restock
And 5 triggers:
- Return created: A new return was submitted in the Returns Portal
- Return updated: A return has been updated (e.g. state has changed from "open" to "closed")
- Label created: A shipping label has been created via EasyPost (or other shipping service)
- Label updated: A shipping label has been updated (e.g. status has changed from "new" to "in transit")
- Restock requested: An item in the return has been restocked in Shopify
To set up webhooks you can go to the Developers page in your Loop admin or sandbox account. There you can create a webhook and select the topic, trigger, and define the URL the payload will be sent to.
Loop webhook tips
Webhook topics & triggers can be combined to suit your desired outcome. There are also certain combinations that will not yield results. (For example, If you set up a webhook with a topic of "Label" & a trigger of "Return create", there will be no payload available, as a return needs to be created before a label is generated.)
Each of our webhooks expect a successful response and will, in the case of failure, reattempt up to 3 times.
Our webhooks will disable if they get a non-2xx response more than 10 times consecutively. You can reactivate the webhook in the Developers page of your Loop Admin. To prevent webhooks from disabling themselves after 10 failures you can turn on testing mode, which prevents this for 72 hours.
If you wish to see the information payload of a webhook before incorporating the webhook into your code, the webpage webhook.site is a wonderful resource. They provide you with a URL to use in the webhook and display the information you receive on their page, allowing you to examine the JSON structure of the payload when the webhook is fired (even as a test). Later on you can change the webhook URL for production use.
If you need additional support regarding webhooks, please Contact the Loop team. | https://docs.loopreturns.com/reference/setting-up-webhooks | 2022-05-16T15:35:48 | CC-MAIN-2022-21 | 1652662510138.6 | [] | docs.loopreturns.com |
Base Overlay¶
The purpose of the base overlay design is to allow PYNQ to use peripherals on a board out-of-the-box. The design includes hardware IP to control peripherals on the target board, and connects these IP blocks to the Zynq PS. If a base overlay is available for a board, peripherals can be used from the Python environment immediately after the system boots.
Board peripherals typically include GPIO devices (LEDs, Switches, Buttons), Video, Audio, and other custom interfaces.
As the base overlay includes IP for the peripherals on a board, it can also be used as a reference design for creating new customized overlays.
In the case of general purpose interfaces, for example Pmod or Arduino headers, the base overlay may include a PYNQ MicroBlaze. A PYNQ MicroBlaze allows control of devices with different interfaces and protocols on the same port without requiring a change to the programmable logic design.
See PYNQ Libraries for more information on PYNQ MicroBlazes.
PYNQ-Z2 Block Diagram¶
The base overlay on PYNQ-Z2 includes the following hardware:
- HDMI (Input and Output)
- Audio codec
- User LEDs, Switches, Pushbuttons
- 2x Pmod PYNQ MicroBlaze
- Arduino PYNQ MicroBlaze
- RPi (Raspberry Pi) PYNQ MicroBlaze
- 4x Trace Analyzer (PMODA, PMODB, ARDUINO, RASPBERRYPI)
HDMI¶
The PYNQ-Z2 has HDMI in and HDMI out ports. The HDMI interfaces are connected directly to PL pins. i.e. There is no external HDMI circuitry on the board. The HDMI interfaces are controlled by HDMI IP in the programmable logic.
The HDMI IP is connected to PS DRAM. Video can be streamed from the HDMI in to memory, and from memory to HDMI out. This allows processing of video data from python, or writing an image or Video stream from Python to the HDMI out.
Note that while Jupyter notebook supports embedded video, video captured from the HDMI will be in raw format and would not be suitable for playback in a notebook without appropriate encoding.
HDMI In¶
The HDMI in IP can capture standard HDMI resolutions. After a HDMI source has been connected, and the HDMI controller for the IP is started, it will automatically detect the incoming data. The resolution can be read from the HDMI Python class, and the image data can be streamed to the PS DRAM.
HDMI Out¶
The HDMI out IP supports the following resolutions:
- 640x480
- 800x600
- 1280x720 (720p)
- 1280x1024
- 1920x1080 (1080p)*
*While the Pynq-Z2 cannot meet the official HDMI specification for 1080p, some HDMI devices at this resolution may work.
Data can be streamed from the PS DRAM to the HDMI output. The HDMI Out controller contains framebuffers to allow for smooth display of video data.
See example video notebooks in the
<Jupyter Dashboard>/base/video directory
on the board.
Audio¶
The PYNQ-Z2 base overlay supports line in, and Headphones out/Mic. The audio source can be selected, either line-in or Mic, and the audio in to the board can be either recorded to file, or played out on the headphone output.
User IO¶
The PYNQ-Z2 board includes two tri-color LEDs, 2 switches, 4 push buttons, and 4 individual LEDs. These IO are connected directly to Zynq PL pins. In the PYNQ-Z2 base overlay, these IO are routed to the PS GPIO, and can be controlled directly from Python.
PYNQ MicroBlaze¶
PYNQ MicroBlazes are dedicated MicroBlaze soft-processor subsystems that allow peripherals with different IO standards to be connected to the system on demand. This allows a software programmer to use a wide range of peripherals with different interfaces and protocols. By using a PYNQ MicroBlaze, the same overlay can be used to support different peripheral without requiring a different overlay for each peripheral.
The PYNQ-Z2 has three types of PYNQ MicroBlaze: Pmod, Arduino, and RPi (Raspberry Pi), connecting to each type of corresponding interface. There is one instance of the Arduino, and one instance of the RPi PYNQ MicroBlaze, and two instances of the Pmod PYNQ MicroBlaze in the base overlay.
Each physical interface has a different number of pins and can support different sets of peripherals. Each PYNQ MicroBlaze has the same core architecture, but can have different IP configurations to support the different sets of peripheral and interface pins.
Note that because the 8 data pins of PmodA are shared with the lower 8 data
pins of the RPi header, the
base.select_pmoda() function must be called
before loading an application on PmodA, and
base.select_pmoda() must be
called before loading an application on the RPi PYNQ MicroBlaze.
PYNQ MicroBlaze block diagram and examples can be found in MicroBlaze Subsystem.
Trace Analyzer¶
Trace analyzer blocks are connected to the interface pins for the two Pmod PYNQ MicroBlazes, the Arduino and RPi PYNQ MicroBlazes. The trace analyzer can capture IO signals and stream the data to the PS DRAM for analysis in the Python environment.
Using the Python Wavedrom package, the signals from the trace analyzer can be displayed as waveforms in a Jupyter notebook.
On the base overlay, the trace analyzers are controlled by PS directly. In fact, on other overlays, the trace analyzers can also be controlled by PYNQ MicroBlaze.
See the example notebook in the
<Jupyter Dashboard>/base/trace
directory on the board.
Python API¶
The Python API for the peripherals in the base overlay is covered in PYNQ Libraries. Example notebooks are also provided on the board to show how to use the base overlay.
Rebuilding the Overlay¶
The project files for the overlays can be found here:
<PYNQ repository>/boards/<board>/base
Linux¶
A Makefile is provided to rebuild the base overlay in Linux. The Makefile calls two tcl files. The first Tcl files compiles any HLS IP used in the design. The second Tcl builds the overlay.
To rebuild the overlay, source the Xilinx tools first. Then assuming PYNQ has been cloned:
cd <PYNQ repository>/boards/Pynq-Z2/base make
Windows¶
In Windows, the two Tcl files can be sourced in Vivado to rebuild the overlay. The Tcl files to rebuild the overlay can be sourced from the Vivado GUI, or from the Vivado Tcl Shell (command line)./base source ./build_base_ip.tcl source ./base.tcl
To build from the command line, open the Vivado 2017.4 Tcl Shell, and run the following:
cd <PYNQ repository>/boards/Pynq-Z2/base vivado -mode batch -source build_base_ip.tcl vivado -mode batch -source base.tcl
Note that you must change to the overlay directory, as the tcl files has relative paths that will break if sourced from a different location. | https://pynq.readthedocs.io/en/v2.7.0/pynq_overlays/pynqz2/pynqz2_base_overlay.html | 2022-05-16T15:28:50 | CC-MAIN-2022-21 | 1652662510138.6 | [] | pynq.readthedocs.io |
, there does not yet exist a HelloWorld menu type. Adding this functionnality next article how translation is provided.>
Content of your code directory
Create a compressed file of this directory or directly download the archive and install it using the extension manager of Joomla!1.6. You can add a menu item of this component using the menu manager in the backend. | http://docs.joomla.org/index.php?title=Developing_a_Model-View-Controller_Component/2.5/Adding_a_menu_type_to_the_site_part&diff=17529&oldid=17501 | 2014-09-16T06:03:06 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.joomla.org |
.
Joomla! is software that lets you make and update web pages easily.
You can think of a Joomla! website as bringing together three elements.:.
Most of this content applies to Joomla! 1.5 only..
Check out the Converting an existing website to a Joomla! website guide for great step by step instructions.! | http://docs.joomla.org/index.php?title=Portal:Beginners&diff=prev&oldid=63153 | 2014-09-16T05:01:49 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.joomla.org |
Status codes
PAP standard status codes are four digits long. The first digit indicates the type of response (for example, a 1 indicates success, a 2 indicates a push initiator error, and so on.) RIM specific requests include a 2 before the four-digit status code, making it a five-digit status code. For example, a value of 2000 indicates an invalid standard request, but a value of 22000 indicates an invalid RIM specific request. When there is an error, it is typically because of a request with bad syntax or a request that cannot be fulfilled.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/es-es/developers/deliverables/51382/pap_response_status_codes_607260_11.jsp | 2014-09-16T05:21:33 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.blackberry.com |
This.
A chunk colheader chunk is very easy. However, please be careful to following the naming convention so that others will be able to re-use your chunks.
The steps for creating a new chunk are as follows:
The key idea for naming conventions is to be consistent in how we name chunks so we don't end up with duplicates. Here are a few guidelines: | http://docs.joomla.org/index.php?title=Using_chunks_in_Joomla_help_screens&oldid=102979 | 2014-09-16T05:34:25 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.joomla.org |
Compute the standard deviation along the specified axis.
Returns the standard deviation, a measure of the spread of a distribution, of the array elements. The standard deviation is computed for the flattened array by default, otherwise over the specified axis.
See also
Notes
The standard deviation is the square root of the average of the squared deviations from the mean, i.e., var = sqrt(mean(abs(x - x.mean())**2)).
The mean is normally calculated as x.sum() / N, where N = len(x). If, however, ddof is specified, the divisor N - ddof is used instead.
Note that, for complex numbers, std takes the absolute value before squaring, so that the result is always real and nonnegative.
Examples
>>> a = np.array([[1, 2], [3, 4]]) >>> np.std(a) 1.1180339887498949 >>> np.std(a, 0) array([ 1., 1.]) >>> np.std(a, 1) array([ 0.5, 0.5]) | http://docs.scipy.org/doc/numpy-1.3.x/reference/generated/numpy.std.html | 2014-09-16T04:55:42 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.scipy.org |
Manage iOS, Android, and BlackBerry devices
Find planning, installation, administration, and quick reference guides and tutorials to help you manage BlackBerry, iOS, and Android mobile devices in your organization.
Read a quick introduction to our mobility management solutions.
The Android robot is reproduced or modified from work created and shared by Google and used according to terms described in the Creative Commons 3.0 Attribution License. | http://docs.blackberry.com/en/admin/?userType=2&LID=au:bb:business:resources&LPOS=au:bb:business | 2014-09-16T05:16:53 | CC-MAIN-2014-41 | 1410657113000.87 | [array(['mainImage_lineup.png',
'With business software from BlackBerry, you can manage BlackBerry, Android, and iOS devices such as iPhone and iPad. With business software from BlackBerry, you can manage BlackBerry, Android, and iOS devices such as iPhone and iPad.'],
dtype=object) ] | docs.blackberry.com |
Utility functions and classes available for use by Controllers
Pylons subclasses the WebOb webob.Request and webob.Response classes to provide backwards compatible functions for earlier versions of Pylons as well as add a few helper functions to assist with signed cookies.
For reference use, refer to the Request and Response below.
Functions available:
abort(), forward(), etag_cache(), mimetype() and redirect()
Bases: webob.request.BaseRequest
WebOb Request subclass
The WebOb webob.Request has no charset, or other defaults. This subclass adds defaults, along with several methods for backwards compatibility with paste.wsgiwrappers.WSGIRequest.
Legacy method to return the webob.Request.accept_charset
Extract a signed cookie of name from the request
The cookie is expected to have been created with Response.signed_cookie, and the secret should be the same as the one used to sign it.
Any failure in the signature of the data will result in None being returned.
Bases: webob.response.Response
WebOb Response subclass
The WebOb Response has no default content type, or error defaults. This subclass adds defaults, along with several methods for backwards compatibility with paste.wsgiwrappers.WSGIResponse.
The body of the response, as a str..
Aborts the request immediately by returning an HTTP exception
In the event that the status_code is a 300 series error, the detail attribute will be used as the Location header should one not be specified in the headers attribute..
Suggested use is within a Controller Action like so:
import pylons class YourController(BaseController): def index(self): etag_cache(key=1) return render('/splash.mako')
Note
This works because etag_cache will raise an HTTPNotModified exception if the ETag received matches the key provided.
Forward the request to a WSGI application. Returns its response.
return forward(FileApp('filename'))
Raises a redirect exception to the specified URL
Optionally, a code variable may be passed with the status code of the redirect, ie:
redirect(url(controller='home', action='index'), code=303) | http://docs.pylonsproject.org/projects/pylons-webframework/en/latest/modules/controllers_util.html | 2014-09-16T04:56:59 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.pylonsproject.org |
Reload the BlackBerry Device Software using the BlackBerry Desktop Software
Before you reload the BlackBerry Device Software, download and install the latest version of the BlackBerry Desktop Software.
You may need to reload the BlackBerry Device Software to resolve a technical issue. It might take up to an hour to reload the BlackBerry Device Software. During that time, you can't disconnect your BlackBerry smartphone from your computer.
Note: You might not be able to back up your smartphone data if you are reloading the BlackBerry Device Software to resolve a technical issue, such as an application error.
- Connect your smartphone to your computer.
- Open the BlackBerry Desktop Software.
- In the BlackBerry Desktop Software, click Update.
- To download the most recent version of your BlackBerry Device Software, click Get update.
- Do any of the following:
- To keep a backup file of your smartphone data and settings, select the Back up your device data checkbox. This backup file is restored to your smartphone after the software reload finishes. If you don't back up your data, your smartphone data, settings, and email messages are deleted from your smartphone when you reload the software.
- To encrypt your backup data, click Encrypt backup file.
- If you want to receive an email when an updated version of the BlackBerry Device Software is available, select the Email me when new versions are available checkbox.
- Click Install update.
Was this information helpful? Send us your comments. | http://docs.blackberry.com/en/smartphone_users/deliverables/42557/1525826.jsp | 2014-09-16T05:17:20 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.blackberry.com |
All files in the SVN repository should have the Unix style (LF) line endings. This is controlled via the Subversion "eol-style=LF" property. In addition, most files (.php, .css, .js, .ini) should have the "keywords=Id" property set. This is what allows Subversion to record the "@version" tag in the Doc block at the beginning of each file. For example,
* @version $Id: index.php 14276 2010-01-18 14:20:28Z louis $.
You only need to set these properties when you create new files and add them to version control. If you are editing existing files that are already under version control, the properties should already be set correctly.
You can set file properties either manually or you can configure Subversion to do this automatically each time you add a file to version control.
Subversion uses a configuration file to control automatic property settings. For Mac or Linux users, this file is called
~/.subversion/config. For Windows users, this file is called
C:\Documents and Settings\<your user name>\Application Data\Subversion\config or
C:\Users\<your user name>\AppData\Roaming\Subversion\config. For TortoiseSVN, you can edit the configuration file right from the program. Right-click on a project, select TortoiseSVN→Settings and click on the "Edit" button next to "Subversion configuration file:".
Edit the configuration file as shown below. At the end of the file, you will see a section called
[auto-props]. Be sure to set the
enable-auto-props = yes. Also, note that "Id" is upper-case "I" and lower-case "d".
### Section for configuring automatic properties. enable-auto-props = yes [auto-props] ... *.php = svn:eol-style=LF;svn:keywords=Id *.html = svn:eol-style=LF *.xml = svn:eol-style=LF *.ini = svn:eol-style=LF;svn:keywords=Id *.css = svn:eol-style=LF;svn:keywords=Id *.js = svn:eol-style=LF;svn:keywords=Id
You can test that the change was successful by creating a new file of the desired type and adding it to version control. Then examine the SVN properties for this file and you should see the new properties automatically. See below for instructions on examining the properties.. | http://docs.joomla.org/The_use_of_Subversion | 2014-09-16T06:13:30 | CC-MAIN-2014-41 | 1410657113000.87 | [] | docs.joomla.org |
You can select the same planning folder in all three swimlanes. This enables an efficient and faster way of planning a sprint simultaneously for three different teams residing in the same planning folder. You can also select the same planning folder and same team in all three swimlanes for easy handling of artifact cards.
View Artifact Cards
Click TRACKERS from the Project Home menu.
In the List Trackers, Planning Folders and Teams page, click PLAN. A Planning Board for the current project context is displayed.
Select a planning folder from the drop-down list.Note: You can select the same planning folder in all three swimlanes.
View the artifact cards based on your requirement:
Note: If there are no teams created for a project, then the
Select a team from the Select a Team drop-down list to view artifacts of a specific team.
Select None to view unassigned artifacts.
Select All artifacts to view all the artifact cards residing in the selected planning folder.
Select a Teamdrop-down list is not displayed at all.
Assign Artifacts to a Team
You can select an unassigned artifact from one swimlane and drag it to another where you have a team selected. | https://docs.collab.net/teamforge182/pb_teamview.html | 2021-01-16T06:48:53 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.collab.net |
Types of Mail Merge Operation
The main idea of mail merge is to automatically create a document or multiple documents based on your template and data fetched from your data source. Aspose.Words allows you to perform two different types of mail merge operations: simple mail merge and mail merge with regions.
The most common example of using simple mail merge is when you want to send a document for different clients by including their names at the beginning of the document. To do this, you need to create merge fields such as First Name and Last Name in your template, and then fill them in with data from your data source. Whereas the most common example of using mail merge with regions is when you want to send a document that includes specific orders with the list of all items within each order. To do this, you will need to create merge regions inside your template – own region for each order, in order to fill it with all required data for the items.
The main difference between both merge operations is that simple mail merge (without regions) repeats the entire document per each data source record, whereas mail merge with regions repeats only designated regions per record. You can think of a simple mail merge operation as a particular case of merge with regions where the only region is the whole document.
Simple Mail Merge Operation
A simple mail merge is used to fill the mail merge fields inside your template with the required data from your data source (single table representation). So it is similar to the classic mail merge in Microsoft Word.
You can add one or more merge fields in your template and then execute the simple mail merge operation. It is recommended to use it if your template does not contain any merge regions.
The main limitation of using this type is the whole document content will be repeated for each record in the data source.
How to Execute a Simple Mail Merge Operation
Once your template is ready, you can start performing the simple mail merge operation. Aspose.Words allows you to execute a simple mail merge operation using different Execute methods that accept various data objects as the data source.
The following code example shows how to execute a simple mail merge operation using one of the Execute method:
You can notice the difference between the document before executing simple mail merge:
And after executing simple mail merge:
How to Create Multiple Merged Documents
In Aspose.Words, the standard mail merge operation fills only a single document with content from your data source. So, you will need to execute the mail merge operation multiple times to create multiple merged documents as an output.
The following code example shows how to generate multiple merged documents during a mail merge operation:
Mail Merge with Regions
You can create different regions in your template to have special areas that you can simply fill with your data. Use the mail merge with regions if you want to insert tables, rows with repeating data to make your documents dynamically grow by specifying those regions within your template.
You can create nested (child) regions as well as merge regions. The main advantage of using this type is to dynamically increase parts inside a document. See more details in the next article “Nested Mail Merge with Regions”.
How to Execute Mail Merge with Regions
A mail merge region is a specific part inside a document that has a start point and an end point. Both points are represented as mail merge fields that have specific names “TableStart:XXX” and “TableEnd:XXX”. All content that is included in a mail merge region will automatically be repeated for every record in the data source.
Aspose.Words allows you to execute mail merge with regions using different Execute methods that accept various data objects as the data source.
As a first step, we need to create the DataSet to pass it later as an input parameter to the ExecuteWithRegions method:
The following code example shows how to execute mail merge with regions using the ExecuteWithRegions(DataSet) method:
You can notice the difference between the document before executing mail merge with regions:
And after executing mail merge with regions:
Limitations of Mail Merge with Regions
There are some important points that you need to consider when performing a mail merge with regions:
- The start point TableStart:Orders and the end point TableEnd:Orders both need to be in the same row or cell. For example, if you start a merge region in a cell of a table, you must end the merge region in the same row as the first cell.
- The merge field name must match the column’s name in your DataTable. Unless you have specified mapped fields, the mail merge with regions will not be successful for any merge field that has a different name than the column’s name.
If one of these rules is broken, you will get unexpected results or an exception may be thrown. | https://docs.aspose.com/words/java/types-of-mail-merge-operations/ | 2021-01-16T05:37:18 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['execute_simple_mail_merge_1.png', 'simple_mail_merge_template'],
dtype=object)
array(['execute_simple_mail_merge_2.png', 'execute_simple_mail_merge'],
dtype=object)
array(['execute_mail_merge_with_regions_1.png',
'mail_merge_with_regions_template'], dtype=object)
array(['execute_mail_merge_with_regions_2.png',
'mail_merge_with_regions_execute'], dtype=object)] | docs.aspose.com |
Marketing Assist
Last Updated:
Overview: Marketing Assist is a co-branded email drip campaign that speaks to specific problems a potential client may be experiencing, and aims to help you stay at the forefront of their mind until they make a final decision. Marketing Assist is totally optional, and disabled on any account by default. If enabled, Marketing Assist emails will be sent to an account's provided contact email, and will link back to the Marketing Assist Call-To-Action URL provided by the user in their Account Settings (see: Configure IQMAP Settings).
Process:
To enable Marketing Assist for an account, open the account's details and ensure that contact information is provided. From here, the 'Marketing Assist' toggle can be enabled.
Applies to: CloudConnect Partners | https://docs.cloudconnect.net/display/HOME/Marketing+Assist | 2021-01-16T05:17:16 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.cloudconnect.net |
TeamForge 19.2 Product Release
TeamForge 19.2 is now available.
TeamForge 19.2 is out in the market to make Agile Lifecycle Management much easier. TeamForge 19.2 brings you many more salient features and fixes, which include:
TeamForge Baseline
- Create a New Baseline from Approved Baselines
- Support for
index.htmlFile in Baseline Package
- Export to Excel to Compare Baselines
Trackers
- Show Pending and Obsolete Releases
Documents
- Review Tab Enhancements
- Show Review Notification Message in Review Tab
- Command Buttons Replaced with Icons
File Releases
- New Status “Obsolete” Added to the “Edit Release” Page
Integrations
- TeamForge—Nexus 2 Integration is No Longer Supported
- TeamForge—TestLink Integration Supported by Webhooks-based Event Broker
- TeamForge Maven Deploy Plugin
GitAgile™—Enterprise Version Control
- Git Repository Creation Simplified
- Import Git Repository into TeamForge from the Code Browser
- Add Files to Git Repository from the Code Browser
- Show Old and New Images in Commit and Code Review Diffs
- No support for HA Setup for Gerrit
- History Protection Email Template
For more information, see TeamForge 19.2 Release Notes. | https://docs.collab.net/teamforge192.html | 2021-01-16T05:57:34 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.collab.net |
Chain Sync
Read (part of) this in other languages: Korean. Read about fast sync in other languages: Español, Korean, 简体中文.
We describe here the different methods used by a new node when joining the network to catch up with the latest chain state. We start with reminding the reader of the following assumptions, which are all characteristics of Grin or Mimblewimble:
- All block headers include the root hash of all unspent outputs in the chain at the time of that block.
- Inputs or outputs cannot be tampered with or forged without invalidating the whole block state.
We intentionally only focus on major node types and high level algorithms that may impact the security model. Detailed heuristics that can provide some additional improvements (like header first), while useful, will not be mentioned in this document.
Full History Syncing
This model is the one used by full nodes on most major public blockchains. The new node has prior knowledge of the genesis block. It connects to other peers in the network and starts asking for blocks until it reaches the latest block known to its peers.
The security model here is similar to Bitcoin. We're able to verify the whole chain, the total work, the validity of each block, their full content, etc. In addition, with Mimblewimble and full UTXO set commitments, even more integrity validation can be performed.
We do not try to do any space or bandwidth optimization in this mode (for example, once validated the range proofs could possibly be deleted). The point here is to provide history archival and allow later checks and verifications to be made.
However, such full history sync, also called Initial Block Download (IBD), is unnecessary for a new node to fully validate the Grin chain history, as most blocks may be only partially downloaded.
Fast Sync
We call
fast sync the process of synchronizing a new node, or a node that hasn't been keeping up with the chain for a while, and bringing it up to the latest known most-worked block. In this model we try to optimize for very fast syncing while sacrificing as little security assumptions as possible. As a matter of fact, the security model is almost identical as a full history sync, despite requiring orders of magnitude less data to download.
At a high level, a fast-sync goes through the following process:
- Sync all block headers on the most worked chain as advertized by other nodes. Also, pick a header sufficiently back from the chain head. This is called the node horizon as it's the furthest a node can reorganize its chain on a new fork if it were to occur without triggering another new full sync.
- Once all headers have are synced, Download the full state as it was at the horizon, including the unspent output, range proof and kernel data, as well as all corresponding MMRs. This is just one large zip file named
txhashset.
- Validate the full state.
- Download full blocks since the horizon to reach the chain head.
A new node is pre-configured with a horizon
Z, which is a distance in number of blocks from the head. For example, if horizon
Z=5000 and the head is at height
H=23000, the block at horizon is the block at height
h=18000 on the most worked chain.
The new node also has prior knowledge of the genesis block. It connects to other
peers and learns about the head of the most worked chain. It asks for the block
header at the horizon block, requiring peer agreement. If consensus is not reached at
h = H - Z, the node gradually increases the horizon
Z, moving
h backward until consensus is reached. Then it gets the full UTXO set at the horizon block.
With this information it can verify:
- The total difficulty on that chain (present in all block headers).
- The sum of all UTXO commitments equals the expected money supply.
- The root hash of all UTXOs match the root hash in the block header.
Once the validation is done, the peer can download and validate the blocks content from the horizon up to the head.
While this algorithm still works for very low values of
Z (or in the extreme case
where
Z=1), low values may be problematic due to the normal forking activity that
can occur on any blockchain. To prevent those problems and to increase the amount
of locally validated work, we recommend values of
Z of at least a few days worth
of blocks, up to a few weeks.
Security Discussion
While this sync mode is simple to describe, it may seem non-obvious how it still can be secure. We describe here some possible attacks, how they're defeated and other possible failure scenarios.
An attacker tries to forge the state at horizon
This range of attacks attempt to have a node believe it is properly synchronized with the network when it's actually is in a forged state. Multiple strategies can be attempted:
- Completely fake but valid horizon state (including header and proof of work). Assuming at least one honest peer, neither the UTXO set root hash nor the block hash will match other peers' horizon states.
- Valid block header but faked UTXO set. The UTXO set root hash from the header will not match what the node calculates from the received UTXO set itself.
- Completely valid block with fake total difficulty, which could lead the node down a fake fork. The block hash changes if the total difficulty is changed, no honest peer will produce a valid head for that hash.
A fork occurs that's older than the local UTXO history
Our node downloaded the full UTXO set at horizon height. If a fork occurs on a block
at an older horizon H+delta, the UTXO set can't be validated. In this situation the
node has no choice but to put itself back in sync mode with a new horizon of
Z'=Z+delta.
Note that an alternate fork at Z+delta that has less work than our current head can safely be ignored, only a winning fork of total work greater than our head would. To do this resolution, every block header includes the total chain difficulty up to that block.
The chain is permanently forked
If a hard fork occurs, the network may become split, forcing new nodes to always push their horizon back to when the hard fork occurred. While this is not a problem for short-term hard forks, it may become an issue for long-term or permanent forks. To prevent this situation, peers should always be checked for hard fork related capabilities (a bitmask of features a peer exposes) on connection.
Several nodes continuously give fake horizon blocks
If a peer can't reach consensus on the header at h, it gradually moves back. In the degenerate case, rogue peers could force all new peers to always become full nodes (move back until genesis) by systematically preventing consensus and feeding fake headers.
While this is a valid issue, several mitigation strategies exist:
- Peers must still provide valid block headers at horizon
Z. This includes the proof of work.
- A group of block headers around the horizon could be asked to increase the cost of the attack.
- Differing block headers providing a proof of work significantly lower could be rejected.
- The user or node operator may be asked to confirm a block hash.
- In last resort, if none of the above strategies are effective, checkpoints could be used. | https://docs.grin.mw/wiki/chain-state/chain-sync/ | 2021-01-16T05:51:09 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
What was decided upon? (e.g. what has been updated or changed?) Left 930 field — provider location — blank in P2E migration form
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) Because VUL doesn’t traditionally use the 930 field
Who decided this? (e.g. what unit/group) E-Resources
When was this decided? Pre-implementation
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/11/p2e-migration-form-provider-location/ | 2021-01-16T06:17:15 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
two jobs (one to generate the pipeline dynamically, and one to run the generated jobs), similar to this one:
stages: [generate, build] generate-pipeline: stage: generate tags: - <custom-tag> script: - spack env activate --without-view . - spack ci generate --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml" artifacts: paths: - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml" build-jobs: stage: build trigger: include: - artifact: "jobs_scratch_dir/pipeline.yml" job: generate-pipeline strategy: depend to
run the pipeline generation phase (this is implemented in the
spack ci generate
command, which assumes the runner has an appropriate version of spack installed
and configured for use). Of course, there are many’s pipelines are now making use of the
trigger syntax to run
dynamically generated
child pipelines.
Note that the use of dynamic child pipelines requires running Gitlab version
>= 12.9.
Spack commands supporting pipelines¶
Spack provides a command
ci with two sub-commands:
spack ci generate generates
a pipeline (a .gitlab-ci.yml file) from a spack environment, and
spack ci rebuild
checks a spec against a remote mirror and possibly rebuilds it from source and updates
the binary mirror with the latest built package. Both
spack ci ... commands must
be run from within the same generate¶
Concretizes the specs in the active environment, stages them (as described in
Summary of .gitlab-ci.yml generation algorithm), and writes the resulting
.gitlab-ci.yml to disk.
This sub-command takes two arguments, but the most useful is
--output-file,
which should be an absolute path (including file name) to the generated
pipeline, if the default (
./.gitlab-ci.yml) is not desired.
spack ci rebuild¶
This sub-command is responsible for ensuring a single spec from the release
environment is up to date on the remote mirror configured in the environment,
and as such, corresponds to a single job in the
.gitlab-ci.yml file.
Rather than taking command-line arguments, this sub-command expects information
to be communicated via environment variables, which will typically come via the
.gitlab-ci.yml job as
variables.
A pipeline-enabled spack environment¶
Here’s an example of a spack environment file that has been enhanced with sections describingube image: spack/ubuntu-bionic - match: - os=centos7 runner-attributes: tags: - spack-kube image: spack/centos7.
Both the top-level
gitlab-ci section as well as each
runner-attributes
section can also contain the following keys:
image,
variables,
before_script,
script, and
after_script. If any of these keys are
provided at the
gitlab-ci level, they will be used as the defaults for any
runner-attributes, unless they are overridden in those sections. Specifying
any of these keys at the
runner-attributes level generally overrides the
keys specified at the higher level, with a couple exceptions. Any
variables
specified at both levels result in those dictionaries getting merged in the
resulting generated job, and any duplicate variable names get assigned the value
provided in the specific
runner-attributes. If
tags are specified both
at the
gitlab-ci level as well as the
runner-attributes level, then the
lists of tags are combined, and any duplicates are removed.
See the section below on using a custom spack for an example of how these keys could be used..
Take a look at the schema for the gitlab-ci section of the spack environment file, to see precisely what syntax is allowed there..). Any
variables provided here will be added, verbatim, to
each job.
The
runner-attributes section also allows users to supply custom
script,
before_script, and
after_script sections to be applied to every job
scheduled on that runner. This allows users to do any custom preparation or
cleanup tasks that fit their particular workflow, as well as completely
customize the rebuilding of a spec if they so choose. Spack will not generate
a
before_script or
after_script for jobs, but if you do not provide
a custom
script, spack will generate one for you that assumes your
spack.yaml is at the root of the repository, activates that environment for
you, and invokes
spack ci rebuild., spack assumes ...
The example above adds a list to the
definitions called
compiler-pkgs
(you can add any number of these), which lists compiler packages that should
be staged ahead of the full matrix of release specs (in this example, only
readline). Then within the
gitlab-ci section, note the addition of section provides an example of how you could take advantage of
user-provided pipeline scripts to accomplish this fairly simply. First, you
could use the GitLab user interface to create CI environment variables
containing the url and branch or tag you want to use (calling them, for
example,
SPACK_REPO and
SPACK_REF), then refer to those in a custom shell
script invoked both from your pipeline generation job, as well as in your rebuild
jobs. Here’s the
generate-pipeline job from the top of this document,
updated to invoke a custom shell script that will clone and source a custom
spack:
generate-pipeline: tags: - <some-other-tag> before_script: - ./cloneSpack.sh script: - spack env activate --without-view . - spack ci generate --output-file "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml" after_script: - rm -rf ./spack artifacts: paths: - "${CI_PROJECT_DIR}/jobs_scratch_dir/pipeline.yml"
And the
cloneSpack.sh script could contain:
#!/bin/bash git clone ${SPACK_REPO} pushd ./spack git checkout ${SPACK_REF} popd . "./spack/share/spack/setup-env.sh" spack --version
Finally, you would also want your generated rebuild jobs to clone that version
of spack, so you would update your
spack.yaml from above as follows:
spack: ... gitlab-ci: mappings: - match: - os=ubuntu18.04 runner-attributes: tags: - spack-kube image: spack/ubuntu-bionic before_script: - ./cloneSpack.sh script: - spack env activate --without-view . - spack -d ci rebuild after_script: - rm -rf ./spack
Now all of the generated rebuild jobs will use the same shell script to clone
spack before running their actual workload. Note in the above example the
provision of a custom
script section. The reason for this is to run
spack ci rebuild in debug mode to get more information when builds fail.
Now imagine you have long pipelines with many specs to be built, and you
are pointing to a spack repository and branch that has a tendency to change
frequently, such as the main repo and it’s
develop branch. If each child
job checks out the
develop branch, that could result in some jobs running
with one SHA of spack, while later jobs run with another. To help avoid this
issue, the pipeline generation process saves global variables called
SPACK_VERSION and
SPACK_CHECKOUT_VERSION that capture the version
of spack used to generate the pipeline. While the
SPACK_VERSION variable
simply contains the human-readable value produced by
spack -V at pipeline
generation time, the
SPACK_CHECKOUT_VERSION variable can be used in a
git checkout command to make sure all child jobs checkout the same version
of spack used to generate the pipeline. To take advantage of this, you could
simply replace
git checkout ${SPACK_REF} in the example
cloneSpack.sh
script above with
git checkout ${SPACK_CHECKOUT_VERSION}.
On the other hand, if you’re pointing to a spack repository and branch under your
control, there may be no benefit in using the captured
SPACK_CHECKOUT_VERSION,
and you can instead just clone using the project CI variables you set (in the
earlier example these were
SPACK_REPO and
SPACK_REF).. | https://spack.readthedocs.io/en/latest/pipelines.html | 2021-01-16T06:06:33 | CC-MAIN-2021-04 | 1610703500028.5 | [] | spack.readthedocs.io |
Asynchronous server¶
Authentication¶
In order to use the asynchronous server, you must first authenticate through the API. This will give you a cookie that is valid for 2 minutes and that has to be used when connecting to the server.
Example request:
POST /2.0/accounts/action/?do=authenticate_asynchronous HTTP/1.1 Host: api.cloudsigma.com Accept: application/json Authorization: Basic dGVzdHVzZXJAY2xvdWRzaWdtYS5jb206dmJudmJu
Example response:
HTTP/1.0 200 OK Content-Type: application/json; charset=utf-8 Set-Cookie: async_auth=YTlhZmMwYTctOWYzNi00ZmUzLThlYmUtMGZiOGZlODE0ZmQx|1356012032|f785e3d8083c7666209e54477652de0d057f0791; expires=Thu, 20-Dec-2012 14:02:32 GMT; Max-Age=120; Path=/ {}
Using this cookie will allow you to connect to the server at /2.0/websocket
Information¶
The websocket server sends frames every time one of the following changes:
- The frame is a JSON object that contains the following fields:
- resource_type: A text field that describes the type of resource covered by the notification.
- resource_uri: The URI of the resource that has changed.
The JSON object might contain a ‘object’ key, that will contain the full blown resource referenced by the notification, JSON encoded. | https://cloudsigma-docs.readthedocs.io/en/latest/async.html | 2021-01-16T05:37:33 | CC-MAIN-2021-04 | 1610703500028.5 | [] | cloudsigma-docs.readthedocs.io |
Apache Knox Overview Securing Access to Hadoop Cluster: Apache KnoxT.Apache Knox Gateway OverviewA conceptual overview of the Apache Knox Gateway, a reverse proxy.Knox Supported Services MatrixA support matrix showing which services Apache Knox supports for Proxy and SSO, for both Kerberized and Non-Kerberized clusters.Knox Topology Management in Cloudera ManagerIn CDP Private Cloud, you can manage Apache Knox topologies via Cloudera Manager using cdp-proxy and cdp-proxy-api. | https://docs.cloudera.com/cdp-private-cloud-base/7.1.4/knox-authentication/topics/security-knox-overview.html | 2021-01-16T06:28:50 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.cloudera.com |
CleverTap
ON THIS PAGE
Hevo lets you load your CleverTap events to your data warehouse..
Connection Settings.
CleverTap doesn’t provide an API to list all distinct event names nor does it provide an API to download all events at one go. Hence, Hevo needs you to provide a comma-separated list of all events you want to capture.
How it works
Hevo uses CleverTap’s Get Events API to fetch download and ingest raw events from CleverTap. Please note that the API provides data on a daily granularity and hence, Hevo fetches data at 3 AM (UTC) every day for the previous day.
If you wish to ingest data from CleverTap in real-time, you can create a Pipeline with a Webhook Type Source in Hevo and use CleverTap webhooks. | https://docs.hevodata.com/sources/analytics/clevertap-source/ | 2021-01-16T05:27:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.hevodata.com |
How-to articles, tricks, and solutions about REGEX
How to Check If a String Contains Another Substring in JavaScript
Read this tutorial and learn several methods used to check if a string contains another substring in JavaScript or not. Read information and see examples.
How to Check Whether a String Matches a RegEx in JavaScript
Read and learn two methods used to check whether a string matches RegEx in JavaScript. Also, see which method is the fastest and differences between them.
How to Compare Strings in Java
The comparison of strings is one of the mostly used Java operations. If you’re looking for different ways to compare two strings in Java, you’re at the right place.
How to Count String Occurrence in String
Read this tutorial and learn several useful methods that are used to count the string occurrence in a string. Choose one of the working methods for you.
How to HTML-encode a String
Read this JavaScript tutorial and learn about some useful and fast methods that help you to HTML-encode the string without causing the XSS vulnerability.
How to Remove Spaces From String
Read this JavaScript tutorial and learn several methods that are used to remove spaces from a string. Also, know which is the fastest method to choose.
How to Remove Text from String
Read this JavaScript tutorial and learn useful information about several easy and fast built-in methods that are used for removing the text from a string.
How to Split a String in Java
Learn the ways to split a string in Java. The most common way is using the String.split () method, also see how to use StringTokenizer and Pattern.compile ().. | https://www.w3docs.com/snippets-tags/regex | 2021-01-16T06:51:49 | CC-MAIN-2021-04 | 1610703500028.5 | [] | www.w3docs.com |
« Previous Tour
Next Tour »
We've added this layer to hold the Theme Press Tour Guide.
changes.mady.by.user Nichelle Green
Saved on Dec 29, 2019
To fully use this screen, you must have the following permissions added to your account: : | https://docs.armor.com/pages/diffpages.action?pageId=64913518&originalId=57738414 | 2021-01-16T05:26:31 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.armor.com |
Configure device pools for AARI on the web You can configure a device pool in the Enterprise Control Room to use in AARI. Prerequisites You must create a custom role in the Enterprise Control Room. Log in to the Enterprise Control Room as an Enterprise Control Room admin. Navigate to Administration > Roles. Create a custom role (for example: AARI-pool-scheduler).Create and assign roles Set your permissions to View my bots, View packages and Run my bots. Select your unattended users in the Run As section. Select the AARI admin and unattended users in the User section. Save your changes. Create a user (for example AARI-scheduler-user) and assign the custom role (AARI-pool-scheduler) to this user. ProcedureFollow these steps to create a device pool in the Enterprise Control Room and configure it for AARI on the web. Log in to the Enterprise Control Room as an AARI admin. AARI admins can create device pools in the Enterprise Control Room. Navigate to Devices > My device pools. Create a device pool (for example: AARI-pool). Create device pools Note: The owner of the device pool is the AARI admin. At least one device pool is required for configuring AARI on the web. Select your custom role (AARI-pool-scheduler) in Device Pool Consumers. Save your changes. Click configuration page to navigate to the web interface. Click Edit. Enter the Username (for example, AARI-scheduler-user) and click Authorize. The AARI process uses the device pool and unattended Bot Runners configured with this user for each bot deployment. | https://docs.automationanywhere.com/bundle/enterprise-v2019/page/enterprise-cloud/topics/hbc/cloud-hbc-device-pool-configuration.html | 2021-01-16T06:07:12 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.automationanywhere.com |
Distribution-wide Changes
- Fedora Workstation now uses Btrfs by default
- Fedora Workstation edition contains
thermaldby default
- FlexiBLAS enables runtime switching of BLAS/LAPACK backend
nanois a default terminal text editor
- Fedora Internet of Things (IoT) is now an official Fedora Edition
- Increase usage of
%make_buildand
%make_install
- Fedora workstation livecd does not contain
device-mapper-multipath
- .NET Core now available with 64-bit ARM systems
- The
earlyoomservice is now enabled by default in Fedora KDE
dmraid-activation.serviceno longer depends on
systemd-udev-settle.service
- Swap on zRAM
Fedora Workstation now uses Btrfs by default
Btrfs is a native Linux copy-on-write file system. It provides advanced features including error detection, fault tolerance, recovery, transparent compression, cheap snapshots, integrated volume management, and easier administration. Btrfs will be the file system used for new desktop installations.
What’s changing
Use Btrfs instead of LVM+ext4.
/and
/homeare no longer separate file systems, but are on "one big Btrfs file system".
/and
/homeare on Btrfs subvolumes and share the space on the Btrfs volume.
Always-on features
Copy-on-write means data is never overwritten, and the file system stays consistent even in the case of power failures.
Data integrity: Checksumming for all data and metadata ensures corruptions do not propagate.
Efficient copies, also known as filing cloning or efficient copies.
Opt-in features
Subvolumes and snapshots (See also
man btrfs subvolume)
Online scrub (See also
man btrfs scrub)
And also…
It’s still possible to choose other file system layouts in Custom partitioning, including the LVM+ext4 layout
The "Everything" netinstaller will use Btrfs by default. It’s advised that headless and PXE installation use cases should use the Fedora Server netinstaller
Fedora Btrfs landing page
-
-
-
Fedora Workstation edition contains
thermald by default
Modern Intel-based systems provide sensors and methods to monitor and control temperature of their CPUs. The
thermald daemon harnesses those sensors to monitor the CPU temperature. Based on the received data,
thermald uses the best available method to keep the CPU in the right temperature zone.
Fedora Workstation users can now enjoy better out-of-the-box experience due to improved CPU cooling methods and enhanced performance of their Intel systems.
Optionally, users can achieve further performance improvements by using specific per-CPU model
thermald configurations.
FlexiBLAS enables runtime switching of BLAS/LAPACK backend
Basic Linear Algebra Subprograms (BLAS) and Linear Algebra PACKage (LAPACK) are API standards for basic linear algebra operations.
From Fedora 33, the packages that use BLAS and LAPACK APIs will be compiled against FlexiBLAS.
FlexiBLAS is a framework that wraps the BLAS and LAPACK APIs with interfaces for both 32-bit and 64-bit integers.
As a result, FlexiBLAS will set the OpenBLAS standard as the system-wide default backend. At the same time the change will resolve the following issues:
Fedora lacks a system-wide default.
Fedora lacks a proper switching mechanism.
This update also brings the following changes:
Recompilation of all BLAS and LAPACK dependent packages that link against FlexiBLAS instead of the current implementation they are using.
Changing the packaging guidelines to reflect the previous requirement for BLAS and LAPACK consumers. For more details, see the PackagingDrafts/BLAS LAPACK Fedora Wiki page.
nano is a default terminal text editor
In Fedora 33,
nano has been set as the default terminal text editor. See the System Utilities section for more information.
Fedora Internet of Things (IoT) is now an official Fedora Edition
Fedora IoT has been promoted to an official Fedora Edition status, alongside Workstation and Server.
With this enhancement, Fedora IoT becomes more prominent, which will help spread adoption between users.
As a result, this will help drive improvements in Fedora IoT and other ostree-based deliverables. Additionally, It also gives Fedora a strong presence in the IoT ecosystem.
For more details, see:
Increase usage of
%make_build and
%make_install
Many invocations of the
make utility in spec files that use the
%{_smp_mflags} macro have been modified to use the
%make_build macro. All
make invocations that use the install target have been updated to use the
%make_install macro. Any additional arguments to
make that are not included in either
%make_build and
%make_install are preserved.
Packages that already use
%make_build and
%make_install remain unchanged.
This change aims to standardize
make usage, and to facilitate enforcing consistent build flag usage across all Fedora editions.
Fedora workstation livecd does not contain
device-mapper-multipath
The
device-mapper-multipath package requires an obsoleted service
systemd-udev-settle.service in the default install of Fedora. This service waits a long time for detection of all devices. As a result, a system booting is significantly prolonged.
As multipath support is only necessary for installations in data centers or other enterprise setups,
device-mapper-multipath is not needed. Therefore the Fedora workstation livecd will no longer contain
device-mapper-multipath package.
Users which need
device-mapper-multipath are advised to use the server installation.
.NET Core now available with 64-bit ARM systems
.NET Core is now available on the
Aarch64 architecture in addition to
x86_64. See Developers/.NET for more information.
The
earlyoom service is now enabled by default in Fedora KDE RAM goes below 4% free and swap goes below 10% free,
earlyoomsends the
SIGTERMsignal to the process with the largest
oom_score.
If RAM goes below 2% free and swap goes below 5% free,
earlyoomsends the
SIGKILLsignal to the process with the largest
oom_score.
This update brings the following benefits:
Users regain control over their system more quickly.
Reduction of forced poweroff increases data collection and improves understanding of low-memory situations.
The
earlyoomservice first sends
SIGTERMto a selected process, so that it has a chance to shutdown properly.
dmraid-activation.service no longer depends on
systemd-udev-settle.service
Swap on zRAM
Starting with Fedora 33, a swap partition is not created by default at installation time. Instead, a zram device is created, and swap enabled on it during start-up. zram is a RAM drive that uses compression. See
man zram-generator for a brief overview of its function.
The swap-on-zram feature can be disabled with
sudo touch /etc/systemd/zram-generator.conf and reenabled by removing this file, and customized by editing it. See
man zram-generator.conf for configuration information, including a description of the default configuration plus ASCII art.
The installer’s Custom and Advanced-Custom interfaces continue to support the manual creation of disk-based swap.
See the Change proposal for detailed information about the rationale for this feature. | https://docs.fedoraproject.org/hu/fedora/f33/release-notes/sysadmin/Distribution/ | 2021-01-16T06:06:48 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.fedoraproject.org |
Weekly mail integration The weekly mail is a staple of batch integrations for retailers. The goal of the mail is to drive traffic to the website, by personalising these mails the content is tuned to the user, who is therefore more likely to click and go to the website. In this example we cover a Froomle batch integration for an imaginary retailer "Lorem" that shows the user 2 lists. The first one contains items that are interesting for the user from the whole catalog. The second list contains items that have as brand the "Lorem" house brand. To set up the batch integration there are 2 major steps: Getting Batch recommendations Setting up events Getting batch recommendations Getting batch recommendations is done by uploading a request file to the requests folder on the SFTP. For more information on how to use the SFTP see the API Reference section on the batch recommendations API Given that we want to send a mail the users will be identified by their user_id. That is a long lived identifier typically linked to the user’s email address. In this example we will use only 2 users for simplicity, but a typical batch integration has hundreds of thousands of users. The request file will look as follows: "environment","user_id","device_id","list_name","list_size","page_type","category_to_filter" "lorem","AbaT1sagqq1r",,"Recommended for you",4,"weekly_mail", "lorem","CdgAgagewaga",,"Recommended for you",4,"weekly_mail", "lorem","AbaT1sagqq1r",,"From our brand",4,"weekly_mail","Lorem" "lorem","CdgAgagewaga",,"From our brand",4,"weekly_mail","Lorem" Things to note about this request file: Per user 2 lines (1 per list to fill in the mail) with a different list_name field. list_name field matches the titles of the lists that will be shown in the mail. lines per user don’t need to be next to each other, they can be, but it is not required. if the recommendations don’t need to be filtered the category_filter field is empty. device_id fields are empty, this is a mail integration, and therefore not linked to devices. Once the file is uploaded on the SFTP the batch API will start processing it. When ready it will put the response file in the responses folder. This response file will contain a row per item recommended. "user_id","device_id","user_group","version","campaign_id","request_id","list_name","item_id","item_info","rank" "AbaT1sagqq1r",,"froomle","froomle_1","11211211","12411511511","Recommended for you","item_id_1",1 "AbaT1sagqq1r",,"froomle","froomle_1","11211211","12411511511","Recommended for you","item_id_2",2 ... "AbaT1sagqq1r",,"froomle","froomle_1","11211211","12411511511","From our brand","item_id_lorem_7",1 "AbaT1sagqq1r",,"froomle","froomle_1","11211211","12411511511","From our brand","item_id_lorem_9",2 ... "CdgAgagewaga",,"froomle","Control","11211211","52411511541","Recommended for you","item_id_1",1 "CdgAgagewaga",,"froomle","Control","11211211","52411511541","Recommended for you","item_id_10",2 ... Things to note about the response file: user_id, device_id, list_name are exactly the same as the request If you want to collect the recommendations for a user you combine each row with that user’s identifier, split on list_name so that the items are in their respective lists. campaign_id is the same value for each row in the file. This is the identifier for the batch of recommendations generated. In the events this will need to be used. request_id is the same value for each row per user_identifier, this is the identifier of a user’s recommendation. In the events this also needs to be sent to Froomle. rank is a field used for sorting within a list of recommendations. In our example the 2 lists will each have items with ranks 1 - 4. Integration events Using the batch integration introduces a set of specific integration events which you will need to communicate to Froomle. To link these events to the recommendations you will have to use the campaign_id and request_id from the response file in the events. Froomle will use these events to monitor performance and improve the algorithm through AB testing. Batch open When the user opens the mail, a batch open event should be sent to Froomle with the campaign_id used to generate recommendations in the mail. Impression Once a user has opened a mail, the recommendations in that mail will be impressed. At this point a batch impression event for each recommendation should be communicated to Froomle. Batch click on recommendation Once a user clicks on one of the recommendation in the mail a batch click on recommendation has to be sent to Froomle. | https://docs.froomle.com/froomle-doc-main/examples/retailer/batch_recommendations/weekly_mail_integration.html | 2021-01-16T05:13:21 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.froomle.com |
Office 2007 + WPF rocks!:
- MSDN + WPF:
- Book: Applications = Code + Markup: A Guide to the Microsoft Windows Presentation Foundation by Charles Petzold
- Rob Relyea's blog:
Building a Custom Add-in for Outlook 2007 Using Windows Presentation Foundation.
- You can find the article at
- You can download the sample code at (). | https://docs.microsoft.com/en-us/archive/blogs/erikaehrli/office-2007-wpf-rocks | 2021-01-16T06:05:38 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['http://images.amazon.com/images/P/0735619573.01._SCTHUMBZZZ_.jpg',
None], dtype=object) ] | docs.microsoft.com |
Information about maintaining your H-series hardware is available in the NetApp HCI Documentation Center.
Explains how to replace a failed H610S chassis. For H610S, the terms "chassis" and "node" are used interchangeably, because the chassis and node are not separate components.
Explains how to replace the drives for H610S storage nodes.
Replacing a failed power supply unit in an H-series system
Explains how to replace the power supply unit in an H-series chassis.
Explains how to install the rails for H-series nodes. | https://docs.netapp.com/sfe-115/topic/com.netapp.ndc.h610s-ref/GUID-48ED3F02-9A06-497B-94B0-A0E3E1C142A5.html?lang=en | 2021-01-16T05:31:34 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.netapp.com |
A number of objects in RezMela Apps are “sittable” – that is, you can sit on them, and your avatar will be animated.
Some such objects are not actually seats at all. For example, Animation Orbs are spheres that you can “sit on” which could trigger animations such as dancing.
How it works
Seat objects work in different ways depending on whether or not you’re signed into the RezMela App.
If you (or anyone else) are signed in to the App, you’re effectively “edit mode”, and you can select seat objects like any other object, by long-clicking on them.
You can see here how the mouse cursor over the chair is the hand icon (touch), which is normal for objects when you’re signed in to the App:
Clicking the object normally (short click) will normally have no effect unless you have another object selected, or are in “Create” mode. In those cases, the object will be placed on the chair. Of course, you can select the chair by long-clicking in this mode.
When nobody is signed into the App, the seat objects change into their normal mode. You can see this with the mouse cursor, which now the “sit” icon:
So clicking on the chair will seat your avatar. You will also receive a message in chat saying “Click for menu”. If you move your mouse over the chair, the icon is now the “touch” (hand) icon, and you can click the chair to get a menu:
Let’s look at that menu a little closer:
The first six buttons are different animations you can use (you may see different animations depending on what the creator has set up for this object). The “Close” button makes the menu go away.
The “[ ADJUST ]” button loads another menu, in which you can adjust the position of your avatar:
Clicking the buttons on that menu will adjust the avatar’s position. This is only a temporary adjustment, and will not affect other avatars that use the object later. So it’s ideal for adjusting positions for unusually shaped or sized avatars.
Note that not all sittable objects have a menu. Animation orbs, for example, disappear while in use and don’t have a menu. If you need to adjust the position of the avatar in this case, you’ll need to move the orb itself. | https://docs.rezmela.org/seating-in-rezmela/ | 2021-01-16T06:23:32 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['https://docs.rezmela.org/wp-content/uploads/2020/10/Seating-guide-hand-icon.png',
None], dtype=object)
array(['https://docs.rezmela.org/wp-content/uploads/2020/10/Seating-guide-sit-icon.png',
None], dtype=object)
array(['https://docs.rezmela.org/wp-content/uploads/2020/10/Seating-guide-click-for-menu.png',
None], dtype=object)
array(['https://docs.rezmela.org/wp-content/uploads/2020/10/Seating-guide-menu-main.png',
None], dtype=object)
array(['https://docs.rezmela.org/wp-content/uploads/2020/10/Seating-guide-menu-adjust.png',
None], dtype=object) ] | docs.rezmela.org |
Offsetting Trajectories
By offsetting a trajectory,.
-.
| https://docs.toonboom.com/help/harmony-17/premium/motion-path/offset-trajectory.html | 2021-01-16T05:54:43 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/Paths/an_offset1.png', None],
dtype=object)
array(['../Resources/Images/HAR/Stage/Paths/an_offset2.png', None],
dtype=object) ] | docs.toonboom.com |
Overview
The Activity Stream is a feed of recent project activities, listed in reverse chronological order (newest activities on top). There is one activity stream per TeamForge project. The Activity Stream is useful to keep apprised of recent project updates without switching your context.
The Activity Stream includes events from:
- Tracker
- Source Code (Git and Subversion)
- Pull Request and Gerrit code reviews
- All configured EventQ activity sources (for example, Jenkins, JIRA®, Chef, Nexus, Testlink, Reviewboard, and so on)
Usage
The Activity Stream is activated by clicking the “Activity Stream” icon in the TeamForge header (only available in a project context).
Please contact your system Administrator. Something went wrong.
When activated, the Activity Stream expands and lays over the page content, anchored to the right side of the browser. Activities appear in reverse chronological order (newest at the top) and are scrollable. As you scroll near the bottom, additional activities load automatically providing more scrolling real estate (and so on).
Each activity represents a distinct event in TeamForge and integrated tools. When available, you can click links and they will open the object in question in a new tab. You will see a notice in your Activity Stream when new activities occur. Click the notice to new activities.
Click the Activity Stream icon in the header a second time to collapse the Activity Stream. | https://docs.collab.net/teamforge182/activitystream.html | 2021-01-16T06:40:14 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['images/167_as01.png', None], dtype=object)
array(['images/167_as02.png', None], dtype=object)
array(['images/167_as03.png', None], dtype=object)] | docs.collab.net |
Important
You are viewing documentation for an older version of Confluent Platform. For the latest, click here.
Tutorials and Demos¶
Demos¶
- Kafka Event Streaming Application
- This demo shows you how to deploy a Kafka streaming ETL using KSQL for stream processing and Control Center for monitoring, with security enabled end-to-end. You can follow along with the playbook and watch the video tutorials.
- Hybrid Kafka Clusters from Self-Hosted to Confluent Cloud
- This Confluent Cloud demo is the automated version of the KSQL Tutorial, but instead of KSQL stream processing on your local install, it runs on your Confluent Cloud cluster. You can follow along with the playbook.
- Examples of streaming ETLs using Confluent Platform
- End-to-end demo applications using KSQL, Confluent Replicator, Confluent Control Center, and more. Requires Confluent Platform local install.
- Ansible playbooks for Confluent Platform
- Ansible playbooks for deploying Confluent Platform (No Confluent support, community support only).
- Security Tutorial
- This tutorial is a step-by-step guide to configuring Confluent Platform with SSL encryption, SASL authentication, and authorization.
- Designing Event Driven Systems (Microservices) Demo
- This project goes hand in hand with the book ‘Designing Event Driven Systems’, demonstrating how you can build a small microservices application with Kafka and Kafka Streams.
- Introducing KSQL: Streaming SQL for Apache Kafka
Apache Kafka. KSQL runs continuous queries, which are transformations that run persistently as new data passes through them, on streams of data in Kafka topics.
- Real-Time Streaming ETL from Oracle Transactional Data
- Replace batch extracts with event streams, and batch transformations with in-flight transformation of event streams. Take a stream of data from a transactional system built on Oracle, transform it, and stream it into Elasticsearch. Use KSQL to filter streams of events in real-time from a database and join between events from two database tables to create rolling aggregates on the data.
- Write a User Defined Function (UDF) for KSQL
- Build, deploy, and test a user-defined function (UDF) to extend the set of available functions in your KSQL code. Write Java code within the UDF to convert a timestamp from String to BigInt.
- Monitoring Kafka in Confluent Control Center
- Use the KSQL CLI and Confluent Control Center to view streams and throughput of incoming records for persistent KSQL queries.
Examples¶
- Confluent Platform Demos Repository
- End-to-end demos and quickstarts showcasing stream processing on Confluent Platform.
- Kafka Streams Example Repository
- Demo applications and code examples for Apache Kafka’s Streams API.
- Kafka Client Application Examples
- Client examples for Java producers, Java consumers, and other clients connecting to Confluent Cloud..
- Intro to KSQL | Streaming SQL for Apache Kafka
- KSQL is the streaming SQL engine that implements continuous, interactive queries against Apache Kafka. KSQL makes it easy to read, write and process streaming data in real-time, at scale, using SQL-like semantics.
- Neha Narkhede | Kafka Summit 2018 Keynote (The Present and Future of the Streaming Platform)
-. | https://docs.confluent.io/5.1.0/tutorials/index.html | 2021-01-16T06:37:44 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.confluent.io |
Items We consider an item anything that could be offered to an end-user, such as a product for sale, a news article, or a video. Each item should have an identifier, which we expect to receive as a string, under the field ITEM_ID. In addition to an identifier, we like to have additional metadata, such as the title, price, and any other attributes that could allow us to filter, categorise, and search items to your liking. To structurally manage this extra information we offer different metadata templates for each use case. In this section we go over the schema of item metadata. To learn more about the packaging, delivery and usage of item metadata, please refer to the Managing item metadata how to page. Item metadata templates Before sending us your item metadata it is important that it is prepared correctly, so we can parse and ingest it without any issues. To simplify the preparation of your metadata we have a limited number of data templates that you can adhere to. Each template corresponds to a specific use case of the Froomle platform. While not all fields within a template are required, we highly encourage you to provide as many fields as possible within the chosen template, as it will improve your personalization performance. Some fields in our templates are nested which doesn’t map directly to some of our accepted formats (e.g. CSV). For these formats we ask you to include these nested fields as their serialized JSON representation. Base item metadata template As our current 3 templates have a lot in common we created a base template which lists fields that are available in all the other templates. Field Description item_id* string Required. The unique identifier of the item. environment* string Required. The environment to which the item belongs. language* string Required. The metadata’s main language, if an item’s metadata is delivered in multiple languages specify the most dominant language here. The provided value should be a ISO 639-1 standard language code like en, fr, nl, en-us, nl-be, … item_type* string Required. Items is the collection of all different entities on your platform. Item type allows you to further specify which type of entity this item represents. E.g. article, video, profile, category, … title string Optional. The title of the item. If your item contains translations use the title_map field instead. title_map map<string, string> Optional. A mapping of language_code → title. Use this field when your title has multiple translations. Keys in this map should be ISO 639-1 standard language codes like en, fr, nl, en-us, nl-be, … description string Optional. The description of the item. If your item contains translations use the description_map field instead. description_map map<string, string> Optional. A mapping of language_code → description. Use this field when your description has multiple translations. Keys in this map should be ISO 639-1 standard language codes like en, fr, nl, en-us, nl-be, … uri string Optional. The uri of the item. If your item has a resource identifier unique to each language use the uri_map field instead. uri_map map<string, string> Optional. A mapping of language_code → uri. Use this field when your item has resource identifiers unique to each language. Keys in this map should be ISO 639-1 standard language codes like en, fr, nl, en-us, nl-be, … tags list<string> Optional. Tags are often associated with items to provide extra context. While this field is optional it should be clear that tags carry valuable information for personalisation algorithms. images list<string> Optional. If your item contains one or more images you can optionally provide their URLs using this field. parent string Optional. When your items are structured in a hierarchy you can supply the parent identifier using this field. ancestry_level string Optional. When your items are structured in a hierarchy you can specify the level of the provided item in the hierarchy. Valid values are (from highest to lowest in the hierarchy): group, item and model. categories list<list<string>> Optional. As items often belong to a hierarchical category structure you can specify each path within the structure that can be used to reach the current item. Categories within a path should be listed from most general to most specific. Specialized templates For our most popular use cases we have created more specialized templates which extend the base template with extra information tailored to each use case. If your use case maps on one of these specialized templates you can leverage the extra level of detail they provide and improve your personalization experience even more. Retail Template News Template Social Network Template Inherits all the fields from the Base template (see above) + Field Description price* double Required. The retail/listing price of the item. currency_code* string Required. ISO-4217 compliant triple-character currency code. E.g. EUR, USD, GBP, … costs map<string, double> Optional. Extra cost-related monetary amounts like: VAT, profit, purchase price, … stock_state string Optional. The current state of the product’s stock. Valid values are: IN_STOCK, OUT_OF_STOCK, PREORDER and BACKORDER. available_stock integer Optional. Current number of articles in stock for this item. original_price double Optional. The original price of a discounted product. discount_percentage double Optional. The percentage of discount for discounted products. When both price and original_price are specified we can automatically compute this field if you don’t specify it. When you also provide this field along with the original_price and price fields, the provided discount percentage value will also be validated using both price values. Inherits all the fields from the Base template (see above) + Field Description publication_datetime* string Required. ISO-8601 formatted timestamp string of the publication datetime of the news item. end_datetime string Optional. ISO-8601 formatted timestamp string of the end of publication datetime of the news item. access_type string Optional. When access to an article is restricted for certain users, use this field to specify the restriction. Valid values are: free, paid and subscribed. Inherits all the fields from the Base template (see above) + Field Description publication_datetime* string Required. ISO-8601 formatted timestamp string of the publication datetime of the content item. end_datetime string Optional. ISO-8601 formatted timestamp string of the end of publication datetime of the content item. owner string Optional. User identifier of the owner of the content. privacy string Optional. Privacy setting applied to the content piece which controls the visibility of the item. Valid values are: me, friends, friends_of_friends and public. Item types Item types can be used to distinguish items of substantially different kinds, such as news articles from videos, or physical products from services. However, smaller differences such as different kinds of household appliances do not typically warrant separate item types. Usually our customers have no more than a few item types at a time. Fundamentally different kinds of items generally require different kinds of metadata. Therefore, each item type can have its own metadata structure. If there is no such distinction between items in your system, then there will be just one item type, and you don’t have to worry about item types at all. As we mentioned before, each item should have a string identifier. Such an identifier alone doesn’t fully identify an item, however. Items are fully identified by the pair (item_type, item_id). As such, two items of a different item type with the same item ID are considered unrelated. We prefer you communicate these item types explicitly wherever possible. If you are unable to communicate item types to Froomle, contact your Solution Architect to discuss if they can be determined from context. A note about timestamps It can be useful to include timestamps as part of the item metadata. Commonly, time restrictions on the recommendability of an item apply. For instance, we may not be allowed to recommend news articles that are too old, or a soon to be released product may not be allowed to be shown before a certain time. When sharing timestamps between different systems – such as between yours and ours – the time offset is often a cause for confusion. A timestamp without this information (either by context or explicitly given) does not uniquely define a point in time. If you provide us with timestamps, please either: Provide a time offset with it, such that it is a fully qualified timestamp. With the ISO-8601 notation, the preferable format is: yyyy-mm-ddThh:mm:ss+|-hh:mm For example: 2019-09-06T12:00:00+02:00 This way no assumptions need to be made. Provide us timestamps in UTC+00. For any timestamp without any time zone or offset information, we assume UTC+00. | https://docs.froomle.com/froomle-doc-main/concepts/items/item_concepts.html | 2021-01-16T05:28:09 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.froomle.com |
Request the Private Domain Object
Last Updated:
Background: The first step to deploying a PrivateDomain is to request the OrgVDC object from CloudConnect. This can be done by completing the Private Domain Request Form linked below.
Process:
To request a Private Domain, follow this link. You will need your CloudConnect login information.
Applies to: CloudConnect Partners | https://docs.cloudconnect.net/display/HOME/Request+the+Private+Domain+Object | 2021-01-16T06:39:10 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.cloudconnect.net |
0017 fix-fees
- Title: fix-fees
- Authors: John Tromp
- Start date: August 23, 2020
- RFC PR: mimblewimble/grin-rfcs#63
- Tracking issue: mimblewimble/grin/issues/3459
Summary
Change Grin's minimum relay fees to be weight proportional and make output creation cost about a Grin-cent. Restrict fees to 40 bits, using 4 of the freed up 24 bits to specify the desired tx priority, and leaving the remaining 20 bits for future use. NOTE: this is a hard-forking change.
Motivation
The former fee requirements suffer from being somewhat arbitrary, and not miner incentive compatible.
They are not even linear; to avoid a negative minimum fee, they are rounded up to
BASE_FEE.
As a consequence, the minimum fees for the aggregate of two
transactions was not necessarily equal to the sum of the individual ones.
Worse, a miner had no incentive to include a transaction that pays 0 fees, while they take resources to relay.
The current (and foreseeable) low price of Grin makes spamming the UTXO set rather cheaper than desired.
Fee overpaying, for higher priority to be included in full blocks, fails when aggregated with minimal fee transactions.
Community-level explanation
For clarity, let's adopt the following definitions. Let the feerate of
a transaction be the fees paid by the transaction divided by the transaction
weight with regards to block inclusion (
Transaction::weight_as_block).
Let the
minfee of a transaction be the amount of fees required for relay and mempool inclusion.
A transaction paying at least
minfee in fees is said to be relayable.
Grin constrains the size of blocks by a maximum block weight, which is a linear combination of the number of inputs, outputs, and kernels. When blocks fill up, miners are incentivized to pick transactions in decreasing order of feerate. Ideally then, higher feerate transactions are more relayable than lower feerate ones. Having a lower feerate transaction be relayable while a higher feerate transaction is not is very much undesirable and a recipe for problems in a congested network.
The only way to avoid this mismatch between relay and block inclusion incentives is to make the minimum relay fee be proportional to the block weight of the transaction. This leaves only the constant of proportionality to be decided. We can calibrate the fee system by stipulating that creating one output should cost at least one Grin-cent (formerly 0.4 Grin-cent).
We want minfee to be linear to make splitting and combining of fees well-behaved. For instance, two parties in a payjoin may each want pay their own input and output fees, while splitting the kernel fee, without overpaying for the whole transaction.
Only the least significant 40 bits will be used to specify a fee, while some of the remaining bits will specify a minimum fee overpayment factor as a power-of-2, preventing aggregation with lesser overpaying transactions, and earlier inclusion into full blocks.
The largest possible 40-bit fee is 2^40 - 1 nanogrin, or approximately 1099.5 grin, which should be more than enough to guarantee inclusion in the next available block. Technically speaking, transactions can pay higher fees by having multiple kernels, but we still want to avoid that kind of friction. As a side effect, this limits the damage of fat fingering a manually entered fee amount.
Reference-level explanation
The minimum relay fee of a transaction shall be proportional to
Transaction::weight_as_block,
which uses weights of
BLOCK_INPUT_WEIGHT = 1,
BLOCK_OUTPUT_WEIGHT = 21, and
BLOCK_KERNEL_WEIGHT = 3,
which correspond to the nearest multiple of 32 bytes that it takes to serialize.
Formerly, we used
Transaction::weight,
which uses arbitrary weights of -1, 4, and 1 respectively, but also non-linearly rounds negative results up to 1
(as happens when the number of inputs exceeds the number of kernels plus 4 times the number of outputs).
The
Transaction::weight_as_block shall be multiplied by a base fee.
This will not be hardcoded, but configurable in grin-server.toml,
using the already present
accept_fee_base parameter.
There is no reason to use different fees for relay and mempool acceptance.
Its value shall default to
GRIN_BASE / 100 / 20 = 500000, which makes each output
incur just over 1 Grin-cent in fees.
The 64-bit fee field in kernel types
Plain and
HeightLocked shall be renamed to
fee_bits and be composed of bitfields
{
future_use: 20,
fee_shift: 4, fee: 40 }. All former uses of the fee will use
fee_bits &
FEE_MASK,
where
FEE_MASK = 0xffffffffff.
A nonzero fee shift places an additional restriction on transaction relay and mempool inclusion.
Namely, a transaction containining a kernel with
fee_shift = s must pay a total fee
of at least 2^s times the minfee (the minfee shifted left by
fee_shift).
Aggregation of two relayable transactions should only happen when the result remains relayable. By linearity, this is always the case for two transactions with identical fee shift. Transactions with differing fee shift can be aggregated only if either one pays more than required by their fee shift.
For instance, suppose two transactions have the same minfee. If one pays 3 times the minfee with a fee shift of 1, while the other pays exactly minfee with a zero fee shift, then both can be aggregated into one that pays twice the joint minfee (which is double the old one).
While a transaction can choose an arbitrarily high feerate to incentivize early inclusion into full blocks, the fee shift provides for 16 different levels of protection against feerate dilution (through transaction aggregation).
The new tx relay rules and new fee computation in wallets shall take effect at the HF4 block height of 1048320 (but see below about alternatives for 3rd party wallets).
The 20 bits of
future_use bits provide a possible soft-forking mechanism.
We can imagine a future soft-fork further constraining the validity of kernels,
or the relayability of transactions containing them, depending on the value of some of these bits.
Drawbacks
Leaving 20 bits of
future_use increases the amount of zero cost zero friction
spam that can be added to the chain. However, we already have a similar amount
of spammable bits in the lock height of HeightLocked kernels, where any lock
height under the current height of around a million has no effect.
So this appears to be an acceptable increase.
At the time of writing, only one other perceived drawback could be identified, which is what motivated the former fee rules. A negative weight on inputs was supposed to encourage spending many outputs, and lead to a reduction in UTXO size. But this aligned poorly with miner incentives, as for instance they have no economic reason to include zero fee transactions. A positive weight on inputs is no serious drawback, as long as creation of outputs is more strongly discouraged. Most wallets, especially those relying on payjoins, do about equal amounts of output creation and spending, the difference being just the wallet's UTXO set. Mining pools are the major exception. While they obviously don't incur any fees in coinbase creation, they need not spend fees on spending them either, as they can mine these payout transactions directly.
Rationale and alternatives
There are no good alternative to economically significant fees. Any blockchain lacking them is an open invitation to abuse. For chains with a maximum blocksize, fees are also necessary to allow prioritization of transactions.
There is a small window prior to HF4 where transactions constructed using the former lower won't be finalized before HF4 and will thus fail to be relayed. Third party wallets are free to switch fee computation some arbitrary time before HF4 to minimize this risk.
Instead of a fee shift, one could specify a fee factor.
So then a transaction containing a kernel with the
fee_factor bitfield having value f would require total fees
of at least f+1 times minfee (preserving old behaviour for a 0 bitfield).
A reasonable overpayment range would then require 8 bits instead of the 4 used for fee
shift, and would create 256 levels of priority. It does seem
wasteful though, to allow a distinction between, say, overpaying by 128 times,
or by 129 times.
Worse still, that many priority levels will severely reduce the potential for aggregation when blocks fill up.
To maximize aggregation opportunities then, forcing priority levels to be a factor of 2 apart seems quite reasonable.
We can avoid a consensus change by keeping the fee at the full 64 bits, instead using its least significant 4 bits to specify a fee shift. This only makes sense if we never plan to make any use of the top 24 bits, which would always be forced to 0 unless someone pays a monstrous fee, most likely by accident. It would also lead to balances that are no longer a multiple of a milligrin, making for slightly worse UX.
Prior art
Several chains have suffered spam attacks. In early days, bitcoin was swamped with feeless wagers on Satoshi Dice [1]. At least those served some purpose.
Nano was under a deluge of meaningless 0-value back and forth transfers for weeks, that added dozens of GB of redundant bloat to its chain data [2]. Although nano requires client PoW as a substitute for fees, these attacks showed that the PoW was poorly calibrated and not an effective deterrant.
Unresolved questions
Future possibilities
While the input and kernel weights are pretty fixed, the output weight is subject to improvements in range proof technology. If Grin implements BP+, the weight should go down from 21 to 18. That would make outputs incur slightly less than a Grin-cent in fees, which is not worth bothering with. If range proofs were to halve in size though, then we might want to double the base fee to compensate.
If Grin ever becomes worth many dollars, then a lowering of fees is desirable. This can then be achieved by getting the majority of running nodes to reconfigure their base fee to a lower value, after which wallets can have their fee computation adjusted. In the converse case, where Grin becomes worth only a few cents, then an increase in fees might be needed to avoid spam. Both cases will be much easier to deal with if they coincide with a hard fork, but those may not be on the horizon.
References
[1] Satoshi Dice
[2] Nano github | https://docs.grin.mw/grin-rfcs/text/0017-fix-fees/ | 2021-01-16T05:29:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.grin.mw |
Configuring MongoDB Atlas as a Source
ON THIS PAGE
After you have selected MongoDB Atlas as the Source for creating the Pipeline, provide the connection settings and data replication details listed here in the Configure Your Source page. You can fetch the database settings from your MongoDB Atlas account.
Prerequisites
- Database settings of your MongoDB Atlas account are available.
- Hevo has permissions to read your MongoDB Atlas databases.
- Hevo’s IP addresses have access permissions on your SSH server.
Configuring MongoDB Atlas Settings in Hevo
Provide the following information in the Configure Your Source page during Pipeline creation:
Pipeline Name: A unique name for the Pipeline.
Provide the following database settings fetched from the MongoDB Atlas account:
Database Host: The MongoDB DNS name fetched from your MongoDB Atlas account. No additional information needs to be provided in this field irrespective of your MongoDB configuration.
Database User: The authenticated user that can read the collections in your database. Read Setting up Permissions to Read MongoDB Atlas Databases
Note: It is recommended that only
read-onlypermissions be provided to the user.
Database Password: The password for the database user.
Connection Settings:
- Connect through SSH: Enable this toggle. data written in your database after the time of creation of the Pipeline. If enabled, the entire table data is fetched during the first run of the Pipeline.
Setting up Permissions to Read MongoDB Atlas Databases
Whether you have selected OpLog or Change Streams as the Pipeline mode, you need to assign the required permissions to the different databases for the Hevo user to be able to read from these, as follows:
- In your MongoDB Atlas console, click Database Access, and then, Add New Database User.
- Select the Authentication Method as Password.
- In the Password Authentication section, provide a username and password for the new user. You can skip this if you are editing an existing user.
- In the Database User Privileges drop-down, select Grant Specific Privileges from the Advanced Privileges list.
- Grant the following access privileges. Click + Add Another Role to set up access for multiple databases.
readaccess to the local database.
readaccess to the databases you want to replicate.
readAnyDatabaseaccess to the
admindatabase if you want to load all databases.
Retrieving Database Settings from MongoDB Atlas
Perform the following steps to retrieve your MongoDB Atlas database settings:
Access the MongoDB Atlas console and select the project whose data you want to replicate.
Click Clusters in the left navigation bar.
Click CONNECT.
Click
Connect with the mongo shell.
Ensure that the
mongo shell versionis greater than 3.6, and click COPY to copy the connection string.
Copy the database host name and the user name.
Whitelisting Hevo’s IP Address in MongoDB Atlas
Access your MongoDB Atlas console and:
- Click Network Access.
In the IP Whitelist tab, click Add IP Address.
- Provide the Hevo IP address you want to whitelist.
Note: To provide all IPs with access, enter 0.0.0.0.
| https://docs.hevodata.com/sources/databases/mongodb/configuring-mongodb-atlas/ | 2021-01-16T05:32:13 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['https://res.cloudinary.com/hevo/image/upload/hevo-docs/MongoDBSetupGuide2912/add_IPadd.gif',
"Adding Hevo's IP addresses"], dtype=object) ] | docs.hevodata.com |
What's in the Release NotesThe release notes cover the following topics:
- What's New
- Earlier Releases of VMware Tools
- Before You Begin
- Internationalization
- End of Feature Support Notice
- Compatibility Notes
- Guest Operating System Customization Support
- Interoperability Matrix
- Installation and Upgrades for This Release
- Resolved Issues
- Known Issues
What's New
- OSS updates:
- openssl version is upgraded to 1.0.2v.
- libpng version is upgraded to 1.6.37.
- libxml2 version is upgraded to 2.9.10.
Earlier Releases of VMware Tools
- For earlier releases of VMware Tools, see the VMware Tools Documentation page.
Before You Begin
- VMware Tools 10.3.23 supports the following guest operating systems:
linux.isosupports Linux guest operating systems Red Hat Enterprise Linux (RHEL) 6, SUSE Linux Enterprise Server (SLES) 11, Ubuntu 12.04. It also supports other distributions with glibc versions 2.5 and later.
For later versions of the above guest operating systems, Red Hat Enterprise Linux (RHEL) 7 and later, SUSE Linux Enterprise Server (SLES) 12 and later, Ubuntu 14.04 and later, it is recommended to install the OS vendor provided open-vm-tools . For instructions, refer Installing open-vm-tools. For more details, refer KB 2073803.
-.
Note:
For other ISOs, use VMware Tools 10.3.10 or 11.0.0 release versions. 10.3.23 is available in the following languages:
- German
- Spanish
- Italian
- Japanese
- Korean
- Simplified Chinese
- Traditional Chinese
End of Feature Support Notice
- VMware Tools 10.3.5 freezes feature support for tar tools and OSPs.
VMware Tools Operating System Specific Packages can be downloaded from. For more information on installing OSPs, see the VMware Tools Installation Guide for Operating System Specific Packages..
Note:
If you are using VMware Tools version earlier than 9.4, refer to VMware Tools 10.1.0 Release Notes for specific upgrade guidelines.
Resolved Issues
- VMware Tools installation fails on older Linux versions (including RHEL5).
While upgrading to VMware Tools 10.3.23 version, glibc-2.11 or later is required for installing VMware Tools on older versions of Linux (including RHEL5) successfully.
Users attempting to install VMware Tools 10.3.23 on an unsupported Linux guest are directed to KB 2147454. This KB article provides guidance on selecting the correct version of VMware Tools supported on this guest.
Known Issues
- | https://docs.vmware.com/en/VMware-Tools/10.3/rn/VMware-Tools-10323-Release-Notes.html | 2021-01-16T06:40:28 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.vmware.com |
To upgrade the network topology, you upgrade NSX-T Data Center, vCenter Server, and all vSphere with Tanzu components.
Prerequisites
- Verify that your environment meets the system requirements. For information about requirements, see System Requirements and Topologies for Setting Up a Supervisor Cluster with NSX-T Data Center.
- For configuration limits specific to vSphere with Tanzu, see vSphere Configuration Limits in VMware Configuration Maximums tool.
Procedure
- Upgrade NSX-T Data Center to version 3.1.
- Upgrade vCenter Server.
- Upgrade ESXi hosts.
- Update the Supervisor Namespace.To perform an update, see Update the Supervisor Cluster by Performing a vSphere Namespaces Update. | https://docs.vmware.com/en/VMware-vSphere/7.0/vmware-vsphere-with-tanzu/GUID-BEAF45D2-9ABA-46CA-ABE8-52A6B94AF085.html | 2021-01-16T06:28:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.vmware.com |
Software Maintenance APIs
a
- Activate Software
- API calls for activating a Viptela software image.d
- Delete Software
- API calls for deleting a Viptela software image.r
- Reboot Device
- API calls for rebooting Viptela devices.s
- Set Default Software
- API call for setting a software image as the default image on a Viptela device.u
- Upgrade Software
- API calls for upgrading software on Viptela devices.
- Upload Software Image
- API calls for uploading and downloading software images on Viptela devices.v
- VPN
- API calls for displaying VPNs. | https://sdwan-docs.cisco.com/Product_Documentation/Command_Reference/Command_Reference/vManage_REST_APIs/Software_Maintenance_APIs | 2021-01-16T06:33:36 | CC-MAIN-2021-04 | 1610703500028.5 | [] | sdwan-docs.cisco.com |
Important
You are viewing documentation for an older version of Confluent Platform. For the latest, click here.. | https://docs.confluent.io/5.1.0/connect/kafka-connect-omnisci/changelog.html | 2021-01-16T05:12:36 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.confluent.io |
influx query
The
influx query command executes a literal Flux query provided as a string
or a literal Flux query contained in a file.
Usage
influx query [query literal] [flags]
Remove unnecessary columns in large datasets
When using the
influx query command to query and download large datasets,
drop columns such as
_start and
_stop to optimize the download file size.
// ... |> drop(columns: ["_start", "_stop"])
Flags
Examples
Authentication credentials
The examples below assume your InfluxDB host, organization, and token are
provided by the active
influx CLI configuration.
If you do not have a CLI configuration set up, use the appropriate flags to provide these required credentials.
- Query InfluxDB with a Flux string
- Query InfluxDB using a Flux file
- Query InfluxDB and return annotated CSV
- Query InfluxDB and append query profile data to results
Query InfluxDB with a Flux string
influx query 'from(bucket:"example-bucket") |> range(start:-1m)'
Query InfluxDB with a Flux file
influx query --file /path/to/example-query.flux
Query InfluxDB and return annotated CSV
influx query 'from(bucket:"example-bucket") |> range(start:-1m)' --raw
Query InfluxDB and append query profile data to results
For more information about profilers, see Flux profilers.
influx query \ --profilers operator,query \ 'from(bucket:"example-bucket") |> range(start:-1. | https://test2.docs.influxdata.com/influxdb/v2.3/reference/cli/influx/query/ | 2022-09-25T04:12:53 | CC-MAIN-2022-40 | 1664030334514.38 | [] | test2.docs.influxdata.com |
- Username pattern- The Regex pattern for usernames.
- Valid identification fields- The ID types a user can login with, such as their username or email address.
- Cookie Domain for SSO- The domain for your cookie for a cookie SSO.
- Email Domain Whitelist- Only allows emails from a specific domain name to register.
- Ratio of similarity between username and password-determines how much of a password can share characters with a user’s username.
- Use real name as screen name- The user’s real name (as provided in the user profile) is used in place of a username in the AnswerHub User Interface (UI).
- Allow email reuse- Allows multiple users to share a single email address.
- Gravatar enabled- The Gravatar for a user is displayed in the AnswerHub UI.
- Remove Users membership on joining higher permission group - If selected, when a user is added to a group with higher level permissions set than those in the Users group, they are automatically removed from the Users group.
Note: This is important for permissions settings. For more information, see theAnswerHub Permissions Guide.
- Max login attempts before temporary blocking- Enter the number of times that a user can fail to log in before the username they are using is blocked.
- Seconds Between Temporary Login Requests- How many seconds will be waited between requests.
- Number of minutes to check for failed login attempts - How many minutes will be dedicated to attempting to validate failed log in requests.
- per network
- per site
- Hide content from deactivated users - If selected, hides all of your site content from deactivated users.
- Match social logins by username & email - This feature is used to try and prevent duplicate accounts. If selected, your AnswerHub site will automatically attempt to locate an existing account and connect users to their social logins according to username and email address. If no account matches, a new one will be created. | https://docs.dzonesoftware.com/articles/17295/edit-general-user-settings-1.html | 2022-09-25T06:03:45 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.dzonesoftware.com |
Managing Product Notification Preferences
- Select the management context from the context selection box, then select Preferences.This displays the Preferences overview page.
- Select Product Notification.The details page is displayed.
- Provide an E-mail From address, a subject and a valid e-mail template.
- Specify the intervals at which the agent is to check whether notification e-mails have to be sent out.
- Click Apply. | https://docs.intershop.com/icm/latest/olh/icm/en/operation_maintenance/task_managing_product_notification_preferences.html | 2022-09-25T05:26:27 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.intershop.com |
Account Details
Your profile provides SendGrid with the information we need to contact you with alerts and notifications as well as send and track your emails.
To edit your account information:
- Click the edit icon next to the section you wish to change.
- Once you have made your changes, click Save. This will only save the settings for that section. If you decide to abandon your changes, click Cancel.
Your Account
First Name - This is the first name of the representative from your company who should receive contacts from SendGrid.
Last Name - This.
Once you've changed your contact email, you will need to confirm the new email address. You will be prompted to send a confirmation email to your inbox. Click Send Confirmation Email to finalize the change. Make sure you click the link in the confirmation email to receive account updates at your new address.
Username - Your SendGrid Username is used to access our API and our SMTP Relay. Changing this will immediately cause all of your calls to SendGrid to stop working.
Password - The password criterion that your SendGrid password must include: 16 to 128 characters, at least one number, and one letter.
Your Company.
Timezone - The timezone in which your company operates. This setting will be used by other SendGrid functionality such as Statistics and scheduling sends in Marketing Campaigns. Please make sure that your timezone is set to the same as your business.
If you find that your scheduled sends or statistics seem like they are not quite correct, please double check your timezone.
Website - Your company’s website
Phone - Your company’s phone number, where SendGrid can reach the representative that should be contacted.. | https://docs.sendgrid.com/ui/account-and-settings/account | 2022-09-25T06:02:19 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.sendgrid.com |
Using a9s Harbor
This topic describes how developers use a9s Harbor.
Use a9s Harbor as private Docker Registry
To use a9s Harbor as private Docker Registry, create a service instance and create a Service Key. For more information on managing service instances, see Managing Service Instances with the cf CLI.
View the a9s Harbor Service
After the service is installed, you can see the
a9s-harbor and its service plans appear in your CF marketplace. Run
cf marketplace to see the service listing:
$ cf marketplace
Getting services from marketplace in org test / space test as admin...
OK
service plans description
a9s-harbor harbor [Beta] This is a service creating and managing dedicated Harbor servers.
Create a Service Instance
To provision a Harbor service, run
cf create-service. For example:
cf create-service a9s-harbor harbor my-harbor.
Create a Service Key
After your Harbor service is created, run
cf create-service-key NAME-OF-YOUR-SERVICE NAME-OF-SERVICE-KEY in order to create a Service Key for you Harbor Service:
cf create-service-key my-harbor-service my-service-key
If you want to create a Harbor user with the
admin role privilege, you need
to set the custom paramter
has_admin_role to true, for example:
cf create-service-key my-harbor-service my-service-key -c '{ "has_admin_role": true }'
Obtain Service Instance Access Credentials
After a Service Key is created, the credentials of your Harbor service can be displayed by running
cf service-key NAME-OF-YOUR-SERVICE NAME-OF-SERVICE-KEY:
$ cf service-key my-harbor-service my-service-key
Getting key my-service-key for service instance my-harbor-service as admin...
OK
{
"dashboard_url": "EXAMPLE_DASHBOARD_URL",
"username": "EXAMPLE_USERNAME",
"password": "EXAMPLE_PASSWORD",
"has_admin_role": false,
"port": 443,
"project": "default",
"cacrt": "EXAMPLE_CERTIFICATE",
"host": "EXAMPLE_HOST_INTERNAL",
"uri": "EXAMPLE_URI_INTERNAL"
}
Connect to Harbor with Docker CLI
For using the newly created Harbor service as private Docker Hub with your Docker CLI, simply use the following command and insert username as well as password when asked:
docker login EXAMPLE_DASHBOARD_URL
Push Images
Now that your Docker CLI is connected with the Harbor service as private Docker Hub, you can push images using the following instructions:
First, add new tag to any Docker image with the following command:
docker tag image-name DASHBOARD_URL_WITHOUT_HTTPS/PROJECT/image-name
Example:
docker tag my-image 8bdb157c-26c4-4eae-b0b4-701c8fcd1844.system.example.com/test-project/my-image
Then, use the following command to push the image:
docker push DASHBOARD_URL_WITHOUT_HTTPS/PROJECT/image-name
Example:
docker push 8bdb157c-26c4-4eae-b0b4-901c8fcd1844.system.example.com/test-project/my-image
Pull Images
For Pulling images from Harbor we use the following command:
docker pull DASHBOARD_URL_WITHOUT_HTTPS/PROJECT/image-name
Example:
docker pull 8bdb157c-26c4-4eae-b0b4-901c8fcd1844.system.example.com/test-project/my-image
Using Harbor with Notary
The Notary server can be used for image signing by setting the following environmental variables:
export DOCKER_CONTENT_TRUST_SERVER=
export DOCKER_CONTENT_TRUST=1
More information is available at: Notary Documentation
Notary With Self-Signed Certificates
In order to use Notary with self-signed certificates, you need to trust the CA which created the certificate, on your local system. You can use the following command to retrieve the certificate in PEM format:
openssl s_client -showcerts -connect {DASHBOARD_URL_WITHOUT_HTTPS}:443 < /dev/null 2> /dev/null | openssl x509 -outform PEM > CA.pem
The file
CA.pem can then be imported to the local TLS certificate trust
store. Please refer to the documentation of your Operating System how to
achieve this.
Use Harbor with Kubernetes
If you want to make a deployment based on an Image from Harbor please have a look at this.
Delete an a9s Harbor Service Instance
Before you can delete a service instance, you must delete all existing Service Key associated to that service instance.
List Service Keys for Service Instance
Run
cf service-keys NAME-OF-YOUR-SERVICE to list all Service Keys for the respective service.
$ cf service-keys my-harbor-service
Getting keys for service instance my-harbor-service as admin...
name
my-service-key
Delete Service Keys for Service Instance
Run
cf delete-service-key to delete the service key.
cf delete-service-key my-harbor-service my-service-key
Delete a Service Instance
After deleting the service keys, you can run
cf delete-service to delete the service:
cf delete-service my-harbor-harbor-service -p a-bigger-plan my-harbor.
Access Service Dashboard
It is possible to access the dashboard of your Harbor service instance. There you need to create a service key, which is described in the section Create a Service Key.
After creating a service key, you can simply call the Dashboard by pasting the
dashboard_url into your browser.
Set a Custom Service Dashboard Domain
You can set a custom route for the Harbor application.
This works by passing your prefered host in the service configuration at the
time of creation or update of the service. You will need to use the
public_host custom parameter for that.
First, you will need to find a suitable domain from your Cloud Foundry
installation. You can find the list of your Cloud Foundry's available domains
with the command
cf domains.
$ cf domains
Getting domains in org system as [email protected]...
name status type details
apps.example.com shared
system.example.com owned
Given the Cloud Foundry domain
system.example.com, you could set a custom
host
my-custom-host.system.example.com with the following command:
$ cf create-service a9s-harbor harbor-cluster-medium my-medium-cluster -c '{"public_host":"my-custom-host.system.example.com"}'
Creating service instance my-medium-cluster in org system / space staging as [email protected]...
OK
Create in progress. Use 'cf services' or 'cf service my-medium-cluster' to check
operation status.
You can read the documentation of
cf create-service and
cf update-service
for finer information on how to best pass custom parameters for those functions
with the commands
cf help create-service or
cf help update-service.
Once you've created or updated your service with your custom host, this one will be used to host the Harbor application. You can verify this by creating a service key for the service instance:
$ cf create-service-key my-medium-cluster my-svc-key
Creating service key my-svc-key for service instance my-medium-cluster as [email protected]...
OK
$ cf service-key my-medium-cluster my-svc-key
{
"dashboard_url": "",
"username": "a9s4d06eb4e154ab5"
"password": "79acf84d70c2e0ae4c09",
"has_admin_role": false,
"port": 443,
"project": "default",
"cacrt": "-----BEGIN CERTIFICATE-----\nMIIDPDCCAiSgAwIBAgITBwHCvYoMBmB+7uiYvO0PcjT8vzANBgkqhkiG9w0BAQsF\nADAuMSwwKgYDVQQDDCNDZXJ0aWZpY2F0ZSBhdXRob3JpdHkgZm9yIG1zZDQ2YWQ2\nYzAeFw0xOTA3MDgxMTM5MTNaFw0yOTA3MDUxMTM5MTNaMC4xLDAqBgNVBAMMI0Nl\ncnRpZmljYXRlIGF1dGhvcml0eSBmb3IgbXNkNDZhZDZjMIIBIjANBgkqhkiG9w0B\nAQEFAAOCAQ8AMIIBCgKCAQEA6q5rSihLAhoJM0PLGshYizqGRNa9/UCXsao/UVQ0\nFm4luGHZZjCFkgBsOS4TS7GwJdvwnSIhj7eW9VOC6c16d2WzURak3Cge7HlNO/4o\nr3hrheaWCLFz3YbysqnzQLw/V1EBLFvkHE02hhUTSy/R5y4OPWww0Yvup+T4iSHM\nLKdIkqyDptPrC/4adcH89dcyD/EXNyl5gwaEykwfD0THH3xVTeLUQSlh5fh1+Pvb\nPWgsEqETr5vXJ9KuP0iRbETJy0llT3oHF63EaAg6FTo7L5s0x9gLM9Y8tWb98RJq\nT0vgEEatrjhPSox3Ih4Bh8ibb0hpRc/zblRKwD5n0JqwQQIDAQABo1MwUTAPBgNV\nHRMBAf8EBTADAQH/MB0GA1UdDgQWBBSQV4w3ZU5v8v8Kvwotnayub0fDgTAfBgNV\nHSMEGDAWgBSQV4w3ZU5v8v8Kvwotnayub0fDgTANBgkqhkiG9w0BAQsFAAOCAQEA\nmS0T/A6f9ov/r5MdYiuA31jfHiO0T5fyHv751t6TwPS3QP65sZcS+Cb8vjl5eUnb\n9t3PjqwGPRgWpro9O9DNPtqJARJuyLDgIKZtl5vwkH+Wlaj24yqU14jBrgBabK/z\ngeMExWC0FP992RdM3OpaeGgB5MyLVAFx1W3pC0pBUFP+0lNxqPvzStfBX8Jfzkls\n1FoKjAUPly8tTupuwaTbtCzPB8gswYimJjeHCID79vqcRarCm7fOFpvCVEMmNVnj\nX0Tia/SSbftnoKWVA0QzYQPxGYyiMmwGvHm57h35VYhm3NJ/PdNt8N0uTRa13O1a\nr2FRBb2q7S/YPmS2AgzuzQ==\n-----END CERTIFICATE-----\n",
"host": "msd46ad6c-harbor-app-0.node.dc1.consul.dsf2",
"uri": "",
}
Setup Disk Usage Alerts
Each service comes with the a9s Parachute. This component monitors ephemeral and persistent disk usage. See the a9s Parachute documentation how to configure the component. | https://docs.anynines.com/docs/application-developer/a9s-harbor/a9s-ad-harbor-using/ | 2022-09-25T04:29:11 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.anynines.com |
- Grant- Assigns the selected permission to the group for the selected site, space, or subspace (depending on which column you selected in step 4).
- By Reputation- When you select this option, a field is displayed in the reputation column. Enter the number of reputation points that a user must have in order to be assigned this permission.
For example: If you enter 10 in this box, a group member must have a reputation that meets or exceeds 10 in order to be assigned this permission automatically.
- Revoke- Removes a previously granted permission.
Note: If the border of the drop-down permission box is a different color than the fill color, that means the permission was inherited from elsewhere but is being overridden for the selected site, space, or subspace. Remember that whichever text, “Revoked” or “Granted,” is displayed in the box, is the current state of the overridden permission in the selected space.
For example: If the border is green and the fill color is red (
), the permission has been granted at a higher level in the AnswerHub Hierarchy, but is revoked at the current level.), the permission has been granted at a higher level in the AnswerHub Hierarchy, but is revoked at the current level. | https://docs.dzonesoftware.com/articles/17030/spaces-4.html | 2022-09-25T06:04:06 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.dzonesoftware.com |
Managing Amazon Config Rules Across all Accounts in Your Organization
Amazon Config allows you to manage Amazon Config rules across all Amazon accounts within an organization. You can:
Centrally create, update, and delete Amazon Config rules across all accounts in your organization.
Deploy a common set of Amazon Config rules across all accounts and specify accounts where Amazon Config rules should not be created.
Use the APIs from the master account in Amazon Organizations to enforce governance by ensuring that the underlying Amazon Config rules are not modifiable by your organization’s member accounts.
For deployments across different regions
The API call to deploy rules and conformance packs across accounts is region specific.
At the organization level, you need to change the context of your API call to a
different region if you want to deploy rules in other regions. For example, to deploy a
rule in US East (N. Virginia), change the region to US East (N. Virginia) and then call
PutOrganizationConfigRule.
For accounts within an organzation
If a new account joins an organization, the rule or conformance pack is deployed to that account. When an account leaves an organization, the rule or conformance pack is removed.
If you deploy an organizational rule or conformance pack in an organization administrator account, and then establish a delegated administrator and deploy an organizational rule or conformance pack in the delegated administrator account, you won't be able to see the organizational rule or conformance pack in the organization administrator account from the delegated administrator account or see the organizational rule or conformance pack in the delegated administrator account from organization administrator account. The DescribeOrganizationConfigRules and DescribeOrganizationConformancePacks APIs can only see and interact with the organization-related resource that were deployed from within the account calling those APIs.
Retry mechanism for new accounts added to an organization
Deployment of existing organizational rules and conformance packs will only be retried for 7 hours after an account is added to your organization if a recorder is not available. You are expected to create a recorder if one doesn't exist within 7 hours of adding an account to your organization.
Ensure Amazon Config recording is on before you use the following APIs to manage Amazon Config rules across all Amazon accounts within an organization:
PutOrganizationConfigRule, adds or updates organization config rule for your entire organization evaluating whether your Amazon resources comply with your desired configurations.
DescribeOrganizationConfigRules, returns a list of organization config rules.
GetOrganizationConfigRuleDetailedStatus, returns detailed status for each member account within an organization for a given organization config rule.
GetOrganizationCustomRulePolicy, returns the policy definition containing the logic for your organization config custom policy rule.
DescribeOrganizationConfigRuleStatuses, provides organization config rule deployment status for an organization.
DeleteOrganizationConfigRule, deletes the specified organization config rule and all of its evaluation results from all member accounts in that organization.
Region Support
Deploying Amazon Config Rules across member accounts in an Amazon Organization is supported in the following Regions. | https://docs.amazonaws.cn/en_us/config/latest/developerguide/config-rule-multi-account-deployment.html | 2022-09-25T05:37:37 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.amazonaws.cn |
Arianee Decentralized Platform
What is a decentralized platform?What is a decentralized platform?
In the Blockchain context, a decentralized platform means that there is no centralized data storage mechanism. The information is available to all the participants in the network.
From a system design perspective, there are nodes instead of a client server.
About the POA Network blockchainAbout the POA Network blockchain
Interoperability is a major advantage for a Blockchain standard and the Arianee protocol aims to be as flexible as possible when it comes to a company selecting the underlying Blockchain it wishes to use.
For the first deployment of the Arianee protocol, the Arianee project selected the POA Network Blockchain for the following reasons:
- POA Network is a sidechain of the Ethereum blockchain which has the second-biggest market capitalization in the Blockchain world. That means POA Network provides interoperability and facilitates asset transfers between Ethereum-based networks.
- POA Network is based on the Ethereum protocol and, as we write these lines, the most mature and spread blockchain protocol to manage smart contracts. To learn more about smart contracts please refer to the dedicated section (just below).
- The price of transactions is cheaper and more stable on POA Network than the Ethereum main blockchain. This is essentially due to the difference of consensus mechanism between POA Network (Proof of Autonomy) and the Ethereum mainnet (Proof of Work).
- The time to validate transactions is lower on POA Network (around 5 seconds) than the Ethereum mainnet (around 15 seconds).
What is a smart contract?What is a smart contract?
Smart contracts were first proposed in 1994 by Nick Szabo. He defined them as computerized transaction protocols that execute terms of a contract.
In practical words, a Blockchain smart contract is a collection of code (its functions) and data (its states) that resides at a specific address on the Blockchain. Smart contracts are self-executing and render transactions traceable and irreversible.
The Arianee smart contractsThe Arianee smart contracts
The Arianee protocol defines a set of smart contracts to manage tokens related to certificates. Certificates themselves are one type of smart contract, called Arianee Smart Asset.
Here is an overview of the Arianee protocol smart contracts:
To favor interoperability and adoption the Arianee protocol designed certificates (the Arianee Smart Assets) compliant with the ERC-721 Token Standard and a crypto currency (the ARIA token) compliant with the ERC-20 Token Standard. To learn more about the ARIA token please refer to the Arianee Economy chapter.
About data storageAbout data storage
The Arianee protocol when used at its core never asks, stores or uses personal information about owners. Ownership is attached to a wallet public key. Owners are anonymous when using the Arianee protocol.
Information and content of certificates remain available to stakeholders with the right level of permission if:
- The stakeholder has access to the Blockchain.
- The stakeholder can provide either the secret keys provided by the Owner or the Arianee tag on the product to unlock access to the information and content of the related certificate.
- The links within the certificates still point to active content.
About data recoveryAbout data recovery
Data from the Arianee protocol stored on a specific address on a Blockchain can be:
- Read and/or enriched if and only if a set of cryptographic keys, including the public key but excluding the private key of this address, is provided.
- Transferred if and only if the private key of this address is used in a signed transaction. There is no recovery if the private key cannot be used for signing.
Data stored by the Certificate Management Platform provider can be updated and recovered according to the Certificate Management Platform provider policy. Brands should be especially cautious when selecting their Certificate Management Platform provider because:
- Updates of the data provided by the Certificate Management Platform provider may lead to an authentication failure of certificates.
- If the links within the Blockchain Certificates do not point to active content anymore, owners will not have access to this content.
- Vault breach may lead to the hack of the Brand's private keys, leading to the misuse of features available to the Brand.
Data stored by the Wallet provider can be recovered according to the Wallet provider policy. Owners should be especially cautious when selecting their Wallet provider because:
- Recovery of the owner’s wallet depends on the Wallet provider ability to safely save the owner’s private keys or to give him/her a way, such as mnemonic words, to recover their private keys in case of the loss of the wallet.
- Wallet breach may lead to the hack of the owner's private keys, leading to the misuse of his/her certificates.
About certificate authenticityAbout certificate authenticity
The Arianee protocol provides the tools to verify the authenticity of a certificate based on three criteria:
- The Brand identity is verified. That means the Brand went through a Know your business (KYB) process and was registered by the Arianee project as a verified Brand ont the Arianee Identity smart contract.
- The Brand identity is authentic. That means the Brand identity used to issue the certificate is the same than the one verified by the Arianee project.
- The certificate is authentic. That means the content of the certificate has not changed since the certificate issuance.
The Arianee protocol uses the certification process designed by 0xcert. | https://docs.arianee.org/docs/arianee-decentralized | 2022-09-25T04:13:07 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['../img/arianeesmartcontract.png', 'image_tooltip alt_text'],
dtype=object)
array(['../img/arianeedatastorage.png', 'image_tooltip alt_text'],
dtype=object)
array(['../img/arianeewallet.png', 'image_tooltip alt_text'], dtype=object)] | docs.arianee.org |
Paytm Wallet
You can instantly add Paytm wallet as a Fund Source and make payouts to the beneficiary's Paytm wallet.
To add Paytm Wallet as a fund source,
- Go to Payouts Dashboard > Fund Sources. You will see a screen as shown below.
Add Fund Source
- Click Add Fund Source and select Paytm Wallet.
- Specify a unique name for the fund source in the Fund Source Name field. This will help you to easily identify the fund source while initiating payouts.
Add Paytm Wallet
- To connect your Paytm wallet with Cashfree Payments, enter your Paytm Merchant ID, Merchant Key, and Disbursal Account GUID. You can find these details here.
- Click Verify and Submit.
Connect Paytm Wallet
To connect your Paytm Wallet as a Fund Source with Cashfree Payments, share our IP address - 52.66.101.190, with the Paytm merchant integration team to whitelist it.
After receiving the confirmation from the Paytm team, check Fund Sources section and click Connect against the Paytm fund source. You can now initiate payouts via this fund source. Specify the percentage of transfers to be routed via this fund source.
Manage Fund Source Weightage
Note:
You can make Paytm wallet transfers to beneficiaries who have a Paytm wallet.
- Beneficiary should have completed the KYC validation for the Paytm payout to go through, or it fails.
- The maximum limit is up to Rs. 1 Lakh if the customer has completed the KYC registration.
- You only need the beneficiary phone number for making the payouts.
Updated about 1 month ago
Did this page help you? | https://docs.cashfree.com/docs/paytm-wallet | 2022-09-25T05:26:17 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['https://files.readme.io/0b16078-Screenshot_2022-08-19_at_8.32.13_AM.png',
'Screenshot 2022-08-19 at 8.32.13 AM.png 1434'], dtype=object)
array(['https://files.readme.io/0b16078-Screenshot_2022-08-19_at_8.32.13_AM.png',
'Click to close... 1434'], dtype=object)
array(['https://files.readme.io/b7d20d7-Screenshot_2022-08-19_at_9.48.24_AM.png',
'Screenshot 2022-08-19 at 9.48.24 AM.png 1434'], dtype=object)
array(['https://files.readme.io/b7d20d7-Screenshot_2022-08-19_at_9.48.24_AM.png',
'Click to close... 1434'], dtype=object)
array(['https://files.readme.io/9406522-Screenshot_2022-08-19_at_9.49.38_AM.png',
'Screenshot 2022-08-19 at 9.49.38 AM.png 1424'], dtype=object)
array(['https://files.readme.io/9406522-Screenshot_2022-08-19_at_9.49.38_AM.png',
'Click to close... 1424'], dtype=object)
array(['https://files.readme.io/daeb45c-Screenshot_2022-08-19_at_9.45.34_AM.png',
'Screenshot 2022-08-19 at 9.45.34 AM.png 1434'], dtype=object)
array(['https://files.readme.io/daeb45c-Screenshot_2022-08-19_at_9.45.34_AM.png',
'Click to close... 1434'], dtype=object) ] | docs.cashfree.com |
What is changing?
If you use NetSuite OneWorld, prior to the 2018.1 release, NetSuite allowed users to assign a customer record to only one subsidiary. With the 2018.1 release, NetSuite is introducing the Multi-Subsidiary Customer feature that permits you to share a customer or sub-customer record with multiple subsidiaries.
For more information about this feature, see the Customer Record Shared with Multiple Subsidiaries topic in NetSuite’s 2018.1 release notes.
Salesforce-NetSuite Connector uses the subsidiary information associated with the customer to sync Sales Order, Opportunity, and Quote. Enabling this feature in NetSuite may impact the way this connector process information in NetSuite. For more information on possible impacts, see What is the impact?
Our Recommendation
At present, the Multi-Subsidiary Customer feature is not supported by Salesforce-NetSuite Connector. We recommend you not to use/enable this feature in NetSuite until this feature is supported with this connector.
What is the impact?
When this feature is enabled in NetSuite, regardless of the available subsidiaries for a customer, Sales Order, Opportunity, or Quote will always be created for the primary subsidiary when information is synced from Salesforce to NetSuite.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360002207111-Impacts-of-the-Multi-Subsidiary-Customer-feature-after-release-2018-1 | 2022-09-25T06:08:19 | CC-MAIN-2022-40 | 1664030334514.38 | [] | docs.celigo.com |
Information: You can find the integrator.io release notes for October 2020 here
What’s enhanced
Renamed “Zendesk” integration app to “Zendesk Support”
Note: This change will be reflected on the new installations
The Zendesk - NetSuite integration app is now renamed to “Zendesk Support - NetSuite integration app”. You can find the changes in the:
- Integration app name
- On the integration app installation steps
- Zendesk Support Connection
- Zendesk Support Bundle
- Zendesk Support App
- On the integration app uninstallation steps
- Marketplace
Upgrade your integration app
We've upgraded the existing infrastructure.
- For new installation : Integration App v1.13.0
- For existing customers : Integration App v1.12.0
- Zendesk App 1.5.0
- Celigo integrator.io (NetSuite Bundle 20038) 1.12.0.0
- Celigo Zendesk Connector (NetSuite Bundle 80381) 1.11.0.5
What’s fixed
Last Zendesk Comment ID type displays integer value
The “Last Zendesk Comment ID” field on NetSuite support case form is now changed to integer, thus comment ID in NetSuite will display in integer format which will now fix the exponential value.
Please sign in to leave a comment. | https://docs.celigo.com/hc/en-us/articles/360051279051-Zendesk-NetSuite-release-notes-v1-13-0-October-2020 | 2022-09-25T04:19:02 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['/hc/article_attachments/360073550211/65.png', '65.png'],
dtype=object)
array(['/hc/article_attachments/360073540811/32.png', '32.png'],
dtype=object)
array(['/hc/article_attachments/360073540871/33.png', '33.png'],
dtype=object)
array(['/hc/article_attachments/360073540891/34.png', '34.png'],
dtype=object)
array(['/hc/article_attachments/360073541511/35.png', '35.png'],
dtype=object) ] | docs.celigo.com |
In addition to managing user access via Roles, you can assign Groups to Users to control which Forms / Devices they have access to. (Check out this help article for more information on how to create these Groups.)
Managing the User Groups is done from the same edit page as the Users Role.
First, log into your Management Console. On the left side of the page you'll see a "Settings" button. Click that button and then click the "Manage Users and Roles" link that appears underneath. You'll then be brought to a separate page.
To alter the user, click the Edit button on the right hand side.
Then, navigate to the "Groups" section and tick/un-tick the groups.
Click "Update User" and you're done!
If you have any questions or comments feel free to send us a message at [email protected]. | https://docs.devicemagic.com/en/articles/392892-managing-user-groups | 2022-09-25T05:43:14 | CC-MAIN-2022-40 | 1664030334514.38 | [array(['https://downloads.intercomcdn.com/i/o/279966364/ac543dd5aea831fa8ea12d8b/Screen+Shot+2020-12-21+at+2.57.43+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279967286/256e8de0e710de4febcf8862/Screen+Shot+2020-12-21+at+3.01.05+PM.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/279968242/f25120d821b2795669966845/Screen+Shot+2020-12-21+at+3.04.18+PM.png',
None], dtype=object) ] | docs.devicemagic.com |
If the default morphs that are automatically created upon running the Transfer Utility are not as good as they need to be, tools exist for fixing these problems. Creating JCM's or Joint Corrective Morphs is one technique to fine tune these problem areas.
This example shows a coat that didn't look right when the head and neck were bent backwards. The desire is to build a morph to fix the coat when the head was bent back.
Follow the steps below to create a JCM to fix a clothing item's bending issues.
Pose the Genesis neck (with the clothing item loaded). Bend the neck backwards all the way (30%, X-rotate). This places the figure and clothing item in the proper location for making the necessary modeling changes.
Load the .obj model into the modeling software (we use Modo). Now fix the collar of the coat to look the proper way in this bent position.
In preparation to save the model it's important to delete Genesis from the scene. Now save out the coat as JCM morph. Make sure genesis is still posed exactly how it was when you exported the model from DAZ Studio.
Load the morph into DAZ Studio 4 Pro using Morph Loader Pro... and make sure to set REVERSE DEFORMATIONS to True.
These techniques can be used in any area of the model. The goal of the Transfer Utility is to have as few corrections as possible. But the nature of many products will require fixes to show the item in it's best light. The more accurate and natural your product bends and looks, the better received the product will be to the consuming public. Take time to test your product to be sure it works as best it can. | http://docs.daz3d.com/doku.php/public/publishing/adding_jcm_clothing/start | 2017-11-17T19:19:28 | CC-MAIN-2017-47 | 1510934803906.12 | [] | docs.daz3d.com |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.