Disk Image Resize

How to manually resize an APFS container or HFS volume.

Quick navigation: Overview | Determine Filesystem Type | Resizing APFS Containers | Resizing HFS Volumes

Note: Reducing the image size is not supported. An error is displayed when attempting to do so.

Image Resize for Intel-based VMs

Overview

When resizing a disk image with the command orka image resize, you have two options:

- Automatic resize: Provide SSH credentials for the VM along with the new image size. In this scenario, the Orka API resizes the virtual disk image and also grows the disk partition to fill the available space.
- Manual resize: Provide only the new image size. If you choose not to provide SSH credentials (or SSH is not enabled in the VM), you will need to manually resize the disk partition using the instructions below.

Determine Filesystem Type

To manually resize the disk partition, you must first determine the filesystem type used by macOS. Establish an SSH connection to the VM, or connect via VNC and open Applications -> Utilities -> Terminal. Run the command diskutil list to determine the filesystem type.

For APFS filesystems (Mojave and later) you should see output similar to the following:

[Figure: Output of diskutil list on an APFS filesystem]

For HFS filesystems (High Sierra and earlier) you should see output similar to the following:

[Figure: Output of diskutil list on an HFS filesystem]

IMPORTANT: Note the identifier for the container (APFS) or volume (HFS) in the output of diskutil list. In the examples above, the APFS container is located on disk1 with identifier disk1s2, and the HFS volume is located on disk1 with identifier disk1s2. The disk and identifier are needed in the next step.

Resizing APFS Containers

To resize an APFS container, run the following commands using the appropriate disk name and identifier from the previous step:

diskutil repairDisk disk1
diskutil apfs resizeContainer disk1s2 0

If the resize was successful, the tail of the output should look similar to what is shown below:

[Figure: Successful resize of an APFS container]

Resizing HFS Volumes

To resize an HFS volume, run the following commands using the appropriate disk name and identifier from the previous step:

diskutil repairDisk disk1
diskutil resizeVolume disk1s2 R

If the resize was successful, the tail of the output should look similar to what is shown below:

[Figure: Successful resize of an HFS volume]

Image Resize for Apple ARM-based VMs

Overview

You can resize an Apple ARM-based VM's disk image with the command orka image resize. Resizing happens automatically and SSH credentials are not required. To automatically resize an Apple ARM-based VM's disk, you just need to provide the VM ID and the new image size. The SSH credentials automatically populate to N/A since they are not required. In this scenario, the Orka API resizes the virtual disk image and also grows the disk partition to fill the available space.

[Figure: Running orka image resize to resize the image to 200GB]

If the resize was successful, once you SSH to the VM, diskutil list should show output similar to what is shown below:

[Figure: 200GB M1 disk image after a successful resize]

IMPORTANT: To use Image Resize with Apple ARM-based VMs, you need the latest version of Orka VM Tools. It comes out of the box with the latest .orkasi images. You can also install it manually. For more information, see the downloads page.

IMPORTANT: The image size has a direct effect on the cached images on a node.
Read more about ARM Nodes Image Caching to see how image caching works on ARM-based nodes.

Unlike Intel-based VMs, manual disk resize is not supported for Apple ARM-based VMs. Their disk images have APFS filesystems (as with Mojave and later), plus an additional Apple_APFS_Recovery partition. This recovery partition is the reason you cannot manually resize the APFS container the way you can for Intel disk images.

[Figure: A standard 90GB M1 disk image]
https://orkadocs.macstadium.com/docs/disk-image-resize
Simplicity

"If it's not simple, there's something wrong with it!"

KISS is my favourite word in the world. It translates into "Keep It Stupid Simple". This of course implies that it's easy to understand and doesn't require a lot of explaining before it can be grasped by somebody with zero prior knowledge of the subject at hand.

Over the last couple of weeks we have gone through the documentation for Magic and Hyperlambda, and just for kicks, we made sure you could print it to a PDF if you prefer such mediums. Try it out if you want to: click CTRL+P now and see how it ends up looking. Of course, this is just a "bonus" and probably won't be useful to many, but it allowed us to easily "measure" the complexity of Hyperlambda, by simply clicking CTRL+P on every single page of its documentation, adding these up, and finding its "total cognitive complexity". The numbers we found are as follows.

- There are 82 pages of tutorial-style documentation
- There are 129 pages of reference documentation

Of course, when printing a webpage, a lot of additional margins and such are applied, so you can probably remove 20% of the above figures, at which point you'd end up with 185 pages. In addition, this is the complete reference documentation for Magic and Hyperlambda, describing every single moving part of the project, both in tutorial-style articles and in the form of reference documentation. My guess would be that you could probably read through a third of that before understanding everything required for you to actually be productive with the thing.

This implies that unless you're Kim Peek and can read two pages every 5 seconds, you could probably read through 60 pages of documentation in roughly 180 minutes, possibly less if you concentrated on the task. At which point you'd know a completely new programming language. Compare that with the 1,500 pages or so for a programming language such as C++ or C#, and it's fairly easy to understand that Magic and Hyperlambda are much simpler and more easily understood.

Magic and Hyperlambda are the very DEFINITION of KISS!
https://docs.aista.com/blog/simplicity
Available SDKs

Aspose.CAD Cloud SDKs

Versioning Support in SDKs

Aspose.CAD Cloud SDKs have a "Version" property on their API configuration classes. The property allows you to target a specific API version. Supported versions are:
- v1 (default) - updated on a monthly basis or more frequently
- v2 (stable) - updated once a quarter
- v3 (frozen) - the previous "stable" version, updated once a quarter
https://docs.aspose.cloud/cad/available-sdks/
Connecting to your virtual machine

This article describes several ways to connect to a running virtual machine. The prerequisites include proper firewall settings and, in the case of cPouta, a floating IP.

Keypair-based SSH connection

Before connecting to your virtual machine, you can check its status in the Instances view of the cPouta/ePouta web interface.

[Figure: The Instances view of the cPouta web interface.]

The figure above shows a sample of the Instances view in the cPouta web interface. In this case, we can see that a virtual machine called test-instance-1 is active and running. The machine has two IP addresses, of which the address 86.50.169.56 is the public one. The machine uses a keypair called skapoor. The ePouta web interface looks similar, but instances in ePouta have only one IP address field, which is the virtual machine's local IP.

When your virtual machine has a public floating IP assigned in the cPouta cloud (or a VM local IP in the case of ePouta) and a security group that allows SSH, you can open a remote SSH connection to your instance. Any standard SSH client should work.

A new virtual machine only has a default user account and the root or administrator account, or in some cases, only the root account. The user account name depends on the image used. For images provided by CSC, it has usually been "cloud-user", but we are moving towards using the image's upstream defaults. For example, Ubuntu images use "ubuntu". You can only log in using keypair-based authentication, such as:

# for cPouta CentOS VMs
ssh cloud-user@public-ip -i keyfile.pem
# for cPouta Ubuntu VMs
ssh ubuntu@public-ip -i keyfile.pem
# for ePouta CentOS VMs
ssh cloud-user@vm-ip -i keyfile.pem
# for ePouta Ubuntu VMs
ssh ubuntu@vm-ip -i keyfile.pem

With the default CSC images, when you try logging in as root, you get a message that tells you which username to use instead. Some third-party images may use the root account directly or a completely different username.

Instead of specifying the path to the key for the SSH connection each time, you can use an SSH agent. To use an SSH agent on your local Linux or Mac OS X machine, start a shell and run the following commands:

ssh-agent /bin/bash
ssh-add ~/.ssh/keyname.pem

Now you should be able to connect to the public floating IP of your VM in cPouta (or the VM local IP in the case of ePouta) using SSH without needing to specify the key:

# for cPouta VMs
ssh cloud-user@public-ip
# for ePouta VMs
ssh cloud-user@vm-ip

Tip: You can enable agent forwarding when connecting through SSH to a virtual machine by using the -A flag:

ssh -A cloud-user@public-ip

By enabling agent forwarding, you allow the SSH agent running on the virtual machine to make use of the keys loaded in the SSH agent of your local workstation. You can use this feature to reduce the number of floating IPs used in your project:

- Assign a floating IP to one of your instances.
- SSH to the instance with agent forwarding enabled.
- You can now SSH from this instance to the other instances in the network using their private IPs.

Using these steps (sketched below), you need only a single public IP instead of one public IP for each of the instances. Warning: agent forwarding has some security implications which might be unacceptable in certain environments or under certain security policies.
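Putting those steps together, here is a minimal sketch of the single-floating-IP pattern. The addresses 86.50.169.56 (the example floating IP from this article) and 192.168.1.20 (a hypothetical private IP of a second instance) are illustrative only:

```bash
# Load your key into the local SSH agent once
ssh-agent /bin/bash
ssh-add ~/.ssh/keyname.pem

# Hop to the instance that holds the floating IP, forwarding the agent
ssh -A cloud-user@86.50.169.56

# From that instance, reach other instances via their private IPs;
# the forwarded agent supplies the key, so no key file is copied to the VM
ssh cloud-user@192.168.1.20
```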
Getting root access on a virtual machine

If you logged in using a default user account, you can run commands as root with sudo:

sudo <some command>

You can also get a root shell:

sudo -i

None of the accounts in the default images provided by CSC have password login enabled. In these images, you can use sudo without a password. If accounts that do not have root access are needed, they need to be created separately.

Connect to a machine using the Pouta virtual console

The recommended way of accessing Pouta instances is through an SSH connection, as explained earlier. Nevertheless, if you suddenly find yourself experiencing issues with the SSH connection, for example, the web interface includes a console tool that you can use to access your virtual machine directly.

To be able to use the console, you need to set up a password-based user account first. Once connected through SSH to your instance, you can use tools such as useradd and passwd to set up this type of account (see the sketch below). As indicated in our security guidelines, please do not enable remote login for this password-based account; use it only when you need to access the instance through the console.

You can open a console session by clicking Console in the instance dropdown menu. To input text in the console, click the grey bar. After this, you can log in with the user account and password you have created.

Note: Umlaut characters, such as ä or ö, do not work in the virtual console for most keymaps.
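A minimal sketch of setting up such a console-only account, run over SSH on the instance (the username consoleuser is a made-up example, not from the original article):

```bash
# Create a local user with a home directory
sudo useradd -m consoleuser

# Set its password interactively; use this account only for logins
# through the Pouta web console, not for remote SSH access
sudo passwd consoleuser
```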
https://docs.csc.fi/cloud/pouta/connecting-to-vm/
Getting Started

Fedora Kinoite is designed to be easy and straightforward to use, and specialist knowledge should generally not be required. However, Fedora Kinoite is built differently from other operating systems, and there are therefore some things that it is useful to know.

Fedora Kinoite has different options for installing software, compared with a standard Fedora Workstation (or other package-based Linux distributions). These include:

- Flatpak apps: the primary way that (GUI) apps get installed on Fedora Kinoite.
- Toolbox: used primarily for CLI apps, development and debugging tools, etc., but it also has support for graphical apps.
- Package layering: most Fedora packages can be installed on the system with the help of package layering. By default the system operates in pure image mode, but package layering is useful for things like libvirt, drivers, etc.

For information on Flatpak and package layering, see below.

Flatpak

Flatpak is the primary way that apps can be installed on Fedora Kinoite. You can add the Flathub remote from the terminal with the following command:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Each application on flathub.org can then be installed through the terminal by running the installation command at the bottom of its page, which should look something like this:

$ flatpak install flathub <package-name>

As an example, Firefox can be installed by running the following command, which can be found on Firefox's Flathub page:

$ flatpak install flathub org.mozilla.firefox

Flatpak command line

Additional details about the flatpak command line interface can be found in the official Flatpak documentation.

Package layering

Package layering works by modifying your Fedora Kinoite installation. As the name implies, it works by extending the set of packages from which Fedora Kinoite is composed. Good examples of packages to be layered would be:

- fish: an alternative Unix shell
- sway: a Wayland tiling compositor
- libvirt: the libvirt daemon

Most (but not all) RPM packages provided by Fedora can be installed on Fedora Kinoite using:

$ rpm-ostree install <package name>

This will download the package and any required dependencies, and recompose your Fedora Kinoite image with them. You can alternatively use rpm-ostree install --apply-live <pkg> to also temporarily apply the change directly to your currently booted deployment. rpm-ostree uses standard Fedora package names, which can be searched using DNF (DNF itself is not available on a Fedora Kinoite installation).
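As a small example tying the layering commands together (using fish from the list above; the commands mirror those shown in this section):

```bash
# Layer the fish shell into the next deployment (takes effect after a reboot)
rpm-ostree install fish

# Or layer it and also apply the change to the currently booted deployment
rpm-ostree install --apply-live fish
```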
https://docs.fedoraproject.org/id/fedora-kinoite/getting-started/
KAP Token

The KAP token is our connection to our community and contributors. All value created within the Kapital DAO flows right back to the token. The KAP utility token is non-inflationary, with only 1,000,000,000 minted. All fees are charged in KAP where possible. If fees or earnings are taken in a token that is not KAP, those exogenous tokens are swapped into KAP periodically in the treasury.

Fees payable in KAP:
- Asset management app (1% transaction fees)
- NFT promotions in-app (variable auction bids)
- Game listings in-app (variable auction bids)

Tokens to be swapped into KAP:
- Token deployment app (4% of FDV of deployed tokens)
- Gameplay earnings (Acquisition Council assets operated by the Kapital Guild)

Staking
- Dual-sided (KAP-ETH) liquidity staking pools will be available near launch
- Premium in-app features or discounts may be available to staking participants

Governance
- KAP holders may vote on partnerships, acquisitions, and possible rewards
- The treasury will periodically monetize its holdings to pay developers to maintain and improve the DAO's core protocol offerings.

Distribution & Vesting

Treasury
The ecosystem treasury is to be deployed for community incentives, partnerships, and liquidity rewards. We believe it is important to distribute more to the community and less to the team and investors, as the community is ultimately the guiding light that will drive the DAO. The more vested the community is in the shared vision, the more powerful we become together. These tokens have no lock but vest over 48 months.

Staking Rewards
The initial rewards pool for staking will be seeded with these tokens. The DAO may refill rewards. These tokens have a 12 month lock and vest over a further 12 months. (Note: User rewards will be unlocked one year after claiming, similar to projects like Illuvium.)

Market Liquidity
Both decentralized exchanges (DEXes) and centralized exchanges (CEXes) will be seeded with a small amount of KAP to enable initial liquidity. These tokens are unlocked and have no vesting.

Public Sale
Some amount of tokens will be immediately distributed in a public sale. These tokens are unlocked and have no vesting.

Private Rounds
These tokens are distributed to key investors, games, organizations, and partners in order to access the Asset Management Protocol or list their desired partners inside of the app. These tokens are locked for 12 months and vest over a further 12 months.

Core & Advisors
Core team members and key advisors are incentivized to construct the DAO and associated engineering platforms using these tokens. Note that not all of these have been allocated, as some have been reserved for additional hires. These tokens are locked for 12 months and vest over a further 12 months.

Future Hires
Some amount of tokens is reserved to incentivize additional team members beyond the initial core group to contribute to the project at an undetermined date in the future. From TGE, these tokens are estimated to be locked for 24 months and vest over 24 months.

Circulating Supply
The initial circulating supply will be somewhere between 12,000,000 - 50,000,000 KAP (1.2% - 5.0%) pending market demand. This supply will increase as ecosystem treasury activities and rewards for liquidity provision enter circulation.

Governance

As a DAO, we strongly believe that the collective voices of the contributors are the beacons that illuminate the best paths forward.
The DAO aims to become completely decentralized over a period of 24 months, wherein the community ownership of the DAO will eventually outpace the founding team and investors. This is deliberate and intentional. We're excited and lucky to be here at the beginning, but you are the future that will ultimately set the course of history. Holding the KAP token implies participation in a large number of diverse gaming and metaverse-related opportunities across the space, and may enable better access to early projects. There will only ever be one token, the KAP token. We will not be creating subDAOs and multiple feeder tokens to support individual games, contrary to the trend in current DAOs. The value remains in the KAP ecosystem.
https://docs.kapital.gg/tokenomics/kap-token
The effective rights a specific user has on an object - what the user can actually do with the object - are determined by examining ACEs in a specific order. The first ACE that matches both the user and the desired access right determines whether the user has that right on the object. An ACE matches the user if it specifies the user or any group the user is directly or indirectly a member of. An ACE matches the desired right if the right is listed in the ACE.

ACEs are examined in the following order: at each object, ACEs are checked in ACL order (the order displayed for an object on the Access Control page). The order can be changed among multiple ACEs on the same object by using the up arrow and down arrow buttons next to the ACEs. If no matching ACE is found after all levels are examined (back to the root or Global ACE), access is allowed by default (this is for backward compatibility with non-ACL mode).
https://docs.thunderstone.com/site/webinatorman/determining_effective_rights.html
Configuring global state policies

You can change behavior to be globally stateful by setting a global state policy with security firewall global-state-policy <protocol>. When state policies are defined, state rules for return traffic of that type need not be explicitly mentioned within the rule sets.

The following apply to global stateful rules:
- A global stateful rule affects only the firewall rules that explicitly (or by inference) refer to that protocol. This inference can occur if the protocol keyword has been omitted for TCP, ICMP, or ICMPv6 rules.
- ICMP sessions are created only for echo-request packets. Attempting to create a session for an echo-response results in a packet drop.
- It is usually not necessary to specify default-action (or default-log). Reserve default-action for use with a stateless firewall if you want to block only a few packets and pass all others using default-action accept.

Consider the configuration shown under "Example configuration" below. In this configuration, each of the rules 10, 20, 30, 40, 100, and 200 acts as if it also had state enable present. Rule 400 is not affected and does not enable a state. The following protocol-specific notes apply to this example:

ICMP
An IPv4 ICMP echo-request packet matches rule 10, creates a state, and allows ICMP echo-response packets to be received. The same applies to IPv6 ICMP echo-request packets and rule 20. ICMP sessions are created only for echo-request packets. Any attempt to create a session for an echo-response packet fails. An echo-response in the presence of the example ruleset will match rule 30 (or 40 for IPv6) and be dropped. Other ICMP packets are allowed through. In this example, it is not necessary to use the security firewall global-state-policy icmp rule because state enable can be used for rule 10 or 20. ICMP errors corresponding to an existing session are always passed (and NAT translated) unless explicitly blocked by a firewall rule.

TCP
For TCP, rule 200 allows outbound traffic to port 80 (http), and allows its response packets. Rule 400 allows out all other packets (including other TCP packets), but packets matching these rules do not create a state. Outbound TCP traffic to a port such as port 88 is allowed, but its response packets are blocked.

UDP
The example ruleset allows all UDP traffic, including requests and responses.

Example configuration

security {
    firewall {
        global-state-policy {
            icmp
            tcp
            udp
        }
        name GblState {
            rule 10 {
                action accept
                icmp {
                    name echo-request
                }
            }
            rule 20 {
                action accept
                icmpv6 {
                    name echo-request
                }
            }
            rule 30 {
                action accept
                protocol icmp
            }
            rule 40 {
                action accept
                protocol ipv6-icmp
            }
            rule 100 {
                action accept
                protocol udp
            }
            rule 200 {
                action accept
                destination {
                    port 80
                }
                protocol tcp
            }
            rule 400 {
                action accept
            }
        }
    }
}

Example steps to configure a global firewall policy to allow all return traffic

The following example shows the steps to configure a firewall globally to allow all return traffic. In addition, the firewall allows any traffic (such as FTP data) that is related to allowed traffic in the original direction. The firewall drops invalid traffic. To configure this global stateful behavior, perform the following steps in configuration mode.
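The concrete configuration-mode commands are not reproduced on this page. As a rough, non-authoritative sketch of what they typically look like, assuming the standard set/commit configuration-mode syntax and the GblState rule set from the example above:

```
# Declare the global state policies in configuration mode
set security firewall global-state-policy icmp
set security firewall global-state-policy tcp
set security firewall global-state-policy udp

# A stateful-by-inference rule such as rule 200 from the example
set security firewall name GblState rule 200 action accept
set security firewall name GblState rule 200 protocol tcp
set security firewall name GblState rule 200 destination port 80

# Apply the candidate configuration
commit
```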
https://docs.vyatta.com/en/supported-platforms/vrouter/configuration-vrouter/security-and-vpn/firewall/configuration-examples/stateful-behavior/configuring-global-state-policies
xCluster replication

YugabyteDB Anywhere allows you to use its UI or API to manage asynchronous replication between independent YugabyteDB clusters. You can deploy either unidirectional (master-follower) or bidirectional (multi-master) xCluster replication between two data centers.

Within the concept of replication, universes are divided into the following categories:
- A source universe contains the original data that is subject to replication. Note that in the current release, replicating a source universe that has already been populated with data can be done only by contacting Yugabyte Support.
- A target universe is the recipient of the replicated data. One source universe can replicate to one or more target universes.

For additional information on xCluster replication in YugabyteDB, see the following:
- xCluster replication: overview and architecture
- xCluster replication between universes in YugabyteDB

You can use the YugabyteDB Anywhere UI to set up and configure xCluster replication for universes whose tables do not contain data. In addition, you can monitor replication by accessing the information about the replication lag and enabling alerts on excessive lag.

Set up replication

You can set up xCluster replication as follows:
1. Open the YugabyteDB Anywhere UI and navigate to Universes.
2. Select the universe you want to replicate and navigate to Replication.
3. Click Configure Replication to open the dialog shown in the following illustration:
4. Provide the name for your replication.
5. Select the target universe.
6. Click Next: Select Tables.
7. From a list of common tables between source and target universes, select the tables you want to include in the replication and then click Create Replication, as per the following illustration:

Configure replication

You can configure an existing replication as follows:
1. Open the YugabyteDB Anywhere UI and navigate to Universes.
2. Select the universe whose existing replication you want to modify and then navigate to Replication, as per the following illustration:
3. Click Configure Replication and perform steps 4 through 7 from Set up replication.

View, manage, and monitor replication

To view and manage an existing replication, as well as configure monitoring, click the replication name to open the details page shown in the following illustration:

This page allows you to do the following:
- View the replication details.
- View and modify the list of tables included in the replication: select Tables, as per the following illustration; click Modify Tables; then use the Add tables to the replication dialog to change the table selection, as per the following illustration. The following illustration shows the Add tables to the replication dialog after modifications.
- Configure the replication: click Actions > Edit replication configuration, then make changes using the Edit cluster replication dialog shown in the following illustration.
- Set up monitoring by configuring alerts: click Configure Alert, then use the Configure Replication Alert dialog to enable or disable an alert issued when the replication lag exceeds the specified threshold, as per the following illustration.
- Pause the replication process (stop the traffic) by clicking Pause Replication. This is useful when performing maintenance. Paused replications can be resumed from the last checkpoint.
- Delete the universe replication by clicking Actions > Delete replication.
Set up bidirectional replication

You can set up bidirectional replication using either the YugabyteDB Anywhere UI or API by creating two separate replication configurations. Under this scenario, a source universe of the first replication becomes the target universe of the second replication, and vice versa.
https://docs.yugabyte.com/preview/yugabyte-platform/create-deployments/async-replication-platform/
Random data generation

In many studies, stimuli aren't defined ahead of time, but generated randomly for every participant anew. For this purpose, lab.js contains flexible (pseudo-)random data generation utilities.

All random data generation is handled by the util.Random() class. Every component in a study has direct access to this utility through its random property. Thus, to generate, for example, a random integer up to n in a component script, one would write this.random.range(n). (For the sake of completeness: outside of a component, the class can be instantiated and used by itself.)

As an example, to randomly compute a parameter (which you could later use inside your screen content, or anywhere else where placeholders are accepted), you might use the following code in a script that runs before the component is prepared:

this.options.parameters['greeting'] =
  this.random.choice(['aloha', 'as-salamualaikum', 'shalom', 'namaste'])

This will select one of the greetings at random and save it in the greeting parameter. The value is then available for re-use wherever parameters can be inserted, and will be included in the dataset. You can alternatively use these functions directly inside a placeholder, such as ${ this.random.choice(['hej', 'hola', 'ciao']) }, and include this placeholder in the screen content. This shows a random greeting without preserving the message in the data. In practice, of course, you'll probably be randomly generating more useful information, such as the assignment to one of several conditions.

class util.Random([options])
A set of utilities with (pseudo-)random behavior, all drawing on the same source of randomness. By default, the random source is the browser's built-in random number generator, Math.random.

util.Random.constrainedShuffle(array, constraints[, helpers={}, maxIterations=10**4, failOnMaxIterations=false])
This method will shuffle an array similarly to the shuffle function described above, but will check whether constraints are met before returning the result.

Defining constraints

The constraints argument can be used to define desired properties of the shuffled result, specifically the maximum number of repetitions of the same value in series, and the minimum distance between repetitions of the same value. These are defined using the maxRepSeries and minRepDistance parameters, respectively.

maxRepSeries restricts the number of repetitions of the same value in immediate succession. For example, maxRepSeries: 1 ensures that no value appears twice in sequence:

// Create a new RNG for demo purposes. Inside a component,
// scripts can use the built-in RNG via this.random
const rng = new lab.util.Random()

rng.constrainedShuffle(
  // (I was a terror since the public school era)
  ['party', 'party', 'bullsh!*', 'bullsh!*'],
  { maxRepSeries: 1 }
)
// ➝ ['party', 'bullsh!*', 'party', 'bullsh!*']

Similarly, minRepDistance ensures a minimum distance between successive repetitions of the same value (and implies maxRepSeries: 1). Note that minRepDistance: 2 requires that there is at least one other entry in the shuffled array between subsequent repetitions of the same entry, 3 requires two entries in between, and so on:

rng.constrainedShuffle(
  ['dj', 'dj', 'fan', 'fan', 'freak', 'freak'],
  { minRepDistance: 3 }
)
// ➝ ['dj', 'fan', 'freak', 'dj', /* ... */]

Custom constraint checkers

As an alternative to desired properties of the shuffled result, it's possible to define a custom constraint checker.
This is a function that evaluates a shuffled candidate array, and returns true or false to accept or reject the shuffled candidate, depending on whether it meets the desired properties:

// Function that evaluates to true only if
// the first array entry matches the provided value.
const firstThingsFirst = array => array[0] === "I'm the realest"

rng.constrainedShuffle(
  [
    "I'm the realest",
    "givin' lessons in physics",
    "put my name in bold",
    "bring the hooks in, where the bass at?",
    // ... who dat, who dat?
  ],
  firstThingsFirst
)
// ➝ Shuffled result with fixed first entry

util.Random.shuffleTable(table[, columnGroups=[]])
Shuffles the rows of a tabular data structure, optionally shuffling groups of columns independently.

This function assumes a tabular input in the form of an array of one or more objects, each of which represents a row in the table. For example, we might imagine the following tabular input:

const stroopTable = [
  { word: 'red', color: 'red' },
  { word: 'blue', color: 'blue' },
  { word: 'green', color: 'green' },
]

Here, the array (in square brackets) holds multiple rows, which contain the entries for every column. This data structure is common in lab.js: the entire data storage mechanism relies on it (though we hope you wouldn't want to shuffle your collected data!), and (somewhat more usefully) loops represent their iterations in this format. So you might imagine that each of the rows in the example above represents a trial in a Stroop paradigm, with a combination of word and color. However, you'd want to shuffle the words and colors independently to create random combinations. This is probably where the shuffleTable function is most useful: implementing a complex randomization strategy.

Invoked without further options, for example as shuffleTable(stroopTable), the function shuffles the rows while keeping their structure intact. This changes if groups of columns are singled out for independent shuffling, as in this example:

const rng = new lab.util.Random()
rng.shuffleTable(stroopTable, [['word'], ['color']])

Here, the word and color columns are shuffled independently of one another: the output will have the same number of rows and columns as the input, but values that were previously in a row are no longer joined.

Two more things are worth noting:
- Any columns not specified in the columnGroups parameter are treated as a single group: they are also shuffled, but values of these columns in the same row remain intact.
- Building on the example above, multiple columns can be shuffled together by combining their names, e.g. shuffleTable(stroopTable, [['word', 'duration'], ['color']]).
https://labjs.readthedocs.io/en/latest/reference/random.html
Configure next queries The Next Queries management tool lets you customize the next queries experience offered to your shoppers based on business needs. You can preview organic next queries that are displayed in the store for a query. You can add curated next queries and arrange them to determine their order of appearance, listing them before any organic next queries. interact Unsure about the difference between curated and organic next queries? See Next Queries overview. You create a next query configuration to determine the next queries (curated & organic) to display in the commerce store after shoppers perform a search using a query. Configurations are set up by query context: language and query. There can only be one configuration per query context combination. However, you can create multiple next query configurations. Getting started with the Next Queries tool When you access the Next Queries management tool, you view a list of all existing next query configurations for the instance. If you have not created any configurations, the list does not contain any rows. List view of the Next Queries management tool - (A) Action tools, (B) Row selector, (C) Page navigation options. The list table displays the details of the next query configurations: - Query: the initial query that shoppers must use to search before the next query is displayed. - Language: the language set up for your implementation, if applicable. - Curated: top five next queries created for the query - Inactive organic: organic next queries hidden in the store - Total organic: the total number of organic next queries available for the query. - Updated on/by: the user who performed the last changes on the configuration. You can select the number of configurations to be displayed in the list with the row selector (B) in the bottom left corner of the screen. You can navigate to other pages in the configuration list using the page navigation options (C) in the bottom right corner of the screen. To add a new configuration from the list screen, click the + NEW button. To edit or delete a configuration, select the corresponding checkbox next to the configuration you want to modify and use the action tools (A). Tip Hover over the values in the Curated and Inactive organic columns to view the complete list of curated and inactive organic next queries. Creating a next query configuration To add a new configuration from the list screen, click the + NEW button. Next Queries management tool - (A) Search box, (B) Preview, (C) Organic next queries. The search box (A) is used to define the initial query associated with the configuration. Shoppers must use this query before the next queries for the configuration are displayed. If you change the query in the search box, any unsaved changes for the configuration are lost. The preview (B) shows the product results that are returned by the search engine for the query to help you to identify what next queries shoppers might associate with the results. It displays a thumbnail of the product, the product name, and the product ID. The list of organic next queries (C) associated with the initial query is shown below the preview. You can use the left and right arrows to scroll through the organic next queries available. Organic next queries are displayed as tags. To create a configuration: - Enter the initial query in the search box. - Add curated next queries, or hide or show organic next queries as appropriate. 
You can create a configuration that contains only organic next queries, a combination of organic and curated next queries, or only curated next queries (hiding all organic next queries). - Click Save. Creating a curated next query You can customize the next query experience by adding curated next queries (C) that are displayed before any organic next queries (D) in the commerce store. Edit view of the Next Queries management tool - (A) Search box, (B) Preview, (C) Curated next queries, (D) Organic next queries. Rules for curated next queries: - Curated next queries must not contain the same term as the initial query. - Curated next queries cannot be duplicated. - Curated next queries must have results. To prevent shoppers from being redirected to a page with no results for the next query, every time you create a new curated next query, a request is sent to the search engine to check whether there are results in the product catalogue. If there are no results, you cannot save it. Since organic next queries may change over time depending on shopper behavior, you can choose to create a curated next query that is identical to an organic next query. A warning message appears to tell you that an organic next query already exists, but you can save the curated next query. Similarly, if you edit an existing configuration containing curated next queries and these next queries have become organic next queries thanks to shopper interaction, a warning message appears to help you identify the curated next queries so you can choose to delete them or hide the organic next query instead. Curated next queries have an icon to identify them (C). All curated next queries are listed before any organic next queries. Tip To feature an organic next query higher in the list, you can add it as a curated next query. This way, the query always appears in the first positions of the list despite any changes in shopper behavior. However, it is important to remember to hide the organic next query so shoppers won’t see the next query twice. To add a curated next query: - Click + New or select a configuration to edit. - Click the + icon next to the list of next queries. - Enter the new query you want to use as a next query and press Enter. The curated next query appears at the beginning of the list. - Click Save. Changing the order of next queries Curated next queries always appear before organic next queries. You cannot alternate curated and organic next queries in the list. The order of organic next queries cannot be changed as it is determined by the volume of shopper searches. You can change the order in which curated next queries appear to shoppers: - In the next queries carousel, drag and drop the curated next query to the corresponding position. - Click Save. Showing and hiding next queries Next queries can be displayed in multiple locations in your search experience. You can decide which organic next queries to show, as well as displaying any curated next queries you create. You can even choose to offer shoppers a different experience with each type, transparently highlighting curated next queries with a “promoted” icon in the frontend. To hide an organic next query: - In the next queries carousel, scroll to the organic next query you want to hide. - Click the X icon to the right of the query string in the organic next query tag. All hidden organic queries appear as the last items in the next query carousel. - Click Save. 
To show a hidden organic next query: - Scroll to the end of the next queries carousel to locate the organic next query you want to show again. - Click the arrow icon to the right of the query string in the organic next query tag. The newly activated organic next query moves to its organic position in the list. - Click Save. design tip As well as determining the number of next queries you display in the commerce store, you can choose to differentiate between curated and organic next queries in your store by using the highlight option in the Next Query components. For more information, see Next Query UI.
https://docs.empathy.co/explore-empathy-platform/fine-tune-search-and-discovery/configure-next-queries.html?utm_source=carousel-2
It can be very convenient to store a function in a variable so that it can be passed to a different function. For example, a function that performs numerical minimization needs access to the function that should be minimized.
https://docs.octave.org/v7.1.0/Function-Handles-and-Anonymous-Functions.html
Web UI (Experimental)

Start a web server

Open a new terminal or tmux session and run:

osmedeus server

The server will be available at an HTTPS URL. Then get the credentials from the config file:

cat ~/.osmedeus/config.yaml
...
client:
  password: xxxx
  username: osmedeus
...

View results in your workspace via the static path

NOTE that this static path doesn't require authentication. Be careful when you expose this prefix to other people.

By default, the web server allows you to view your workspace folder as static files via the endpoint /random-prefix-here/workspaces/. You can see the detail below.

cat ~/.osmedeus/config.yaml
server:
  ...
  prefix: random-prefix-here
  ...

curl -k
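A complete invocation along the lines of the curl -k call above might look like the following sketch; the host and port are placeholders, and the prefix comes from the config snippet shown above:

```bash
# List workspace results through the unauthenticated static path;
# -k skips TLS verification for the server's self-signed certificate
curl -k https://your-server:8000/random-prefix-here/workspaces/
```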
https://docs.osmedeus.org/installation/web-ui/
Create field aliases

An alias is an alternate name that you assign to a field, allowing you to use that name to search for events that contain that field. An alias is added to the event alongside the original field name to make it easier to find the data you want and to connect your data sources through Related Content suggestions. Field aliasing occurs at search time, not index time, so it does not transform your data. Field aliasing does not rename or remove the original field name. When you alias a field, you can search for it by its original name or by any of its aliases.

When to use field aliasing

Use field aliasing when the following situations are true:
- You use Log Observer Connect to get logs data, and do not have access to Log Observer Pipeline Management.
- You do not want to use any indexing capacity by creating additional log processing rules.
- You want to retain your original field names, so you do not want to create a log processing rule, which transforms your data at index time.
- You want the new alias to affect every log message, even those that came in from a time before you created the alias.
- You want to display a field separately at the top of the log details flyout.

Field aliasing examples

Displaying a field separately in the log details flyout

For convenience, your team can choose to always display a particular field separately at the top of the log details flyout. To display the field of your choice separately, alias the desired field to the message field. The log details flyout in Log Observer always displays the message field at the top. When you alias another field to the message field, it appears in the standalone section called MESSAGE at the top of the log details flyout. For example, say your team most frequently uses the summary field. Add an alias for the summary field called message. The summary field still exists but is also known as message and appears in the MESSAGE section of the log details flyout.

Normalizing field names

One data source might have a field called http_referrer. This field might be misspelled in your source data as http_referer. Use field aliases to capture the misspelled field in your original source data and map it to the expected field name without modifying your logging code. You may also have two data sources that call the same field by somewhat different names. For example, one data source might have a field called EventID while another data source might have a field called EventRecordID. You can tell by the values that these fields represent the same thing. You can create a field alias that maps EventID to EventRecordID to aggregate all logs with either of those field names to the field EventRecordID for analysis in Log Observer.

Create a new field alias

To create a new field alias, follow these steps:
1. In the Splunk Observability Cloud navigation menu, go to Settings > Log Field Aliasing and click Add a new alias.
2. In Original field name, enter the name of the field you want to create an alias for. Start typing, then select the field name you want from the drop-down list of all available fields.
3. In Alias, enter the new name that you want this field to have in addition to its original name. A list of other existing field names appears in the drop-down list.
4. Click Save and Activate.

Your new field alias appears in Your aliases and defaults to active. It is now applied to your search-time queries. To deactivate the alias, find the field in Your aliases and click the toggle next to Active.
Deactivate or delete a field alias

You can deactivate or delete a field alias if you do not want the alias to be applied to your search-time queries. You cannot edit a field alias. Instead, you must delete it and create a new one. To deactivate or delete a field alias, do the following:
1. Go to Settings > Log Field Aliasing.
2. Find the alias you want to deactivate or delete in the Your aliases list.
3. To deactivate the alias, click the toggle next to Active in the STATUS column.
4. To delete the alias, click the trash icon in the row for that alias.
https://docs.signalfx.com/en/latest/logs/alias.html
8.3. Cryptographic plugin: DDS:Crypto:AES-GCM-GMAC

The cryptographic plugin provides the tools and operations required to support encryption and decryption, digest computation, message authentication code computation and verification, key generation, and key exchange for DomainParticipants, DataWriters and DataReaders. Encryption can be applied over three different levels of the DDS protocol:

- The whole RTPS messages.
- The RTPS submessages of a specific DDS Entity (DataWriter or DataReader).
- The payload (user data) of a particular DataWriter.

The cryptographic plugin implemented in Fast DDS is referred to as "DDS:Crypto:AES-GCM-GMAC", in compliance with the DDS Security specification. This plugin is explained in detail below.

The DDS:Crypto:AES-GCM-GMAC plugin provides authenticated encryption using the Advanced Encryption Standard (AES) in Galois Counter Mode (AES-GCM). It supports 128-bit and 256-bit AES key sizes. It may also provide additional DataReader-specific Message Authentication Codes (MACs) using Galois MAC (AES-GMAC).

The DDS:Crypto:AES-GCM-GMAC plugin can be activated by setting the DomainParticipantQos properties() property dds.sec.crypto.plugin to the value builtin.AES-GCM-GMAC. Moreover, this plugin requires the activation of the Authentication plugin: DDS:Auth:PKI-DH.

The DDS:Crypto:AES-GCM-GMAC plugin is configured using the Access control plugin: DDS:Access:Permissions, i.e. the cryptography plugin is configured through the properties and configuration files of the access control plugin. If the DDS:Access:Permissions plugin will not be used, you can configure the DDS:Crypto:AES-GCM-GMAC plugin manually with the properties outlined in the following table.

The following is an example of how to set the properties of DomainParticipantQoS for the DDS:Crypto:AES-GCM-GMAC configuration. The next example shows how to configure DataWriters to encrypt their RTPS submessages and the RTPS message payload, i.e. the user data. This is done by setting the DDS:Crypto:AES-GCM-GMAC properties (properties()) corresponding to the DataWriters in the DataWriterQos. The last example shows how to configure DataReaders to encrypt their RTPS submessages. This is done by setting the DDS:Crypto:AES-GCM-GMAC properties (properties()) corresponding to the DataReaders in the DataReaderQos.
https://fast-dds.docs.eprosima.com/en/latest/fastdds/security/crypto_plugin/crypto_plugin.html
CRUD automation and Low-Code

CRUD is probably one of the most boring tasks you can possibly do as a software developer, especially if you have a large database with hundreds of tables and you need to create CRUD endpoints wrapping your entire database. With Magic you can automatically create CRUD API endpoints in seconds by simply clicking a button. Below is a screenshot. The process is extremely simple: choose your database, configure your endpoints, click a button - and two seconds later Magic has produced thousands of lines of code for you, 100% automatically, wrapping every single table inside CRUD HTTP endpoints without you having to write a single line of code yourself.

You can of course configure the process and provide custom authorisation requirements for each endpoint, publish socket messages as endpoints are invoked, create log entries if you wish, etc. Magic also automatically generates join SQL statements for referenced tables, and you can select which CRUD verbs you want Magic to create for you on a per-table basis. Basically, you've got more or less 100% control over the process through checkboxes and a simple-to-use UI.

Customising your endpoints with Hyper IDE

After you're done generating your CRUD HTTP web API, you can easily customise your endpoints. Magic's output is Hyperlambda, which is an easily understood DSL you can edit with Magic's integrated IDE. Below is a screenshot of Hyper IDE allowing you to edit your endpoints after the "Crudifier" is done doing its magic. Hyper IDE is a lightweight web-based integrated development environment allowing you to easily edit your code, save it, and then immediately see the result. Hyper IDE provides syntax highlighting and autocomplete, and even works perfectly from your phone. Although Hyper IDE is best suited for Hyperlambda, you can still use it to edit CSS, HTML, JavaScript, TypeScript, and most other popular programming languages.

You can also invoke your endpoints directly from Hyper IDE, such as illustrated below. Hyper IDE has its own integrated "Postman" component allowing you to immediately invoke your endpoints as you're editing your code. This lets you immediately see the result of changes to your code, providing you with "instant feedback" and reducing the amount of time it takes to discover bugs in your code.

Creating HTTP endpoints with SQL

Every now and then you need some sort of KPI and/or statistical component in your frontend, at which point typically the bulk of your code is SQL. With Magic you can easily create HTTP endpoints using nothing but good old-fashioned SQL. Below is a screenshot of the integrated SQL editor where we're creating a fairly complex join SQL statement, and then wrapping our SQL inside an HTTP endpoint. The following screenshot also illustrates how to allow your endpoint to take arguments, which again become parts of your SQL statement to, for instance, filter your results.

Automatically generate an Angular frontend

When you're done generating your backend, you can also just as easily create an Angular frontend by simply clicking another button. Below is a screenshot of how your end result might end up looking. You can also test the following automatically generated CRUD app with the username/password combination of admin/admin. Please realise that we started out with nothing but an existing database, created a CRUD web API in 1 second, and then created the following Angular app in 1 second.
All in all, we're talking about 2 seconds of "coding" here, and a few minutes to build and deploy our result to a VPS.

Magic is Open Source

As a final touch, everything we did above is 100% open source and free of charge to use, also in your own closed source projects. If you're interested in trying it out for yourself, you can download Magic below. Magic also contains many more components helping you out with your software development efforts, ranging from an integrated SQL editor replacing MySQL Workbench, to integrated audit logging components, to a Micro Service "AppStore", etc. What we showed you here was only 3/4 of the components you find in Magic. Magic has 16 components in total. I would appreciate it if you gave our project a star on Magic's GitHub project page. And of course, if you try out Magic, we would love to hear what you think about it.
https://docs.aista.com/blog/crud-automation-and-low-code
2022-09-25T02:34:36
CC-MAIN-2022-40
1664030334332.96
[array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/backend-crud.jpg', 'CRUD automation'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/hyper-ide-actions.jpg', 'Hyper IDE'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/hyper-ide-blog.jpg', 'Invoking your endpoints'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/sql-web-api.jpg', 'SQL HTTP endpoints'], dtype=object) array(['https://raw.githubusercontent.com/polterguy/polterguy.github.io/master/images/sakila.jpg', 'CRUD Angular frontend'], dtype=object) ]
docs.aista.com
Using Greasy metascheduler in Puhti

In many cases, a computational analysis job contains a number of similar independent sub-tasks. A user may have several datasets that are analyzed in the same way, or the same simulation code is executed with a number of different parameters. These kinds of tasks are often called farming or embarrassingly parallel jobs, as the work can in principle be distributed to as many processors as there are subtasks to run. In Puhti, GREASY can be used as an alternative to array jobs for running embarrassingly parallel tasks. GREASY is especially useful in cases where you need to define dependencies between the tasks to be executed. GREASY was originally developed at BSC. At CSC we use the GREASY version that includes the extensions developed at CSCS. For detailed documentation, please check the GREASY user guide.

Strengths of Greasy
- Easy to use
- OpenMP threading support

Disadvantages of Greasy
- Only simple dependencies can be handled
- No error recovery strategies
- Creates job steps
- Integrates with Slurm, but subtasks must use identical resources

Note: You should not use GREASY to run MPI parallel tasks. GREASY is not able to manage MPI jobs effectively.

GREASY Task lists

Job scheduling with GREASY is based on task lists that have one task (command) per row. In the simplest approach, the task list is just a file containing the commands to be executed. For example, analyzing 200 input files with program my_prog could be described with a task list containing 200 rows:

my_prog < input1.txt > output1.txt
my_prog < input2.txt > output2.txt
my_prog < input3.txt > output3.txt
...
my_prog < input200.txt > output200.txt

If needed, you can define dependencies between jobs with the [#line number#] syntax. For example, if you would like to merge the output files of the previous example into one file, you could add one more row to the task list:

my_prog < input1.txt > output1.txt
my_prog < input2.txt > output2.txt
[#1#] my_prog < input3.txt > output3.txt
...
my_prog < input200.txt > output200.txt
[#1-200#] cat output* > all_output

By default, all tasks are executed in the directory where the task list processing is launched, but you can add task-specific execution directories to the task list with:

[@ /path/to/folder_task1/ @] my_prog < input1.txt > output1.txt
[@ /path/to/folder_task2/ @] my_prog < input2.txt > output2.txt
[@ /path/to/folder_task3/ @] my_prog < input3.txt > output3.txt
...
[@ /path/to/folder_task200/ @] my_prog < input200.txt > output200.txt

Note: You should not include srun in your tasks, as GREASY will add it when it executes the tasks. Please check the GREASY user guide for a more detailed description of the task list syntax.

Executing a task list

To use GREASY in Puhti, load the GREASY module:

module load greasy

The simplest way to submit a task list is:

sbatch-greasy tasklist

The command will ask you to specify: 1. the number of cores reserved for one task (-c), 2. the estimated average duration for one task (-t), 3. the number of nodes used to execute the tasks (-N), 4. the accounting project (-A), and 5. the estimated memory usage for one task (-m). Alternatively, you can define part or all of these parameters on the command line:

sbatch-greasy tasklist -c 1 -t 15:00 -N 1 -A project_2012345

With the option -f filename you can make sbatch-greasy save the GREASY batch job file without submitting it for execution. This batch job file can then be further edited according to your needs, e.g. if you need to set up additional SLURM parameters. Submit it normally with:

sbatch filename

Caveats

Performance in threaded (OpenMP) jobs can be sensitive to the thread binding.
If your job is parallelized via OpenMP, make sure the performance of the individual subjobs has not suffered. A single subjob must fit in one node, but such a job could also be run as an array job; GREASY is thus better suited for jobs (much) smaller than one node. While GREASY only creates one batch job, it will create a job step for each task it runs. A huge number of job steps in a batch job will be problematic. If you need to run hundreds or thousands of job steps, please contact [email protected] to look for alternatives.
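As a small illustration, a long task list like the 200-row example above can be generated with a short script instead of being written by hand. This is only a sketch and assumes input files named input1.txt ... input200.txt in the current directory:

# Sketch: write a GREASY task list for 200 inputs plus a final merge task
# that depends on all of them (using the [#1-200#] dependency syntax).
with open("tasklist", "w") as f:
    for i in range(1, 201):
        f.write(f"my_prog < input{i}.txt > output{i}.txt\n")
    f.write("[#1-200#] cat output* > all_output\n")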
https://docs.csc.fi/computing/running/greasy/
2022-09-25T01:00:28
CC-MAIN-2022-40
1664030334332.96
[]
docs.csc.fi
Whether the PolygonCollider2D's shape is automatically updated based on a SpriteRenderer's tiling properties. When this is true, the Collider's shape is generated from a SpriteRenderer, if one exists as a component of the parent GameObject. The shape generated is dependent on the SpriteRenderer.drawMode. Regeneration happens when the autoTiling property is set to true, and subsequently every time a change is detected in the associated SpriteRenderer.
https://docs.unity3d.com/2019.2/Documentation/ScriptReference/PolygonCollider2D-autoTiling.html
2021-06-13T02:52:23
CC-MAIN-2021-25
1623487598213.5
[]
docs.unity3d.com
First, you'll want to ensure the NPS add-on is enabled. To do that, navigate to the NPS section of your Account Settings (or get there via Quick Jump) and click the switch to enable the add-on. Once you do, you'll see this configuration page. You have these options when configuring which users should be surveyed.

'Age' of the user (in days)
Specify a non-zero number here if you'd like to only survey users who've used your product for a minimum amount of time. Important: By default, we compare each user against the first time we saw the user use your product. So if you are installing Vitally for the first time and you specify a value of 7 here, that means it'll take 7 days before any user sees a survey. However, you can tell us when a user started using your product by sending a createdAt user trait when you identify the user (using the Vitally.js library). If you send us a createdAt trait, we'll use that value when determining if a user should see a survey.

Minimum sessions
Specify a non-zero number here if you'd like to only survey users who have logged a minimum number of sessions in your product. Note that sessions are unique per day - i.e. a user has a maximum of one session on any given day. Thus, if you specify 5 here, that means a user must use your product over 5 unique days before seeing a survey. Important: Unlike the 'age' option above, there's no way to tell us how many sessions a user has had with your product programmatically. So if you are installing Vitally for the first time and you specify a value of 5 here, it'll take at least 5 days (and often more) before any user sees a survey. To avoid that, you can specify 0 here and set the createdAt user trait documented above to start collecting NPS survey responses immediately.

Consecutive sessions to show a survey
Users are busy, so often they'll not want to submit an NPS survey. This option allows you to configure how many consecutive sessions a user should be shown a survey, as long as the survey is dismissed in each session. For example, if you specify a value of 3 here, that means we'll show the survey to the user for (up to) 3 consecutive sessions. If they dismiss the survey each time, then we'll stop showing the survey from their 4th session onward. Again, remember that sessions are unique per day. So if a user dismisses the survey at 12:01 AM on Tuesday, we won't show the user another survey until 12:01 AM Wednesday (at the earliest).

'Wait time' before showing a new survey
Once a survey is submitted OR the user last dismisses a survey (according to the 'consecutive sessions' option specified above), this option lets you specify the number of months to wait before showing that user another survey.

Only show surveys to users at accounts matching a Playbook's rules
Playbooks give you the ability to perform automated actions on accounts that meet certain conditions. By default, we'll show the NPS survey to all users that meet the above requirements, but you can filter that dataset even further using Playbooks. The possibilities are endless here, but here are a few examples you may want to consider:
- Only show the survey to users at subscribed, paying customers (i.e. omit trials)
- Only show the survey to users at subscribed customers that are set to renew within the next 2 weeks
- Only show the survey to users at your 'Enterprise' accounts

NPS surveys are potentially shown to your users once you install Vitally.js, our Javascript library. Here's how to do just that!
Step-by-step details on how to install our Javascript snippet and use the Javascript API to identify each user & account can be found here as well as in the NPS configuration page in your account.

Best practices when using Vitally.js for NPS
- Identify the date the user started an account with your product via the createdAt trait. If you send us this trait, we'll use it when determining if a user is 'old' enough to see an NPS survey (see the configuration options for more details).
- You should call Vitally.user and Vitally.account in each session, but you only need to do so once. If you have already installed Vitally.js to track your users and their product usage, you do not need to do so again for NPS.
- You can only use Vitally.track on paid plans - Vitally.track is an API that allows you to track interactions a user has with your product. It is only supported on paid plans with access to product analytics. If you try to use it on our "Free NPS" plan, we will discard your tracks.

Once you've installed Vitally.js and have identified the logged-in user and account, call Vitally.nps('survey') to potentially show the survey to the logged-in user. The survey will only be shown if the user matches the criteria of your configuration while the add-on is enabled. Vitally.nps('survey') also optionally takes a second argument of options that lets you further customize the survey displayed to users. Putting this all together, if you want to override all options (with static strings), you'd make an API call like this:

Vitally.nps('survey', {
  productName: 'Pied Piper',
  delay: 1000,
  primaryColor: '#000000',
  npsQuestion: 'Hey yo! You like Pied Piper?',
  followUpTitle: 'Solid feedback my dude!',
  followUpSubtitle: 'Mind elaborating a bit though?',
  thanksTitle: 'You are amazing!',
  thanksSubtitle: 'Bye for now'
});

For more advanced configuration, you can specify functions for arguments and dynamically change the copy users see based on their feedback:

Vitally.nps('survey', {
  productName: 'Pied Piper',
  delay: 1000,
  primaryColor: '#000000',
  npsQuestion: ({ productName }) => `Hey yo! You like ${productName}?`,
  followUpTitle: ({ productName, score }) => {
    return score < 7 ? `Oh no! What can we do better?` : `Solid feedback my dude!`;
  },
  followUpSubtitle: ({ productName, score }) => {
    return score < 7 ? `How can we get that score up?` : `Mind elaborating a bit though?`;
  },
  thanksTitle: ({ productName, score, feedback }) => {
    return score < 7 ? `Thanks. We'll do better - promise!` : `You are amazing!`;
  },
  thanksSubtitle: ({ productName, score, feedback }) => {
    return score < 7 ? `Until next time.` : `Bye for now.`;
  },
  minLabel: 'No chance',
  maxLabel: 'Definitely!',
  placeholderText: ({ productName, score }) => {
    return score <= 7 ? 'I wish it could...' : 'I love it because...';
  },
  submitText: 'Submit my response',
  dismissText: 'Let me get back to work'
});

Since Vitally.nps('survey') obeys your NPS configuration, testing the survey display could get a bit annoying. To force display the survey, replace survey with show.

Vitally.nps('show', { productName: 'Pied Piper', delay: 0 });

Important: Don't ship with show though - use survey in production. Otherwise, you'll be annoying your users a good bit :) show still sends submitted responses to your Vitally account though. If you'd rather that not happen (i.e. you only want to see what the survey experience is like), replace show with test.

Vitally.nps('test', { productName: 'Pied Piper', delay: 0 });

To summarize:
Vitally.nps('survey') - Displays a survey to users only if your configuration says we should.
Use this in production.
Vitally.nps('show') - Force displays a survey to users and skips your configuration. Do not use this in production. Only use if you want to test the survey experience AND see the response display in your Vitally account.
Vitally.nps('test') - Force displays a survey to users and skips your configuration. Do not use this in production. Only use if you want to test the survey experience WITHOUT seeing the response in your Vitally account.

Putting it all together, the full installation snippet looks like this:

<script type="text/javascript" src="" defer></script>
<script type="text/javascript">
  !function(n,t,r){for(var i=n[t]=n[t]||[],o=function(r){i[r]=i[r]||function(){for(var n=[],t=0;t<arguments.length;t++)n[t]=arguments[t];return i.push([r,n])}},u=0,c=["init","user","account","track","nps"];u<c.length;u++){o(c[u])}}(window,"Vitally");
  Vitally.init('YOUR_TOKEN_HERE');
  Vitally.nps('survey', {
    productName: 'Pied Piper',
    autoLoadSegment: true,
    delay: 1000
  });
</script>
https://docs.vitally.io/using-vitallys-nps-surveys/installing-and-configuring-nps-surveys
2021-06-13T02:59:53
CC-MAIN-2021-25
1623487598213.5
[]
docs.vitally.io
Exchange Server post-installation tasks

Read the following topics to help you configure your new Exchange 2016 or Exchange 2019 organization.

Note: If you've enabled the Scripting Agent in your Exchange organization, and you keep a customized %ExchangeInstallPath%Bin\CmdletExtensionAgents\ScriptingAgentConfig.xml file on all of your Mailbox servers, you need to copy that file to every new Mailbox server that you deploy in your organization (the file isn't used on Edge Transport servers). The default value of %ExchangeInstallPath% is %ProgramFiles%\Microsoft\Exchange Server\V15, but the actual value is wherever you installed Exchange on the server. The default name of the file on a new Exchange server is %ExchangeInstallPath%Bin\CmdletExtensionAgents\ScriptingAgentConfig.xml.sample. As part of enabling the Scripting Agent in your organization, you need to rename this file to ScriptingAgentConfig.xml and customize it or replace it with your existing ScriptingAgentConfig.xml file. For more information about the Scripting Agent in Exchange 2013 (which still applies to Exchange 2016 and 2019), see Scripting Agent.
https://docs.microsoft.com/en-us/Exchange/plan-and-deploy/post-installation-tasks/post-installation-tasks?view=exchserver-2019
2022-08-07T20:33:42
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
Simulated EEPROM 1 and 2

Simulated EEPROM 1 and 2 legacy storage. The Simulated EEPROM 1 and 2 system (typically referred to as SimEE) is designed to operate under the Token Manager API and provide a non-volatile storage system. For this reason, the SimEE1 and SimEE2 components do not have any public functions to call.

Note: SimEE is a legacy storage system, so this component exists to enable the modern Token Manager to access existing devices that already have data stored in SimEE.

Since flash write cycles are finite, the Simulated EEPROM's primary purpose is to perform wear leveling across several hardware flash pages, ultimately increasing the number of times tokens may be written before a hardware failure. SimEE1 is designed to consume 8kB of upper flash within which it will perform wear leveling. SimEE2 is designed to consume 36kB of upper flash within which it will perform wear leveling. The Simulated EEPROM needs to periodically perform a page erase operation to recover storage area for future token writes. The page erase operation requires an ATOMIC block of typically 21ms. Since this is such a long time to not be able to service any interrupts, the page erase operation is under application control, providing the application the opportunity to decide when to perform the operation and complete any special handling that might be needed.

Note: The best, safest, and recommended practice is for the application to regularly and always perform a page erase when the application can expect and deal with the page erase delay. Page erase will immediately return if there is nothing to erase. If there is something that needs to be erased, doing so as regularly and as soon as possible will keep the SimEE in the healthiest state possible.
https://docs.silabs.com/gecko-platform/3.2/service/api/group-simeepromcom
2022-08-07T18:46:02
CC-MAIN-2022-33
1659882570692.22
[]
docs.silabs.com
Welcome to the SKOV A/S documentation download site. Here you will find all available language versions of our product documentation.
https://docs.skov.com/
2022-08-07T18:15:35
CC-MAIN-2022-33
1659882570692.22
[]
docs.skov.com
TULIP DOCS (LTS6 REV2)

QA-T103 Record Placeholders : 02 - Edit a record placeholder in an app

OBJECTIVE
Additional detail to help the purpose of the test be clear: This test makes sure that record placeholders in an app can be edited, using the records tab in the left sidebar of the app editor.

CRITICAL CRITERIA OF TEST (CCT)
- A record placeholder in an app can be edited
- A record placeholder's table cannot be edited
- A record placeholder cannot be edited to have an empty name

PRECONDITION
A record placeholder has already been created in this app. If you are doing this QA not on qa.tulip.co you will need to have different credentials and change all base urls from https://qa.tulip.co/ to https://<your instance>.tulip.co/

Covers models M_APP_VER_RECD routes R_APPE
https://gxp-docs.tulip.co/lts6-rev2/tests/QA-T103.html
2022-08-07T19:34:25
CC-MAIN-2022-33
1659882570692.22
[]
gxp-docs.tulip.co
csc_matrix

Wrapper of scipy.csc_matrix that ensures best compatibility with numpy.ndarray. The following methods have been overridden to ensure that numpy.ndarray objects are returned instead of numpy.matrixlib.defmatrix.matrix:
- todense
- _add_dense

Warning: this format is memory inefficient to allocate new sparse matrices. Consider using:
- scipy.sparse.lil_matrix, which supports slicing, or
- scipy.sparse.coo_matrix, though slicing is not supported :(

todense: As per scipy.spmatrix.todense but returns a numpy.ndarray.
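To illustrate the compatibility issue this wrapper addresses, here is a small standalone Python sketch using plain scipy/numpy (not SHARPy's wrapper itself):

# Sketch illustrating why returning numpy.ndarray (rather than numpy.matrix) matters.
# This uses plain scipy/numpy, not SHARPy's csc_matrix wrapper.
import numpy as np
from scipy.sparse import csc_matrix

A = csc_matrix(np.array([[1.0, 0.0], [0.0, 2.0]]))

dense_matrix = A.todense()   # scipy returns numpy.matrix here
dense_array = A.toarray()    # ndarray, which mixes cleanly with other numpy code

print(type(dense_matrix))    # <class 'numpy.matrix'>
print(type(dense_array))     # <class 'numpy.ndarray'>

# numpy.matrix changes the semantics of '*' (matrix product vs element-wise),
# which is the kind of surprise the SHARPy wrapper avoids by always returning ndarray.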
https://ic-sharpy.readthedocs.io/en/latest/includes/linear/src/libsparse/csc_matrix.html
2022-08-07T18:54:13
CC-MAIN-2022-33
1659882570692.22
[]
ic-sharpy.readthedocs.io
AWS Fargate capacity providers

Amazon ECS on AWS Fargate capacity providers allow you to use both Fargate and Fargate Spot capacity with your Amazon ECS tasks. This is described in further detail below.

Fargate capacity provider considerations

The following should be considered when using Fargate capacity providers.
- The Fargate Spot capacity provider is not supported for Windows containers on Fargate.
- The Fargate Spot capacity provider is not supported for Linux tasks with the ARM64 architecture; Fargate Spot only supports Linux tasks with the X86_64 architecture.
- The Fargate and Fargate Spot capacity providers don't need to be created; they only need to be associated with a cluster before they can be used. For more information, see Adding Fargate capacity providers to an existing cluster.
- The Fargate and Fargate Spot capacity providers are reserved and cannot be deleted. You can disassociate them from a cluster using the PutClusterCapacityProviders API.
- When a new cluster is created using the Amazon ECS classic console along with the Networking only cluster template, the FARGATE and FARGATE_SPOT capacity providers are associated with the new cluster automatically.
- Using Fargate Spot requires that your task use platform version 1.3.0 or later (for Linux). For more information, see AWS Fargate platform versions.
- When tasks using the Fargate and Fargate Spot capacity providers are stopped, a task state change event is sent to Amazon EventBridge. The stopped reason describes the cause. For more information, see Task state change events.
- A cluster may contain a mix of Fargate and Auto Scaling group capacity providers; however, a capacity provider strategy may only contain either Fargate or Auto Scaling group capacity providers, but not both. For more information, see Auto Scaling Group Capacity Providers in the Amazon Elastic Container Service Developer Guide.

Handling Fargate Spot termination notices

When tasks using Fargate Spot capacity are stopped due to a Spot interruption, a two-minute warning is sent before the task is stopped. The warning is sent as a task state change event to Amazon EventBridge and a SIGTERM signal to the running task. When using Fargate Spot as part of a service, the service scheduler will receive the interruption signal and attempt to launch additional tasks on Fargate Spot if capacity is available. A service with only one task will be interrupted until capacity is available. To ensure that your containers exit gracefully before the task stops, the following can be configured:
- A stopTimeout value of 120 seconds or less can be specified in the container definition that the task is using. Specifying a stopTimeout value gives you time between the moment the task state change event is received and the point at which the container is forcefully stopped. If you don't specify a stopTimeout value, the default value of 30 seconds is used. For more information, see Container timeouts.
- The SIGTERM signal must be received from within the container to perform any cleanup actions. Failure to process this signal will result in the task receiving a SIGKILL signal after the configured stopTimeout and may result in data loss or corruption.

The following is a snippet of a task state change event displaying the stopped reason and stop code for a Fargate Spot interruption.
{
  "version": "0",
  "id": "9bcdac79-b31f-4d3d-9410-fbd727c29fab",
  "detail-type": "ECS Task State Change",
  "source": "aws.ecs",
  "account": "111122223333",
  "resources": [
    "arn:aws:ecs:us-east-1:111122223333:task/b99d40b3-5176-4f71-9a52-9dbd6f1cebef"
  ],
  "detail": {
    "clusterArn": "arn:aws:ecs:us-east-1:111122223333:cluster/default",
    "createdAt": "2016-12-06T16:41:05.702Z",
    "desiredStatus": "STOPPED",
    "lastStatus": "RUNNING",
    "stoppedReason": "Your Spot Task was interrupted.",
    "stopCode": "TerminationNotice",
    "taskArn": "arn:aws:ecs:us-east-1:111122223333:task/b99d40b3-5176-4f71-9a52-9dbd6fEXAMPLE",
    ...
  }
}

The following is an event pattern that is used to create an EventBridge rule for Amazon ECS task state change events. You can optionally specify a cluster in the detail field to receive task state change events for that cluster. For more information, see Creating an EventBridge Rule in the Amazon EventBridge User Guide.

{
  "source": ["aws.ecs"],
  "detail-type": ["ECS Task State Change"],
  "detail": {
    "clusterArn": ["arn:aws:ecs:us-west-2:111122223333:cluster/default"]
  }
}

Creating a new cluster that uses Fargate capacity providers

When a new Amazon ECS cluster is created, you can specify one or more capacity providers to associate with the cluster. The capacity providers are used to define a capacity provider strategy which determines the infrastructure your tasks run on. When using the AWS Management Console, the FARGATE and FARGATE_SPOT capacity providers are associated with the cluster automatically when using the Networking only cluster template. For more information, see Creating a cluster using the classic console.

To create an Amazon ECS cluster using Fargate capacity providers (AWS CLI)
Use the following command to create a new cluster and associate both the Fargate and Fargate Spot capacity providers with it.

create-cluster (AWS CLI):

aws ecs create-cluster \
  --cluster-name FargateCluster \
  --capacity-providers FARGATE FARGATE_SPOT \
  --region us-west-2

Adding Fargate capacity providers to an existing cluster

You can update the pool of available capacity providers for an existing Amazon ECS cluster by using the PutClusterCapacityProviders API. Adding either the Fargate or Fargate Spot capacity providers to an existing cluster is not supported in the AWS Management Console. You must either create a new Fargate cluster in the console or add the Fargate or Fargate Spot capacity providers to the existing cluster using the Amazon ECS API or AWS CLI.

To add the Fargate capacity providers to an existing cluster (AWS CLI)
Use the following command to add the Fargate and Fargate Spot capacity providers to an existing cluster. If the specified cluster has existing capacity providers associated with it, you must specify all existing capacity providers in addition to any new ones you want to add. Any existing capacity providers associated with a cluster that are omitted from a PutClusterCapacityProviders API call will be disassociated from the cluster. You can only disassociate an existing capacity provider from a cluster if it's not being used by any existing tasks. These same rules apply to the cluster's default capacity provider strategy. If the cluster has an existing default capacity provider strategy defined, it must be included in the PutClusterCapacityProviders API call. Otherwise, it will be overwritten.
put-cluster-capacity-providers (AWS CLI):

aws ecs put-cluster-capacity-providers \
  --cluster FargateCluster \
  --capacity-providers FARGATE FARGATE_SPOT existing_capacity_provider1 existing_capacity_provider2 \
  --default-capacity-provider-strategy existing_default_capacity_provider_strategy \
  --region us-west-2

Running tasks using a Fargate capacity provider

You can run a task or create a service using either the Fargate or Fargate Spot capacity providers by specifying a capacity provider strategy. If no capacity provider strategy is provided, the cluster's default capacity provider strategy is used. Running a task using the Fargate or Fargate Spot capacity providers is supported in the AWS Management Console. You must add the Fargate or Fargate Spot capacity providers to the cluster's default capacity provider strategy if using the AWS Management Console. When using the Amazon ECS API or AWS CLI you can specify either a capacity provider strategy or use the cluster's default capacity provider strategy.

To run a task using a Fargate capacity provider (AWS CLI)
Use the following command to run a task using the Fargate and Fargate Spot capacity providers.

aws ecs run-task \
  --capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1 \
  --cluster FargateCluster \
  --task-definition task-def-family:revision \
  --network-configuration "awsvpcConfiguration={subnets=[string,string],securityGroups=[string,string],assignPublicIp=string}" \
  --count integer \
  --region us-west-2

When running standalone tasks using Fargate Spot it is important to note that the task may be interrupted before it is able to complete and exit. It is therefore important that you code your application to gracefully exit within 2 minutes of receiving a SIGTERM signal, and that the work can be resumed later. For more information, see Handling Fargate Spot termination notices.

Create a service using a Fargate capacity provider (AWS CLI)
Use the following command to create a service using the Fargate and Fargate Spot capacity providers.

create-service (AWS CLI):

aws ecs create-service \
  --capacity-provider-strategy capacityProvider=FARGATE,weight=1 capacityProvider=FARGATE_SPOT,weight=1 \
  --cluster FargateCluster \
  --service-name FargateService \
  --task-definition task-def-family:revision \
  --network-configuration "awsvpcConfiguration={subnets=[string,string],securityGroups=[string,string],assignPublicIp=string}" \
  --desired-count integer \
  --region us-west-2
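To make the graceful-shutdown requirement concrete, here is a minimal Python sketch of a containerized worker that traps SIGTERM and exits cleanly within the stopTimeout window. The checkpointing helper is a hypothetical placeholder, not part of any ECS API.

# Minimal sketch: handle the SIGTERM sent on a Fargate Spot interruption and exit
# gracefully before the configured stopTimeout expires.
import signal
import sys
import time

shutting_down = False

def handle_sigterm(signum, frame):
    global shutting_down
    shutting_down = True  # finish the current unit of work, then exit

signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    # ... process one small unit of work ...
    time.sleep(1)

# Persist progress so the task can be resumed when capacity is available again.
# checkpoint_progress()   # hypothetical helper for your own state
sys.exit(0)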
https://docs.aws.amazon.com/AmazonECS/latest/userguide/fargate-capacity-providers.html
2022-08-07T20:46:27
CC-MAIN-2022-33
1659882570692.22
[]
docs.aws.amazon.com
Application UI

The application UI is designed to be as simple as possible and give the user clear step-by-step instructions on how to set up and run a case. The application UI in dicehub has the following elements:
- Global navigation: main navigation with the most important global functions, information about the current application and the presence of other collaborators.
- Toolbar: a group of menus and controls to allow quick access to functions that are commonly used in the application.
- Side navigation: a vertical list of navigational links to global dicehub domains. In this navigation you can find:
  - Main steps: functions of the application, broken down into the most important individual steps.
  - Configurations: stack of different versions (configurations) for the application.
  - Storage: the place for all the files (configurations, snapshots, resources and results) for the application.
  - Configurations editor: editor for all configuration files which describe the dicehub flow of the application.
  - Application information: all the information about the application such as namespace, project id, application id. (Intended for debugging)
  - Keyboard shortcuts: a list which describes combinations of keystrokes on a computer keyboard which invoke the most common commands.
- Footer: bottom section of the application which contains useful information about the application:
  - The loading state
  - The name of the active 3D visualization scene
  - The name of the active template
- Main steps: a visual representation of progress through the steps to complete a necessary process, in most cases to configure and run a dicehub flow. Common categories for the tasks can include:
  - Geometry: a geometry is uploaded or imported from another application.
  - Setup: main steps to configure the application, for example turbulence model or boundary condition selection.
  - Optional: settings which are not required for a successful run.
  - Run: customizable settings for the machines where the run is deployed and executed.
  - Result: the result of a successful run can be inspected and downloaded here.
- Step controls: content block which shows settings for each step.
- 3D scene view: main 3D scene. (The name of the active scene view is shown in the bottom right corner in the footer, for example "Config" or "Result")
- Logo: a visual representation of dicehub to promote public identification. This logo redirects you to your dashboard.
- Main application menu: global application menu with access to advanced options.
- Application state indicator: includes the current state of the run and a menu of all current and previous runs in the application.
- Application name: shows the namespace and title of the application. The dropdown menu allows switching between applications without going back to the projects page.
- User avatars: group of user avatars that are online at this moment.
- Utilities: the global system-level utilities in the application. They open panels that provide access to other places in the application. Utilities include:
  - Settings: application main settings such as colormap or views.
  - Chat: online chat, which provides real-time transmission of text messages.
  - Help: help information about the content in the application.
- Objects view: panel with additional controls for the 3D scene.

Last update: January 22, 2022
https://docs.dicehub.com/latest/guide/ui/application_ui/
2022-08-07T19:19:13
CC-MAIN-2022-33
1659882570692.22
[array(['../images/application_ui.png', 'Application UI'], dtype=object)]
docs.dicehub.com
Traffic Management TSB's traffic routing capabilities let you easily control traffic flow between services under its control. TSB simplifies the approach to structuring how you interact with applications and services, and by extension, simplifies the processes behind tasks such as traffic routing, staged rollouts, and migrations. This page will walk you through: ✓ Gateways in TSB ✓ Intelligent traffic routing ✓ Multi-cluster capabilities in TSB ✓ Traffic splitting, migrations, and canary deployments TSB controls Istio and Envoy in each cluster. All traffic inside the cluster is proxied by Envoy, while your local control plane enforces the rules defined within the management plane. Any changes that you make or policies that you apply are stored and distributed by the global control plane to local control planes. Gateways in TSB As traffic enters through your TSB environment, it passes through several gateways to reach the final application it needs. First, it passes through an Application Edge Gateway ("edge gateway" for short), which is a shared multi-tenant gateway responsible for cross-cluster load balancing. The Edge Gateway forwards traffic into an Application Ingress Gateway ("app gateway" for short), or potentially even the workload itself (common for VM workloads). Application Ingress Gateways are owned by a Workspace, and come in two flavors: shared and dedicated to a specific app. This means that workspace's owners remain in control of how traffic flows through the gateway -- whether that gateway is dedicated to their own application, or is shared by many. Tetrate Service Bridge uses this ownership to keep your mesh configuration isolated at runtime, but there are always classes of misconfiguration that TSB cannot prevent. For example, we cannot ensure that one regex based path match on a shared gateway does not shadow others. So, we recommend deploying many Ingress Gateways: one for every application (roughly, namespace), if you can. Envoy is cheap and lightweight, and auto scaling keeps overhead minimal. Multiple Ingress Gateways allow for isolation and control of your traffic. It keeps your applications safe and reduces risk of outages because teams cannot write configuration that conflicts by design. As your team's usage of the mesh matures and your confidence grows, we can consolidate most application traffic onto a set of shared Ingress Gateways for efficiency, and leave a few dedicated gateways for critical applications. TSB lets you make this decision in a smooth gradient, running one gateway or thousands. Intelligent Traffic Routing For TSB to manage application traffic, it needs to know where all your services are and whether they're available. TSB relies on the local control planes in each cluster to provide information on the locality, health and availability of the services in that cluster. Using that information, Istio and TSB will keep your traffic as local as possible, including in the same availability zone, for multi-availability zone clusters. Envoy's superpowers allow TSB to do this per-request, not per connection. This gives you fine grained control over how your application communicates: keeping all requests sticky to a specific backend instance, canarying a portion based on header or percentage to a new version, sending requests to any available backend no matter where it runs. TSB can do all of that, while keeping your traffic as local as possible, always. 
TSB advertises information about which services are available in each cluster to local control plane instances in every other cluster. This information is used at runtime to enable seamless traffic failover across clusters. But it's also communicated back to the management plane so that application developers, platform owners, and security admins can see near-real-time information on service health, connectivity, and controls.

Multi-cluster Routing

TSB makes multi-cluster management easy for organizations -- whether you need multiple clusters running active-active handling user load or active-passive waiting for failover, and even if you have many clusters across teams in your organization. Of course, an Ingress Gateway can be configured to expose a hostname (e.g. echo.tetrate.io) for an application (e.g. echo) allowing it to be accessed by external clients. TSB enables the same kind of behavior for services in the mesh, exposing them to other applications in the mesh privately, even across clusters. Applications in the mesh (or external clients via an Edge Gateway) can access that application by hostname without knowing any details about where the service is deployed. Combined with intelligent traffic routing, this gives us a super power: the mesh will route traffic to local instances of a service if they're available, or out into other clusters that expose the service if it's not local (or if it fails locally). And just because the application is exposed with a gateway doesn't mean we have to send our traffic through that gateway: when the application is local we'll communicate directly peer-to-peer, eliminating that hairpin traffic common in systems today. What goes on underneath (thanks to the global control plane) makes intelligent traffic routing even smarter. The global control plane allows the local control planes to advertise available services, and makes service discovery available across all clusters. This means your system can serve more requests and maintain a higher availability.

API Gateway capabilities Everywhere

In the Service Mesh Concepts section we discussed API Gateways and the service mesh, and how TSB brings API gateway functionality everywhere in the application traffic platform. Using TSB, you can configure how traffic flows to your application by annotating your OpenAPI spec, and TSB will implement your intents across Gateways and/or internally in the mesh: you have the flexibility to choose. But we don't force you to think about it either: the platform team can establish where rules are enforced and application developers can author configuration. You can easily implement the following across your entire infrastructure, or only on behalf of specific teams and services:
- Authentication and authorization for applications
- Rate limiting
- WAF policy
- Request transformation via WASM

Traffic Splitting and Migrations

Traffic management within TSB is designed for safety and simplicity across the entire infrastructure. As an application developer, you only need to push a configuration once to change or update how your services handle traffic. TSB makes the process of migrating to microservices, splitting traffic, and canarying deployments much easier to manage and safer to perform at scale. The configurations to enable migrations, traffic splitting, and canary deployments are all similar, and you, as the application or platform owner, define how traffic flows to different versions of your application or different parts of your infrastructure.
This gives you incredible control over your application's runtime environment. With the observability capabilities TSB brings, you can also have confidence that everything is working as you expected. If not -- you can roll back or deploy an older version in a cinch. One of TSB's biggest goals is keeping your applications available, even while you're migrating infrastructures or moving from monoliths to microservices.
https://docs.tetrate.io/service-bridge/1.4.x/en-us/concepts/traffic_management
2022-08-07T18:47:49
CC-MAIN-2022-33
1659882570692.22
[array(['/assets/images/tsb-traffic-flow-ef6e0f6b84a2edede99366a98ff11134.png', 'Basic application traffic flow in TSB.'], dtype=object) array(['/assets/images/tsb-bgp-0418b614be04627b54bb5a8a35d3cbd5.png', 'Creation of bar.com service in us-east propagates up to TSB then out to clusters via XCP.'], dtype=object) ]
docs.tetrate.io
VMware Telco Cloud Service Assurance is a real-time automated service assurance solution designed to holistically monitor and manage complex 5G NFV virtual and physical infrastructure and services end to end, from the mobile core to the RAN to the edge. From a single pane of glass, VMware Telco Cloud Service Assurance provides cross-domain, multi-layer, automated assurance in a multi-vendor and multi-cloud environment. It provides operational intelligence to reduce complexity, perform rapid root cause analysis, and see how problems impact services and customers across all layers, lowering costs and improving customer experience. For information about setting up and using VMware Telco Cloud Service Assurance, see the VMware Telco Cloud Service Assurance Documentation.
https://docs.vmware.com/en/VMware-Telco-Cloud-Service-Assurance/2.0.0/rn/vmware-telco-cloud-service-assurance-200-release-notes/index.html
2022-08-07T19:45:39
CC-MAIN-2022-33
1659882570692.22
[]
docs.vmware.com
TULIP DOCS (LTS6 REV2)

QA-T377 Verifying Completions

OBJECTIVE
Test the following URS:
- 800 - All records shall be Contemporaneous. Data captured should include the date and time of the activity/action.
- 801 - All records, including audit trail, shall be Legible, i.e. shall be human readable throughout the retention period.
- 802 - All records shall be Original; all originally recorded data shall be maintained.
- 803 - All records shall be Accurate. There must be the ability to build accuracy checks into the design of the system or configure verification for manually entered data as necessary.
- 804 - All records shall be Complete. Records shall include all data related to activity with no deletion or overwriting.
- 805 - All records shall be Consistent, i.e. captured and recorded in the same manner and in the correct sequence of the activities or actions being recorded.
- 806 - All records shall be Enduring, i.e. stored, managed and unalterable for the full retention period.
- 807 - All records shall be Available and can be accessed for review, audit or inspection over the lifetime of the record (retention period).
- 808 - All records have to include the date, time, action or activity, user, and reason for change if applicable.
- 809 - All records and electronic signatures have to include an accurate date and time stamp. Date & time stamps shall be configurable with the possibility to include the day, month, year, hour, minutes, seconds and time zone.
- 810 - Provide managed authorized access to all records and electronic signatures including data, information, configurations, and data files.
- 811 - All records, electronic signatures, and audit trails must be protected to ensure they are readily retrievable throughout a pre-configured retention period.
- 813 - All data shall be Attributable; data must be identified to the person who did the data collection. Records shall include information about how the data was acquired, the action/activity performed, where and when.
- 815 - Provide an unalterable and enduring link between records and their associated electronic signatures; they cannot be removed, changed, copied, transferred or deleted.
- 816 - When capturing/acquiring an electronic signature the user must be able to see in human readable format the user's full name, date & time, and meaning of signature; the record itself should contain these elements.
- 817 - Ability to require multiple electronic signatures for a record, i.e. co-signer, verifier, etc.
- 818 - If more than one signature is required the electronic signature shall capture the role of each signatory, e.g. trainer, verifier, co-signer, etc.
- 819 - All electronic signatures have to authenticate the signatory by two distinct elements (e.g. username and password; at least one being a private element), or a secure unambiguous biometrics system that cannot be used by anyone other than their genuine owner.
- 820 - Electronic signatures have to be secured and not allowed to be falsified. They can only be used by their genuine owners.
- 821 - Ability to define access security levels for records and electronic signatures, i.e. user groups and user roles and their associated privileges to system resources and data.
- 852 - System must provide accurate time server synchronization and shall utilize the same time source.

PRECONDITION
The app "Verifying Completions" has already been created. The user has access to multiple credentials for the instance. The instance is set up as an LDAP instance.
Covers urs 800 801 802 803 804 805 806 807 808 809 810 811 813 815 816 817 818 819 820 821 852
https://gxp-docs.tulip.co/lts6-rev2/tests/QA-T377.html
2022-08-07T18:39:14
CC-MAIN-2022-33
1659882570692.22
[]
gxp-docs.tulip.co
View, use, and share certificates and badges

About digital certificates and badges
Microsoft Certification certificates and badges are proof that you've passed the exams required for certification. But more than that, they're a symbol of real-world skills and your commitment to keeping pace with technology. On your LinkedIn profile, career-related social media posts, or embedded in your email signature, your badge is recognized as a trusted validation of your achievement. Certification certificates can be downloaded and printed for your records (or framed on your wall so they're visible in your next video call... and your mom might like a copy, too!)

Badge details
Badges are digital representations of your achievements consisting of an image and metadata uniquely linked to you.

Access your certificates and badges on Learn
- Click on the photo avatar and select Profile from the dropdown menu.
- Select Certifications from the menu inside your profile. Your first two certifications will be listed in the Certifications section. (If you have more than two certifications, select "View all" under the last visible certification.)
- Scroll to the certification representing the badge or certificate you'd like to access and select "View certification details" below the description.
- Under the certification title, there are two link options: "Print certificate" and "Share." If you'd like to share your badge or certificate, select "Share" to be taken to the Credly dashboard.

Manage Microsoft certifications and badges in Credly
Microsoft partners with Credly so you can manage, share, and verify your Microsoft Certifications. The platform provides labor market insights.
- Share your exams and certifications with your professional network.
- Learn which employers are looking for your skills.
- Discover how your salary relates to your skills.
- Search for new job opportunities associated with your certification and apply in just a few clicks.
- Watch the video: Labor Market Insights to see how easy it is to discover opportunities in your market.

Claiming badges via Credly
When you earn a certification badge, Microsoft will notify you via email with a link to your badge on Credly. If you're accessing Credly for the first time, create an account using the same email address you used to enroll in your certification exams. You'll receive a confirmation email from Credly with one more link to complete enrollment. Tip: You can also enroll in, and access, Credly directly from your Learn profile. See Access your certificates and badges on Learn above.

Automatically accept badges from Credly
From the Credly platform, adjust your account settings to automatically accept badges. Watch this step-by-step process in the Credly video "How do I manage and share my digital badge?" Note that while the video shows badge email notifications coming from Credly, your email notifications will come from Microsoft.

Sharing your badges from Credly
- Go to the Credly platform, via your Learn profile or the link in your badge claim email from Microsoft.
- Select "Share" to go to the "Share your badge" page.

- When passing one exam results in a certification, a badge is issued for the certification rather than the exam.
- Badges are not available for some legacy exams and certifications.

My newer badge looks different from the ones I've earned previously. What changed? We updated our badge designs to unite the look and feel of badges across all of Microsoft.
The new design has the same functionality as the previous design. It's just spiffier.

What if I don't want my Microsoft badge to be made public? You're in control of how and when your badge is shared. Adjust the privacy settings in your account on the Credly platform any time to make your profile and badges private. You're not required to accept your badges and may disregard the notification emails if you don't wish to participate.

If I have questions about my Microsoft badge, whom can I contact? If you have any questions about the Credly platform or about how to claim your badge, reference resources in the Credly Help Center or submit a help request. If you have questions about missing badges, visit Certification support.

What happens to my badge if an exam or certification is retired? If you earned a badge for an exam or certification that was chosen to be part of the digital badge program but was later retired, you can still accept the badge via Credly's platform.
https://docs.microsoft.com/en-us/certifications/view-use-share-certificates-badges
2022-08-07T20:12:41
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
Crate prctl

Module provides a safe abstraction over the prctl interface. Provided functions map to a single prctl() call, although some of them may be usable only on a specific architecture or only with root privileges. All known enums that may be used as parameters are provided in this crate. Each function returns a Result which will be Err(errno) in case the prctl() call fails. To run tests requiring root privileges, enable the feature "root_test".
https://docs.rs/prctl/latest/prctl/
2022-08-07T19:04:36
CC-MAIN-2022-33
1659882570692.22
[]
docs.rs
VS Code Sessions

RStudio Workbench allows you to launch VS Code sessions from the home page via the Job Launcher, if configured. Users can start VS Code sessions that allow them to work with VS Code while still working within the administrative framework provided by RStudio, such as authentication, PAM session management, etc. For more information, please see:
- the VS Code Sessions section of the Administration Guide
- Using VS Code Sessions with RStudio Workbench
https://docs.rstudio.com/rsw/integration/vs-code/
2022-08-07T19:58:09
CC-MAIN-2022-33
1659882570692.22
[]
docs.rstudio.com
Topic: Scylla Seed Nodes

Overview

Learn: What a seed node is, and how seed nodes should be used in a Scylla cluster.
Audience: Scylla Administrators

Note: The seed node function was changed in Scylla Open Source 4.3 and Scylla Enterprise 2021.1; if you are running an older version, see Older Version Of Scylla below.

A Scylla seed node is a node specified with the seeds configuration parameter in scylla.yaml. It is used by a new node joining the cluster as the first contact point. It allows nodes to discover the cluster ring topology on startup (when joining the cluster). This means that any time a node is joining the cluster, it needs to learn the cluster ring topology, meaning:
- What the IPs of the nodes in the cluster are
- Which token ranges are available
- Which nodes will own which tokens when a new node joins the cluster

Once the nodes have joined the cluster, the seed node has no further function. The first node in a new cluster needs to be a seed node.

Older Version Of Scylla

In Scylla releases older than Scylla Open Source 4.3 and Scylla Enterprise 2021.1, the seed node has one more function: it assists with gossip convergence. Gossiping with other nodes ensures that any update to the cluster is propagated across the cluster. This includes detecting and alerting whenever a node goes down, comes back, or is removed from the cluster. This function was removed, as described in Seedless NoSQL: Getting Rid of Seed Nodes in ScyllaDB. If you run an older Scylla release, we recommend upgrading to version 4.3 (Scylla Open Source) or 2021.1 (Scylla Enterprise) or later. If you choose to run an older version, it is good practice to follow these guidelines:
- The first node in a new cluster needs to be a seed node.
- Ensure that all nodes in the cluster have the same seed nodes listed in each node's scylla.yaml.
- To maintain resiliency of the cluster, it is recommended to have more than one seed node in the cluster.
- If you have more than one seed in a DC with multiple racks (or availability zones), make sure to put your seeds in different racks.
- You must have at least one node that is not a seed node. You cannot create a cluster where all nodes are seed nodes.
- You should have more than one seed node.
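For illustration, seed nodes are typically listed under the seed_provider section of scylla.yaml. The snippet below is a sketch with placeholder IP addresses; check the scylla.yaml shipped with your version for the exact structure.

# Sketch of the seeds setting in scylla.yaml (placeholder IPs; verify against your scylla.yaml)
seed_provider:
  - class_name: org.apache.cassandra.locator.SimpleSeedProvider
    parameters:
      - seeds: "192.168.1.101,192.168.1.102"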
https://docs.scylladb.com/stable/kb/seed-nodes.html
2022-08-07T18:39:52
CC-MAIN-2022-33
1659882570692.22
[]
docs.scylladb.com
Multiple Financing Programs

Overview

Multiple Financing Programs (MFPs) enable you to selectively offer a specific custom financing program to consumers based on product or cart attributes that are defined in your e-commerce or marketing platform. This guide walks you through different ways to apply custom financing programs.

Please note: Merchants are responsible for the custom code that leverages Affirm's MFP functionality.

Financing programs determine the terms offered to consumers. Once your financing program has been created, you will be provided with a financing program ID and Promo ID by the Affirm Client Success team for your financing program(s). These IDs are utilized later in the sections below regarding the implementation of MFP. The Affirm Promo ID displays the correct terms in the Affirm marketing messages, monthly payment pricing, and modal, while the financing program value provides these terms during the checkout process.

Basis for financing programs
The terms and conditions for any given custom financing program are based on various consumer protection laws as well as the capabilities of your e-commerce platform.

We've grouped MFP integrations into the following areas:
- Affirm Direct integration
- Platform integrations

If you're new to Affirm and haven't started your integration yet, you may want to begin with our development quickstart or platform quickstart before setting up MFPs.

Affirm Direct integration
Once you've completed your Affirm Direct integration, you can apply custom financing programs at checkout, and determine when a particular financing program should be applied.

Platform integrations
You can integrate MFPs into the following platforms:

Reporting
In your settlement reports, we will include the financing program name that was active for a given transaction.
https://docs.affirm.com/developers/docs/multiple-financing-programs
2022-08-07T18:28:38
CC-MAIN-2022-33
1659882570692.22
[]
docs.affirm.com
You can use Amazon CloudWatch Logs to monitor, store, and access your log files from EC2 instances, CloudTrail, and other sources. You can then retrieve the associated log data from CloudWatch Logs using the CloudWatch console, CloudWatch Logs commands in the Amazon Web Services CLI, the CloudWatch Logs API, or the CloudWatch Logs SDK.

You can use CloudWatch Logs to:
- Monitor logs from EC2 instances in real time.
- Monitor CloudTrail logged events: You can create alarms in CloudWatch and receive notifications of particular API activity as captured by CloudTrail. You can use the notification to perform troubleshooting.
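As a quick illustration of working with the CloudWatch Logs API programmatically, here is a small sketch using Python and boto3 rather than the Java SDK documented on this page; the log group name and filter pattern are placeholders.

import boto3

# Placeholder region, log group name and filter pattern - adjust to your environment.
logs = boto3.client("logs", region_name="us-east-1")

# List a few log groups in the account.
for group in logs.describe_log_groups(limit=5)["logGroups"]:
    print(group["logGroupName"])

# Retrieve recent events containing the word ERROR from a hypothetical log group.
resp = logs.filter_log_events(
    logGroupName="/my-app/production",
    filterPattern="ERROR",
    limit=10,
)
for event in resp["events"]:
    print(event["timestamp"], event["message"])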
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/logs/package-summary.html
2022-08-07T18:59:29
CC-MAIN-2022-33
1659882570692.22
[]
docs.aws.amazon.com
set terminal

Sets the behavior of the system terminal.

- key query-help - Enables or disables help by using a question mark (?). The default option is enable.
- length - The number of rows for the display length on the terminal screen.
- pager - The program to use as the terminal pager. If no pager is specified, the default (less) is used.
- width - The number of columns for the display width on the terminal screen.

Operational mode.

Use this command to set the behavior of the system terminal.
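For illustration, a few example invocations built from the parameters listed above; the exact value forms are assumptions, so check tab completion on your system before relying on them:

set terminal length 50
set terminal width 120
set terminal pager less
set terminal key query-help enable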
https://docs.vyatta.com/en/supported-platforms/vnf/configuration-vnf/system-and-services/basic-system-configuration/system-management-commands/set-terminal
2022-08-07T19:59:04
CC-MAIN-2022-33
1659882570692.22
[]
docs.vyatta.com
Dynamic

Class for the dynamic linearised UVLM solution. Only linearisation around a steady state is supported. The class is built upon Static, and inherits all the methods contained there.

Scaling factors are stored in self.ScalingFact. Note that while scaled quantities (time, circulation, angular speeds) are scaled accordingly, FORCES ARE NOT. These scale by \(q_\infty b^2\), where \(b\) is the reference length and \(q_\infty\) is the dynamic pressure.

UseSparse=True: builds the A and B matrices in sparse form. C and D are dense anyway, so the sparse format cannot be applied to them.

Methods:
- solve_steady solves for the steady state. Several methods are available.
- solve_step solves one time step.
- freqresp: ad-hoc method for fast frequency response (only implemented for remove_predictor=False).

Attributes:
- Number of states - Type int
- Number of inputs - Type int
- Number of outputs - Type int
- Number of panels \(K = MN\) - Type int
- Number of wake panels \(K^*=M^*N\) - Type int
- Number of panel vertices \(K_\zeta=(M+1)(N+1)\) - Type int
- Number of wake panel vertices \(K_{\zeta,w} = (M^*+1)(N+1)\) - Type int

To do: Upgrade to linearise around an unsteady snapshot (adjoint).

Produces a state-space model of the form

\[\mathbf{x}_{n+1} = \mathbf{A}\,\mathbf{x}_n + \mathbf{B}\,\mathbf{u}_{n+1}, \qquad \mathbf{y}_n = \mathbf{C}\,\mathbf{x}_n + \mathbf{D}\,\mathbf{u}_n\]

with \(\boldsymbol{\Gamma}\in\mathbb{R}^{MN}\) being the vector of vortex circulations, \(\mathbf{\zeta}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of vortex lattice coordinates and \(\mathbf{f}\in\mathbb{R}^{3(M+1)(N+1)}\) the vector of aerodynamic forces.

Generate profiling report for assembly and save it in self.prof_out. To read the report:
import pstats
p = pstats.Stats(self.prof_out).sort_stats('cumtime')
p.print_stats(20)

balfreq settings (the DictBalFreq dictionary) include, among others:
- frequency: defines the limit frequencies for balancing. The balanced model will be accurate in the range [0, F], where F is the value of this key. Note that the units of F must be consistent with the units specified in the self.ScalingFacts dictionary.
- method_low: ['gauss', 'trapz'] specifies whether to use Gauss quadrature or the trapezoidal rule in the low-frequency range.
- truncation_tolerance: if get_frequency_response is True, allows truncating the balanced model so as to achieve a prescribed tolerance in the low-frequency range.

Example: The following dictionary

DictBalFreq = {
  'frequency': 1.2,
  'method_low': 'trapz',
  'options_low': {'points': 12},
  'method_high': 'gauss',
  'options_high': {'partitions': 2, 'order': 8},
  'check_stability': True
}

balances the state-space model self.SS.

Generate profiling report for the balfreq function and save it into self.prof_out. The function also returns a pstats.Stats object. To read the report:
import pstats
p = pstats.Stats(self.prof_out).sort_stats('cumtime')
p.print_stats(20)

Produces a sparse matrix

\[\bar{\mathbf{C}}(z)\]

where

\[z = e^{k \Delta t}\]

such that the wake circulation frequency response at \(z\) is

\[\bar{\boldsymbol{\Gamma}}_w = \bar{\mathbf{C}}(z) \bar{\boldsymbol{\Gamma}}\]

Scale the state-space model based on self.ScalingFacts.

Steady-state solution from the state-space model. Warning: these methods are less efficient than the solver in the Static class, Static.solve, and should be used only for verification purposes. The "minsize" method, however, guarantees the inversion of a K x K matrix only, similarly to what is done in Static.solve. To speed up the solution and use minimal memory: solve for bound vorticity (and propagate the wake), then compute the output separately.

Unpacks the state vector into physical constituents for full order models.
The state vector \(\mathbf{x}\) of the form\[\mathbf{x}_n = \{ \delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma}_{w_n},\, \Delta t\,\delta\mathbf{\Gamma}'_n,\, \delta\mathbf{\Gamma}_{n-1} \}\]is unpacked into\[\{ \delta \mathbf{\Gamma}_n,\, \delta \mathbf{\Gamma}_{w_n},\, \delta\mathbf{\Gamma}'_n \}\]
- Parameters: xvec (np.ndarray) – State vector
- Returns: Column vectors for bound circulation, wake circulation and circulation derivative packed in a tuple.
- Return type: tuple
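A minimal usage sketch (not part of the original documentation) of the balanced-frequency options described above. It assumes an already-assembled Dynamic instance named lin_sys and an assemble_ss() call with no arguments; both are assumptions for illustration only.

# Sketch only: `lin_sys` is assumed to be a sharpy.linear.src.linuvlm.Dynamic
# instance that has already been set up from a linearisation point.
import pstats

DictBalFreq = {
    'frequency': 1.2,               # balanced model accurate in [0, 1.2] (scaled units)
    'method_low': 'trapz',
    'options_low': {'points': 12},
    'method_high': 'gauss',
    'options_high': {'partitions': 2, 'order': 8},
    'check_stability': True,
}

lin_sys.assemble_ss()               # build the A, B, C, D matrices (assumed call)
lin_sys.balfreq(DictBalFreq)        # balance self.SS as described above

# Read the profiling report, as suggested in the docstring:
p = pstats.Stats(lin_sys.prof_out).sort_stats('cumtime')
p.print_stats(20)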
https://ic-sharpy.readthedocs.io/en/latest/includes/linear/src/linuvlm/Dynamic.html
2022-08-07T20:09:46
CC-MAIN-2022-33
1659882570692.22
[]
ic-sharpy.readthedocs.io
This method sets the position and size of the Camera viewport in a single call. If you're trying to change what the Camera is looking at in your game, see the method Camera.setScroll instead. This method is for changing the viewport itself, not what the camera can see. By default a Camera is the same size as the game, but it can be made smaller via this method, allowing you to create mini-cam style effects by creating and positioning a smaller Camera viewport within your game. Returns this Camera instance.
https://newdocs.phaser.io/docs/3.55.1/focus/Phaser.Cameras.Scene2D.Camera-setViewport
2022-08-07T18:34:52
CC-MAIN-2022-33
1659882570692.22
[]
newdocs.phaser.io
Config: Mapping Files We released a stable version of a new Transifex Client, compatible with the API 3.0, and offered as single executable. Please start using the new Transifex CLI here, since this software is considered deprecated (as of January 2022) and will sunset on Nov 30, 2022. The tx config command allows you to update the .tx/config file that was created when you ran the tx init command. This configuration file is used to map files in a local repo/directory to resources in Transifex. There are different options for mapping files that are already on your local machine ( mapping and mapping-remote) or files that are remote on Transifex ( mapping-remote). Before v0.13.0 of the client, the tx config command was tx set. You can still use tx set; however, it’ll be deprecated in a future release of the client. Running tx config interactively If you run tx config without any parameters, the client will guide you through an interactive configuration experience to configure the .tx/config file. Here’s a sample run of tx config with the interactive experience: Running tx config command for you... The Transifex Client syncs files between your local directory and Transifex. The mapping configuration between the two is stored in a file called .tx/config in your current directory. For more information, visit. Enter the path to your local source file: video.srt Next, we’ll need a path expression pointing to the location of the translation files (whether they exist yet or not) associated with the source file ‘video.srt’. You should include <lang> as a wildcard for the language code. Enter a path expression: translations/<lang>/video.srt Here’s a list of the supported file formats. For more information, check our docs at. 1. SubRip subtitles - .srt Select the file format type [1]: 1 You’ll now choose a project in a Transifex organization to sync with your local files. You belong to these organizations in Transifex: 1. Dunder Mifflin 2. Bluth Company Select the organization to use [1-2]: 1 We found these projects in your organization. 1. Emails 2. iOS 3. Marketing Videos 4. Website 5. Zendesk 6. Create new project (show instructions)... Select the project to use [1-6]: 3 Updating source for resource marketing-videos.video-srt ( en -> video.srt ). Setting source file for resource marketing-videos.video-srt ( en -> video.srt ). Updating file expression for resource marketing-videos.video-srt ( translations/<lang>/video.srt ). Here’s the content of the .tx/config file that was created: [marketing-videos.video-srt] source_file = video.srt file_filter = translations/<lang>/video.srt source_lang = en type = SRT Configuring one local file (mapping) Setting up your .tx/config file using the mapping subcommand is the most common workflow. You should use this subcommand if you already have source/target language files on your local machine and you want to associate them with resources in Transifex. If you want to create mappings for multiple source files at once, consider using the mapping-bulk subcommand instead (next section). $ tx config mapping -r <project_slug.resource_slug> --source-lang <lang_code> \ --type <i18n_type> [--source-file <file>] --expression '<path_expression>' Note Before v0.13.0 of the client, the mapping subcommand was an option: --auto-local. Here’s what each argument in the command means: -ror --resource: Your project and resource slugs, separated by a dot. These slugs are found in your project's URL; together, they uniquely identify your translatable resources. 
For example, if you have a URL such as, the resource identifier will be myproject.myresource. . --expression: An expression which reflects the file/directory structure of your translation files. The expression must be in quotes and contain the <lang>keyword, denoting where the file paths differentiate per language, e.g. locale/<lang>/ui.po. -for --source-file: If your source file doesn't follow the naming schema specified in the expressionargument, you can specify a custom path for the source file using this option. For example, locale/ui.pot. By default, when you run tx config mapping, the client does a dry run of the command. To actually write the changes to the .tx/config file, include --execute in your command. Example Let’s say you have the following directory structure: └── locale ├── ui.pot ├── fr_CA │ └── ui.po └── pt_BR └── ui.po Here's a sample tx config mapping run: $ tx config mapping -r myproject.myresource --source-lang en --type PO \ --source-file locale/ui.pot --expression 'locale/<lang>/ui.po' Note Your source file name doesn’t have to match your resource name, but keeping them consistent might help others who come along later. You can run tx config mapping --help to learn about additional options that are available. Configuring multiple local files (mapping-bulk) The mapping-bulk subcommand can be used for mapping multiple local source files at once. $ tx config mapping-bulk -p myproject --source-language en --type MD -f '.md' \ --source-file-dir locale --expression '<path_expression>' Here’s what each argument in the command means: -por --project: Your project slug. This slug is found in your project's URL and uniquely identifies your project. For example, if you have a URL such as, the slug will be myproject. . -for --file-extension: File extension of the source files to be mapped. --source-file-dir: Directory where the source files with the specified file extension are. -ior --ignore-dir: Directories to ignore while looking for source files. This can be called multiple times, e.g. -i es -i fr. --expression: A path expression defining where translation files should be saved. The default value is locale/<lang>/{filepath}/{filename}{extension}. By default, when you run tx config mapping-bulk, the client does a dry run of the command. To actually write the changes to the .tx/config file, include --execute in your command. Example Let’s say you have the following directory structure: └── locale ├── pt_PT │ └── intro │ ├── get-started.md │ └── index.md ├── intro │ ├── get-started.md │ └── index.md └── index.md To run tx config for all the Markdown source files inside locale/: $ tx config mapping-bulk -p myproject --source-language en --type MD -f '.md' \ -d locale -i pt_PT --expression 'locale/<lang>/{filepath}/{filename}{extension}' Configuring files from Transifex (mapping-remote) If you already have a project in Transifex with resources that were previously uploaded — via the web UI for example — you can use the tx config mapping-remote command to quickly configure the client. This is useful when you want to pull files from Transifex using the client, make changes to the files, and push them back. $ tx config mapping-remote <transifex_url> Note Before v0.13.0 of the client, the mapping-remote subcommand was an option: --auto-remote. 
You can either configure all the files in a project: $ tx config mapping-remote<organization_slug>/<project_slug>/ Or a specific resource within a project: $ tx config mapping-remote<organization_slug>/<project_slug>/<resource_slug>/ Running tx config mapping-remote overwrites your existing .tx/config file. The following options will be deprecated in a future release of the client. There are a number of additional options available for the tx config command. Mapping the source file and source language for a resource If you want to associate a source file with a specific resource in Transifex and set its source language, use the following command: $ tx config --source -r <project_slug.resource_slug> -l <lang> <file> You can use this for initializing a new mapping or updating an existing mapping. Set individual translation files mapped to a language The following command can be used to set individual translation files mapped to a certain language. This is useful when your translation files don't use a common naming schema and the mapping subcommand cannot be used. $ tx config -r <project_slug.resource_slug> -l <lang> <file> Setting or changing the resource type Using the -t or --type option, you can set or change the i18n type of a resource: $ tx config -t <type> -r <project_slug.resource_slug> To set or change the i18n type for all resources in a project: $ tx config -t <type> Setting a completion threshold to download files You may want to download translations only for resources that have reached a certain translation threshold. For example, you only want to download when a resource is 80% translated or higher. To do this, use the minimum-perc option to specify the minimum translation percentage required in order to download. You can use this option on a project or resource level. As an example, this command sets the minimum translation percentage at 80% completion: $ tx config --minimum-perc=80 Similarly, it has a mode option to define the mode for the downloaded files.
https://docs.transifex.com/client/config
2022-08-07T19:13:18
CC-MAIN-2022-33
1659882570692.22
[]
docs.transifex.com
Setting up a line of business BMC Helix Business Workflows provides out-of-the-box configurations to get started with the product. As a case business analyst, you can set up the product according to your line of business, such as Facilities, Travel, and Legal. You can create templates, automate various processes and tasks, set service targets, define custom notifications, and so on. You can also modify the lifecycle of cases, tasks, and knowledge articles.
https://docs.bmc.com/docs/bwf2102/setting-up-a-line-of-business-988815225.html
2022-08-07T20:04:23
CC-MAIN-2022-33
1659882570692.22
[]
docs.bmc.com
Earn an advanced specialization to showcase your validated capabilities Appropriate roles: Global admin | Account admin Microsoft advanced specializations build on the related gold competencies that partners can earn. Earning gold competencies, along with their related advanced specializations, enables partners to further differentiate their capabilities to customers. To earn an advanced specialization, a partner often must fulfill demanding requirements, such as obtaining customer references, undergoing a third-party audit, proving attainment of a relevant skill set, and meeting certain, other performance measurements. By meeting these strict requirements, partners can then validate their deep knowledge, extensive experience, and proven success at delivering tailored, customer solutions for areas of high customer demand and relevance. Partners who earn an advanced specialization will obtain a customer-facing label they can display on their business profile in the Microsoft solution provider finder. This label further validates the partner's capabilities, while giving them access to associated benefits, expanded customer reach, and greater customer confidence. Note To learn more about advanced specializations, see the Microsoft Partner Network Advanced specializations page. Advanced specialization areas Each advanced specialization corresponds to a solution area: - Azure - Business Applications - Modern Work - Security Azure advanced specializations include: - Analytics on Microsoft Azure - Data Warehouse Migration to Microsoft Azure - Kubernetes on Microsoft Azure - Linux and Open Source Databases Migration to Microsoft Azure - Microsoft Windows Virtual Desktop - Modernization of Web Applications to Microsoft Azure - SAP on Microsoft Azure - Windows Server and SQL Server Migration to Microsoft Azure - AI and Machine Learning in Microsoft Azure - Hybrid Cloud Infrastructure with Microsoft Azure Stack HCI - Hybrid Operations and Management with Microsoft Azure Arc - Microsoft Azure VMware Solution - DevOps with GitHub on Microsoft Azure - Networking Services in Microsoft Azure Business Applications advanced specializations include: - Low Code Application Development - Small and Midsize Business Management Modern Work advanced specializations include: - Adoption and Change Management - Calling for Microsoft Teams - Custom Solutions for Microsoft Teams - Meetings and Meeting Rooms for Microsoft Teams - Teamwork Deployment - Modernize Endpoints Security advanced specializations include: - Identity and Access Management - Threat Protection - Information Protection and Governance - Cloud Security Note To learn about each advanced specialization, along with its prerequisites and requirements, see the Partner Network Advanced specializations page. When you're ready to apply for an advanced specialization, check your progress by signing into the Partner Center dashboard. To learn more about accessing this area of Partner Center, see Apply for an advanced specialization. Next steps Use Partner Center to apply for and check the status of advanced specializations Learn more about advanced specializations, their benefits, and unique requirements Learn more about Microsoft Partner Network competencies
https://docs.microsoft.com/en-gb/partner-center/advanced-specializations?utm_source=cc-mcpp
2022-08-07T19:47:46
CC-MAIN-2022-33
1659882570692.22
[]
docs.microsoft.com
This troubleshooting guide describes what to do when Scylla keeps using disk space after a table or keyspace is dropped or truncated. When performing a DROP or TRUNCATE operation on a table or keyspace, disk usage is not seen to be reduced. Usually this is verified by using an external utility like the du Linux command. This is caused by the fact that, by default, Scylla creates a snapshot of every dropped table. Space won’t be reclaimed until the snapshot is dropped. Locate /var/lib/scylla/data/<your_keyspace>/<your_table>. Inside that directory you will see a snapshots subdirectory with your dropped data. Follow the procedure to use nodetool to remove the snapshot. If you are deleting an entire keyspace, repeat the procedure above for every table inside the keyspace. This behavior is controlled by the auto_snapshot flag in /etc/scylla/scylla.yaml, which is set to true by default. To stop taking snapshots on deletion, set that flag to false and restart all your Scylla nodes. Note Alternatively you can use the ``rm`` Linux utility to remove the files. If you do, keep in mind that the rm Linux utility is not aware of whether some snapshots are still associated with existing keyspaces, but nodetool is.
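As a hedged convenience (not from the Scylla documentation), the snapshot cleanup can also be scripted. The sketch below shells out to nodetool, assuming it is on the PATH; the keyspace name is a made-up example.

# Illustrative sketch only: list snapshots, then clear those belonging to a
# dropped keyspace so the disk space can be reclaimed.
import subprocess

keyspace = "my_dropped_keyspace"   # hypothetical keyspace name

subprocess.run(["nodetool", "listsnapshots"], check=True)
subprocess.run(["nodetool", "clearsnapshot", keyspace], check=True)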
https://docs.scylladb.com/stable/troubleshooting/drop-table-space-up.html
2022-08-07T18:42:18
CC-MAIN-2022-33
1659882570692.22
[]
docs.scylladb.com
Warehouse In the "Warehouses" tab you can edit the current warehouse details or add a new warehouse. Having filled in your warehouse address, you will save time when buying custom labels in the future, as you will not need to enter it manually for every purchase. To add a new warehouse, click the [Add location] button. You can find the full instructions here. Note! Currently you can use only one warehouse. To edit details, click the [Edit] icon in the line with the chosen warehouse and go to the warehouse locations page. In the "Warehouse Locations" form you can change the following data: 1. Location Name*; 2. Address*; 3. City*; 4. Zip Code*; 5. Second Address; 6. Country*; 7. State*. * All fields marked with an asterisk [*] must be filled in. To confirm the changed information, click the [Save] button at the bottom of the page. You will see a notification if the process succeeds. To discard changes, click [Cancel].
https://docs.sellerskills.com/app-setting/warehouse
2022-08-07T18:55:28
CC-MAIN-2022-33
1659882570692.22
[]
docs.sellerskills.com
SpringBoard directory structure¶ The SpringBoard directory ( springboard) is mentioned throughout this documentation and is located in the Spring data directory.
- The springboard directory contains user projects, assets and extensions:
- Projects are maps and scenarios you are working on.
- Assets are resources used during the project creation process.
- Extensions are plugins that enhance the SpringBoard editor.
https://springboard-core.readthedocs.io/en/latest/directory_structure.html
2022-08-07T19:52:02
CC-MAIN-2022-33
1659882570692.22
[]
springboard-core.readthedocs.io
BMC Remedy Single Sign-On 9.1 This section provides information about what is new or changed in this space, including resolved issues, documentation updates, maintenance releases, service packs, and patches. It also provides license entitlement information for the release.
https://docs.bmc.com/docs/rsso91/home-799091192.html
2022-08-07T19:59:09
CC-MAIN-2022-33
1659882570692.22
[]
docs.bmc.com
Extensions Module Manager Statistics From Joomla! Documentation Description[edit] This Module shows information about your server and site information like the PHP and Database versions, OS used, number of articles, users, hits... How to Access[edit] Add a new module 'Statistics' - Select Extensions → Modules from the dropdown menu of the Administrator Panel. - Click the New button in the toolbar - Select the Module Type Statistics. Edit an existing module 'Statistics' Screenshot[edit] Form Fields[edit] - Title. The title of the module. This is also the title displayed in the Frontend for the module. Module[edit] - Server Information. (Yes/No) Display server information. - Site Information. (Yes/No) Display site information. - Hit Counter. (Yes/No) Display hit counter. - Increase Counter. Enter the amount of hits to increase the counter by. - Show Title. (Show/Hide) Show or hide module title on display. Note: Effect will depend on the module style (chrome) in the template. - Position. You may select a module position from the list of pre-defined positions or enter your own module position by typing the name in the field and pressing enter. - Status. (Published/Unpublished/Trashed) The published status of the module. -. Who has access to this module. - Public: Everyone has access. - Guest: Everyone has access. - Registered: Only registered users have access. - Special: Only users with author status or higher have access. - Super Users: Only super users have access. - Ordering. This shows a dropdown of every module in the position that the current module is in. This is the order that the modules will display in when displayed on in the. Advanced[edit] -. "Use Global" will use the settings from Global Configuration. - Cache Time. The number of minutes for which to cache the module locally. It can safely be left at the default. - Module Tag. The HTML tag for the module to be placed in. By default this is a div tag but other HTML5 elements (address, article, …) can also be used. - Bootstrap Size. (0/…/12) This allows you to choose the width of the module via the span element built into bootstrap. - Header Tag. (h1/h2/h3/h4/h5/h6/p/div) The HTML tag to use for the modules header or title. Note: You must use a module style (chrome) of html5 or add your custom module styles in [yourtemplate]/html/modules.php. - Header Class. Add optional CSS classes to the modules header or title element. - Module Style. Override the templates style for its position. Permissions[edit] Manage the permission settings for user groups. To change the permissions for this module, do the following. - 1. Select the Group by clicking its title located on the left. - 2. Find the desired Action. Possible Actions are: - Delete. Users can delete content of this module. - Edit. Users can edit content of this module. - Edit State. Users can change the published state and related information for content of this module. - Frontend Editing. Allows users in the group to edit in Frontend. - 3. Select the desired Permission for the action you wish to change. Possible settings are: - Inherited: Inherited for users in this Group from the module options permissions of this site. - Allowed: Allowed for users in this Group. Note that, if this action is Denied at one of the higher levels, the Allowed permission here will not take effect. A Denied setting cannot be overridden. - Denied: Denied for users in this Group. - 4. Click Save in Toolbar at top. 
When the screen refreshes, the Calculated Setting column will show the effective permission for this Group and Action. Toolbar[edit] Note: This toolbar icon is only shown if you edit an existing module. - Close. Closes the current screen and returns to the previous screen without saving any modifications you may have made. - Help. Opens this help screen. Quick Tips[edit] The Site and Administrator modules have the same parameters, except that the Administrator module has no Menu Assignment tab and is accessed in a different way. Related Information[edit] - More about Modules: what is a module position, and a description of the default Site and Administrator Modules.
https://docs.joomla.org/Special:MyLanguage/Help310:Extensions_Module_Manager_Statistics/pt
2022-08-07T18:54:41
CC-MAIN-2022-33
1659882570692.22
[]
docs.joomla.org
PID Failure Modes & Responses
How to react in different scenarios where the redemption rate is ineffective
Failure Scenarios
The following is a list of known PID failure modes and possible responses or fixes for each one of them. Note that in order to minimize the risk of the PID failing, governance should activate it only after the stablecoin has a minimum, mandatory liquidity level on exchanges as well as plenty of users interacting with the system.
Market Manipulation
Although improbable after a stablecoin gets to scale (in case the PID is fed a TWAP feed for the stablecoin market price), market manipulation is always a concern that can make the PID controller destabilize the system. In this scenario, governance has two options: pause the controller until there is more liquidity on exchanges, or globally settle the system.
Lack of Liquidity on Exchanges
Governance must ensure at all times that there are enough stablecoins on exchanges vs stablecoins locked in other applications. This is why governance must specify two KPIs:
1. An absolute minimum amount of liquidity that must be on the exchanges from which the PID is pulling market price data
2. A minimum percentage of stablecoins out of the total outstanding supply that must be at all times on exchanges vs the percentage of stablecoins locked/used in other applications
In case the stablecoin liquidity drops below either of the two limits specified above, governance is advised to pause the PID and restart it only after the liquidity improves. NOTE: lack of liquidity will increase the risk of market manipulation (as seen in Proto RAI).
Skewed Incentives
In case governance sets up a system to incentivize the growth of a stablecoin, these incentives may interfere with the PID and cause the system to become unstable. In this scenario, governance should look at the following solutions:
1. Offer fewer growth incentives over a longer period of time
2. Pause the controller until the growth campaign/s end
3. Completely stop growth campaigns
4. Find a way to incentivize market making alongside growth
Negative Feedback Turning Positive
There are cases when, even if there is no market manipulation, no skewed incentives and there is plenty of liquidity on exchanges, the market might not react to the redemption rate incentives and the redemption price would continue to go in a single direction for a long period of time. In this scenario there are three possible solutions:
1. Temporarily pause the PID and wait for the market to come closer toward redemption
2. Temporarily pause the PID and build a second controller that modifies the stability fee. In this scenario the redemption rate controller would only be used when the market price is consistently above redemption, and the stability fee controller would be used when the market price is below redemption. Choosing this option means that governance may need to have long-term control over the TaxCollector and there will need to be more governance over rate setting in general.
3. Trigger global settlement and allow the system to shut down using the redemption price.
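To make the feedback loop discussed above more concrete, here is a toy proportional-integral sketch in Python. It is an illustration only and not the on-chain RAI controller: the gains kp and ki, the update interval, and the example prices are all invented for the example.

# Toy PI controller for a redemption rate (illustration only, not the real
# RAI implementation; all numbers are made up).

def update_redemption_rate(market_price, redemption_price, integral_state,
                           kp=7.5e-8, ki=2.4e-14, dt=3600):
    # Proportional term: deviation of the market price from redemption.
    error = redemption_price - market_price
    # Integral term: accumulate the error over the elapsed interval.
    integral_state += error * dt
    # Per-second redemption rate around a neutral value of 1.0.
    rate = 1.0 + kp * error + ki * integral_state
    return rate, integral_state

# Market trades about 2% above redemption, so the controller nudges the
# redemption rate below 1.0 to discourage the premium.
rate, integral = update_redemption_rate(3.09, 3.03, 0.0)
print(rate)   # slightly below 1.0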
Failure Prevention
In order to make market manipulation as expensive as possible, we propose the following liquidity thresholds for the RAI stablecoin:
- There must be at least $2M worth of liquidity on the exchange/s that the RAI oracle is pulling a price feed from
- At least 3% of the RAI supply must be on the exchange/s from which the system is pulling a price feed from
In addition to this, the PID controller should not be set to its full force right when RAI is launched, as it may destabilize the system.
https://docs.reflexer.finance/risk/pid-failure-modes-and-responses
2022-08-07T18:28:29
CC-MAIN-2022-33
1659882570692.22
[]
docs.reflexer.finance
Track and analyze Customer Service case data Use the Service Manager homepage to track and analyze customer service case data and agent group activities. The Service Manager homepage displays several case-related reports, which are created using the Reports application. The customer service agent manager can drill down into these reports for more information about the related cases. To view the Service Manager homepage, navigate to Customer Service > Overview. The Customer Service Performance Analytics feature adds two more reports to the Service Manager homepage: Case Average Response Time and Number of Open Cases. These reports use indicators as a way to collect and measure data and breakdowns to show different views of this data. Note: Customer Service Performance Analytics is an optional feature available with the Customer Service Management application. To use this feature, activate the Performance Analytics - Content Pack - Customer Service plugin (com.sn_customerservice_pa). You can activate Performance Analytics content packs on instances that do not have Performance Analytics to evaluate the functionality. However, to collect scores for content pack indicators you must license Performance Analytics.
The homepage includes the following reports:
- Case Average Response Time: Displays the average case response time in a line graph for the selected time period. The default time period is one month. Point to any location along the line to display a summary for a specific date. Click any location along the line to drill down and see additional information for a specific date.
- Number of Open Cases: Displays the number of open cases by day in a trend graph for the selected time period. Point to any bar within the graph to display a summary for the specific date. Click any bar within the graph to drill down and see additional information for a specific date.
- Open Cases by Assignment Group: Displays the number of open cases by customer service agent group.
- Open Cases by Company: Displays the number of open cases by company.
- Cases by SLA Stage: Displays the number of cases by SLA stage.
- Open Cases by Priority: Displays the number of open cases by priority. Click a priority to show the case list. Click a case from the list to view details.
- Customer Satisfaction: Displays the results of the customer satisfaction survey that a customer is asked to take after a case is closed.
- Cases by Product: Displays the number of cases for each product.
Use Customer Service Performance Analytics reports: The Customer Service Performance Analytics feature adds reports to the Service Manager homepage. Related Topics: Performance Analytics indicators
https://docs.servicenow.com/bundle/jakarta-customer-service-management/page/product/customer-service-management/concept/c_CustomerServiceDashboard.html
2018-01-16T13:39:42
CC-MAIN-2018-05
1516084886436.25
[]
docs.servicenow.com
Exporting as a CSV file from Excel. In order to verify your file with BriteVerify, you will need to have your email list saved as either a .txt or .csv file. Even if you have already received your list in another format, you can export it from Excel's "Save As" dialog window, shown below. Select "Comma Separated (.csv)" from the Format drop-down menu and save your file. A warning message will appear as you save your file; this is normal and will not affect your file in any way. Ignore this message. Much like Excel and PowerPoint, it exists only to confuse and torment the average person. Just press "Continue" and move on with your life. Congratulations, you are now the proud owner of a properly formatted data file, which is a pretty awesome thing to have. Now go get that sucker verified and start sending some emails!
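Not part of the original post, but if you want to double-check the exported file before uploading it, a few lines of Python will do; the filename below is a placeholder.

# Quick sanity check of the exported CSV (hypothetical filename).
import csv

with open("email_list.csv", newline="", encoding="utf-8") as f:
    rows = list(csv.reader(f))

print(len(rows), "rows found")
print("first row:", rows[0] if rows else "(file is empty)")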
http://docs.briteverify.com/blog/2014/1/22/exporting-as-a-csv-file-from-excel
2018-01-16T13:15:29
CC-MAIN-2018-05
1516084886436.25
[array(['https://static1.squarespace.com/static/52a21a70e4b099b82e2005c1/t/52e01754e4b0e41bb17cab68/1390417749754/ExportWindow.jpg', 'ExportWindow.jpg'], dtype=object) array(['https://static1.squarespace.com/static/52a21a70e4b099b82e2005c1/t/52e81d4fe4b047367ade900d/1390943569257/SelectCSV.jpg', 'SelectCSV.jpg'], dtype=object) array(['https://static1.squarespace.com/static/52a21a70e4b099b82e2005c1/t/52e01928e4b05bc06d22af97/1390418216704/Warning.jpg', 'Ignore this message. Much like Excel and Powerpoint, this exists only to confuse and torment the average person.'], dtype=object) ]
docs.briteverify.com
Porting guide for Python C Extensions¶ This guide is written for authors of C extensions for Python, who want to make their extension compatible with Python 3. It provides comprehensive, step-by-step porting instructions. Before you start adding Python 3 compatibility to your C extension, consider your options: If you are writing a wrapper for a C library, take a look at CFFI, a C Foreign Function Interface for Python. This lets you call C from Python 2.6+ and 3.3+, as well as PyPy. A C compiler is required for development, but not for installation. For more complex code, consider Cython, which compiles a Python-like language to C, has great support for interfacing with C libraries, and generates code that works on Python 2.6+ and 3.3+. Using CFFI or Cython will make your code more maintainable in the long run, at the cost of rewriting the entire extension. If that’s not an option, you will need to update the extension to use Python 3 APIs. This is where py3c can help. This is an opinionated guide to porting. It does not enumerate your options, but rather provides one tried way of doing things. This doesn’t mean you can’t do things your way – for example, you can cherry-pick the macros you need and put them directly in your files. However, dedicated headers for backwards compatibility will make them easier to find when the time comes to remove them. If you want more details, consult the “Migrating C extensions” chapter from Lennart Regebro’s book “Porting to Python 3”, the C porting guide from Python documentation, and the py3c headers for macros to use. The py3c library lives at Github. See the README for installation instructions. Overview Porting a C extension to Python 3 involves three phases: - Modernization, where the code is migrated to the latest Python 2 features, and tests are added to prevent bugs from creeping in later. After this phase, the project will support Python 2.6+. - Porting, where support for Python 3 is introduced, but Python 2 compatibility is kept. After this phase, the project will support Python 2.6+ and 3.3+. - Cleanup, where support for Python 2 is removed, and you can start using Python 3-only features. After this phase, the project will support Python 3.3+. The first two phases can be done simultaneously; I separate them here because the porting might require involved discussions/decisions about longer-term strategy, while modernization can be done immediately (as soon as support for Python 2.5 is dropped). But do not let the last two stages overlap, unless the port is trivial enough to be done in a single patch. This way you will have working code at all time. Generally, libraries, on which other projects depend, will support both Python 2 and 3 for a longer time, to allow dependent code to make the switch. For libraries, the start of phase 3 might be delayed for many years. On the other hand, applications can often switch at once, dropping Python 2 support as soon as the porting is done. Ready? The Modernization section is waiting!
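As a hedged illustration of the CFFI option mentioned above (this example is not part of the guide), the sketch below calls one libm function without writing any extension code; the library name passed to dlopen is an assumption and varies by platform.

# Minimal CFFI sketch (illustration only). Requires `pip install cffi`.
# The library name assumes a glibc Linux system; use e.g. "libm.dylib" on macOS.
from cffi import FFI

ffi = FFI()
ffi.cdef("double sqrt(double x);")   # declare the C signature we need
libm = ffi.dlopen("libm.so.6")       # load the C math library

print(libm.sqrt(2.0))                # 1.4142135623730951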
http://py3c.readthedocs.io/en/latest/guide.html
2018-01-16T12:56:12
CC-MAIN-2018-05
1516084886436.25
[]
py3c.readthedocs.io
Upgrade-SPSingleSignOnDatabase
Syntax
Upgrade-SPSingleSignOnDatabase -SecureStoreConnectionString <String> -SecureStorePassphrase <SecureString> -SSOConnectionString <String> [-AssignmentCollection <SPAssignmentCollection>] [<CommonParameters>]
Description
For permissions and the most current information about Windows PowerShell for SharePoint Products, see the online documentation.
Examples
------------------EXAMPLE------------------
C:\PS>Upgrade-SPSingleSignOnDatabase -SSOConnectionString "Data Source=oldServer;Database=SSO;Trusted_Connection=yes;" -SecureStoreConnectionString "Data Source=CONTOSO\SQLDatabase;Database=ContosoSSDatabase;Trusted_Connection=yes;" -SecureStorePassphrase "abcDEF123!@#"
This example migrates the SSO database at the SSO connection to a Secure Store database at the Secure Store connection.
Required Parameters
-SSOConnectionString: Specifies the SQL Server connection string for the SSO database.
-SecureStoreConnectionString: Specifies the SQL Server connection string for the Secure Store database.
-SecureStorePassphrase: Specifies the passphrase used for the Secure Store.
https://docs.microsoft.com/en-us/powershell/module/sharepoint-server/Upgrade-SPSingleSignOnDatabase?view=sharepoint-ps
2018-01-16T14:07:03
CC-MAIN-2018-05
1516084886436.25
[]
docs.microsoft.com
Changes related to "J1.5:Customising the JA Purity template/footer/syndication" ← J1.5:Customising the JA Purity template/footer/syndication This is a list of changes made recently to pages linked from a specified page (or to members of a specified category). Pages on your watchlist are bold. No changes during the given period matching these criteria.
https://docs.joomla.org/index.php?title=Special:RecentChangesLinked&hidebots=0&target=J1.5%3ACustomising_the_JA_Purity_template%2Ffooter%2Fsyndication
2016-02-06T08:24:26
CC-MAIN-2016-07
1454701146196.88
[]
docs.joomla.org
JDatabaseMySQLi::fetchAssoc From Joomla! Documentation Description: Method to fetch a row from the result set cursor as an associative array. [Edit Description] See Also: JDatabaseMySQLi::fetchAssoc [Edit See Also] User contributed notes
https://docs.joomla.org/index.php?title=API17:JDatabaseMySQLi::fetchAssoc&direction=next&oldid=56341
2016-02-06T07:27:55
CC-MAIN-2016-07
1454701146196.88
[]
docs.joomla.org
All public logs Combined display of all available logs of Joomla! Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive). - 15:41, 10 May 2013 JoomlaWikiBot (Talk | contribs) moved page JDatabaseQuerySQLSrv::join/11.1 to API17:JDatabaseQuerySQLSrv::join without leaving a redirect (Robot: Moved page) - 19:40, 27 April 2011 Doxiki2 (Talk | contribs) automatically marked revision 56503 of page JDatabaseQuerySQLSrv::join/11.1 patrolled
https://docs.joomla.org/index.php?title=Special:Log&page=JDatabaseQuerySQLSrv%3A%3Ajoin%2F11.1
2016-02-06T08:10:00
CC-MAIN-2016-07
1454701146196.88
[]
docs.joomla.org
California Public Utilities Commission 505 Van Ness Ave., San Francisco ________________________________________________________________________ FOR IMMEDIATE RELEASE PRESS RELEASE Contact: Terrie Prosper, 415.703.1366, [email protected] Docket #: A.07-05-020 SAN FRANCISCO, April 10, 2008 - The California Public Utilities Commission (CPUC), in its ongoing efforts to decrease greenhouse gas emissions to reduce the impact of climate change, today approved a plan to evaluate the feasibility of a Clean Hydrogen Power Generation plant to advance clean coal technology. "As an emerging advanced clean coal technology, a Clean Hydrogen Power Generation plant would produce electric power from coal, the largest domestic fossil fuel source in the U.S., with minimal greenhouse gas emissions," said CPUC President Michael R. Peevey. "Coal gasification offers one of the most versatile and clean ways to convert coal into electricity. As the largest customer for energy in the western U.S., California can and should play a role in helping to commercialize this essential technology." The technology under consideration would convert coal through a gasification process into predominantly hydrogen and carbon monoxide gases. The hydrogen would be used as the fuel source for a combined cycle power plant while the carbon monoxide gas would be removed prior to combustion and sequestered underground. "Large scale deployment of carbon capture and storage in electricity generation will play a key role in meeting our greenhouse gas reduction targets as outlined in Assembly Bill 32," said President Peevey. "Today the focus is on applying this technology to coal-fired power plants, but eventually it may have to be deployed on all fossil-fueled generation. This means deploying it in production-scale facilities in a variety of settings." Southern California Edison proposed the study of a Clean Hydrogen Power Generation facility in conjunction with the Department of Energy and the Southwest Partnership and the CPUC approved $4.6 million for the first stages of the project. For more information on the CPUC, please visit. ### Statement from CPUC President Michael R. Peevey, presented at the April 10, 2008, CPUC Meeting _ Much has happened in California and across the west to advance the climate policy agenda. _ I'm proud that our state is not just standing by and watching from the sidelines. We are actively engaged. ¬ Working together, the public and private sectors can and will reduce California's direct contribution to climate change by achieving the emissions reductions mandated by AB32. ¬ But our impact will be much greater than that. By dedicating our financial, human and intellectual capital to this challenge we will also: _ provide an example for others to follow; and _ spur the development and full-scale deployment of new low- and no-carbon technologies. _ An example of California continuing to take a leadership role in the fight against climate change is evidenced by Governor Schwarzenegger, back in April 2006, signing a Memorandum of Understanding with Governor Freudenthal of Wyoming. ¬ One of the main goals of this MOU was to advance the development of advanced coal generation technologies. _ And this is why I am very happy to introduce item 50. _ This decision approves, as modified, Southern California Edison's request to conduct a study to evaluate the feasibility of a Clean Hydrogen Power Generation facility (CHPG). 
_ The decision authorizes SCE upfront cost recovery of up to $4.6 million to participate in the Southwest Partnership. ¬ For those unfamiliar, the Southwest Regional Partnership on Carbon Sequestration was developed as a part of the U.S. Department of Energy's effort to respond to global climate change. The SWP has been challenged to evaluate available technologies that capture and store CO2. _ The partnership consists of roughly 30 entities including ConocoPhillips, Idaho National Laboratory, Idaho Power, Navajo Nation Oil and Gas Company, PacifiCorp, Pinnacle - the parent to Arizona Public Service Company, Shell, SCE, Xcel Energy, and many others. ¬ Clearly there is significant interest in advancing carbon sequestration. In fact, DOE has committed over $65 million to the project. ¬ By joining the SWP, SCE intends to leverage ratepayer funds and achieve a greater result than it could if it went it alone. _ The decision also denies SCE upfront cost recovery for work referred to as plant feasibility costs. Instead, it directs SCE to first seek out other sources of funding - public or private - before requesting ratepayer funding. ¬ In that regard, the decision allows SCE to record up to $26.3 million in a memorandum account. In order to seek recovery of these costs, SCE must file an Advice Letter demonstrating that it has secured co-funding and that the costs were reasonably incurred and necessary to implement the feasibility study. _ Also - and this is very important - the decision before us today is not giving SCE exclusive rights to construct and operate the facility. The decision effectively bifurcates the study necessary to determine if a CHPG facility is feasible from the construction and operation of the plant itself. ¬ The decision requires that, upon determination that a CHPG plant with carbon sequestration is commercially and technically feasible and will benefit California ratepayers, SCE shall conduct a competitive solicitation to construct the facility. _ The competitive solicitation must be consistent with the directives of D.07-12-052 and any other then-applicable procurement decisions. _ I believe that we are at a critical juncture in the effort to transform the way we produce and use energy. True resource diversity will emerge through the long term development of environmentally sound technologies. ¬ Widespread adoption of emerging environmentally preferable generating technologies is a win in my book. _ As an emerging advanced clean coal technology, a CHPG plant would produce baseload electric power from coal - the largest domestic fossil fuel source in the U.S. - with minimal GHG emissions. ¬ Coal gasification offers one of the most versatile and clean ways to convert coal into electricity. ¬ Large scale deployment of carbon capture and storage in electricity generation will play a key role in meeting our AB32 targets. _ Today the focus is on applying this technology to coal-fired power plants, but eventually it may have to be deployed on all fossil-fueled generation. _ Recent studies have shown that while carbon capture and storage is technically and economically feasible, it still needs to be proven on a commercial scale. _ This means deploying it in production-scale facilities in a variety of settings. ¬ As the largest customer for energy in the western U.S., California can and should play a role in helping to commercialize this essential technology. 
- In closing, I want to reiterate that the most important development in California energy policy in the past two years, if not the past several decades, is reaching consensus that California must act to decrease its greenhouse gas emissions to reduce the impact of climate change.
- The reality of climate change is not in doubt, and the consequences of inaction could not be more extreme.
- California is past the talking stage. We have been acting and we will continue to act.
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/81153.htm
2008-05-11T18:32:19
crawl-001
crawl-001-007
[]
docs.cpuc.ca.gov
COM/MP1/rbg** DRAFT Agenda ID #7364 (Rev. 2) Ratesetting 4/10/2008 Item 51 Decision PROPOSED DECISION OF COMMISSIONER PEEVEY (Mailed 2/11/2008) BEFORE THE PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA OPINION ESTABLISHING CALIFORNIA INSTITUTE FOR CLIMATE SOLUTIONS TABLE OF CONTENTS Page OPINION ESTABLISHING CALIFORNIA INSTITUTE FOR CLIMATE SOLUTIONS 22 3. The California Institute for Climate Solutions 1111 3.1.1. The CICS Mission will Help California Achieve the Goals Established in AB 32 and SB 1368 1616 3.2. Funding and Budget 1818 3.2.3. Equipment Purchases 3434 3.3. Governance and Organization 3535 3.3.1. Governing Board 3737 3.3.2. Institute Director 4141 3.3.3. Strategic Research Committee 4242 3.3.4. Strategic Plan 4343 3.3.5. Short-Term and Long-Term Goals 4545 3.4. Grant Administration and the RFA Process 4646 3.5. Oversight and Accountability 5050 3.5.1. Annual Financial and Progress Report 5252 3.6. Biennial External Performance Review 5454 4. Intellectual Property 5656 5. Comments on Proposed Decision 6060 6. Assignment of Proceeding 6060 OPINION ESTABLISHING CALIFORNIA INSTITUTE FOR CLIMATE SOLUTIONS Confronting climate change is the preeminent environmental challenge of our time. As we noted in the Order Instituting Rulemaking (OIR), stabilizing greenhouse gas (GHG) emissions will require an economic and technological transformation on a scale equivalent to the Industrial Revolution.1 This decision, by creating the California Institute for Climate Solutions, (CICS or Institute) adopts a bold and innovative approach to expanding California's leadership on this most pressing of environmental issues. The mission of the CICS is consistent with the purpose and findings contained in Assembly Bill (AB) 32, The Global Warming Solutions Act of 2006,2 and Senate Bill (SB) 1368, regulating emissions of GHG from electric utilities.3 In AB 32, the Legislature found that global warming "poses a serious threat to the economic well-being, public health, natural resources, and the environment of California." (Section 38501(a).) In SB 1368, the Legislature determined that "it is vital to ensure all electricity load-serving entities [LSE] internalize the significant and underrecognized cost of emissions recognized by the PUC with respect to the investor-owned electric utilities (IOU), and to reduce California's exposure to costs associated with future federal regulation of these emissions." (SB 1368, Section 1(g).) The Institute will provide significant benefit to ratepayers by accelerating applied research and development (R&D) of practical and commercially viable technologies that will reduce GHG in order to slow global warming, as well as technologies that will allow California to adapt to those impacts of climate change that may now be inevitable. The Institute will have a particular focus on speeding the transfer of these technologies from the laboratory to market place. The funding for the CICS, $60 million per year for 10 years via a new surcharge on customer bills, is an investment in California's future that we expect will benefit all Californians. We think it is appropriate to take steps to ensure that the benefits that flow from the Institute's research benefits all Californians, regardless of socioeconomic status. To this end, we form a Workforce Transition Subcommittee (WTS) of the Governing Board that will study ways to support the energy sector's transition to a carbon-constrained future through anticipating and preparing for the resultant changes in its workforce needs. 
This subcommittee will submit a report to the Governing Board and the Commission in one year from its initiation, and the Commission shall act on its recommendations within 120 days from receipt. Further, the Commission expects that the practices and policies of the hub and the resources of the host institution will be used to support participation that is broadly representative of the population of California in the projects funded by the CICS. The investment in the Institute will leverage the State's considerable intellectual capital for the purpose of accomplishing the following mission: (1) To administer grants for mission-oriented, applied and directed research that results in practical technological solutions and supports development of policies likely to reduce GHG emissions or help California's electricity and natural gas sectors adapt to the impacts of climate change. (2) To speed the transfer, deployment, and commercialization of technologies that have the potential to reduce GHG emissions in the electric and gas sectors or otherwise mitigate the impacts of climate change in California. (3) To facilitate coordination and cooperation among relevant institutions, including private, state, and federal entities, in order to most efficiently achieve mission-oriented, applied and directed research. These pillars of the Institute's mission will be supported by the formation of new channels of communication between academics, utilities, business, environmentalists, researchers, policy-makers, investors, and the public. In order to provide direction to the Institute's mission, maximize ratepayer benefit, and minimize unnecessary redundancy, the Institute's Strategic Research Committee (SRC) shall first develop a Strategic Plan that will identify those areas of research and technological innovation that are most likely to achieve the greatest GHG reductions in the energy sector at the lowest cost. The Strategic Plan will be the framework from which the Institute will formulate its budget, short-term and long-term goals and grant administration process. The Strategic Planning process is to be structured in a way to maximize ratepayer benefit and cost-effectiveness, while avoiding redundancy. The Institute will fund mission-oriented applied and directed research with an emphasis on the development and rapid transfer of the knowledge gained to the electric and gas sectors for implementation. The Institute will reduce GHG emissions within the state both by transferring technology for cleaner energy and improved energy efficiency (EE) that has already been developed and by formulating new commercially viable technology. In order to maximize the intellectual resources available within the State, the Institute will work collaboratively with California's academic institutions, including the University of California (UC), the California State University and Community College systems (CSU/CC), Stanford University (Stanford), the California Institute of Technology (CalTech), the University of Southern California (USC) as well as California's national research laboratories: Lawrence Berkeley National Laboratory, Lawrence Livermore National Laboratory, Sandia National Laboratory, the Jet Propulsion Laboratory, and the National Aeronautics & Space Administration Ames Research Center (National Laboratories). These institutions will be a critical link in developing and commercializing new technologies through the CICS grant process. 
The location of the hub or headquarters of the Institute will be determined through a competitive, peer-reviewed process. Finally, in today's decision, we take care to ensure that the Institute will remain accountable to the Commission and the ratepayers. First, one Commissioner and the Director of the Division of Ratepayer Advocates (DRA) have seats on the Governing Board. Second, the Commission will retain oversight over the Institute by having the following decisions placed on a Commission agenda for Commission approval: all non ex officio appointments to the Governing Board; appointments to the Executive Committee; the Strategic Plan; short-term and long-term strategic plans; slate of grant recipients; annual budget; and annual report. And most importantly, this decision is not a contract and may be modified by the Commission at any time pursuant to its authority under Pub. Util. Code § 1708.4 We also clarify that the Charter for the Institute may not be modified or changed without Commission approval. In addition, we recognize that the Legislature could initiate efforts to address climate change and replace ratepayer funding with funding from other revenue sources. The Institute Charter, the Governing Board Conflict of Interest Policy, and the Governing Board composition chart are attached to this decision in Appendices A-C. On September 20, 2007, the Commission issued an Order Instituting Rulemaking (OIR) as part of its continuing effort to aggressively pursue creative and cost effective ways to reduce GHG emissions in the energy sector within California. The OIR included a proposal from UC to establish the CICS, hosted at UC, funded by ratepayers at a proposed level of $60 million per year for 10 years, dedicated to supporting California's research institutions in initiating applied and directed research with an emphasis on the development and rapid transfer of the knowledge gained to the electric and gas sectors for implementation. The OIR established that the proceeding would focus on the appropriate governance structure for the institute, on priorities for research and technology development that would benefit utility ratepayers by reducing GHG emissions, and on establishing a funding mechanism for the Institute. The OIR invited parties to comment on UC's proposal. The California Council on Science and Technology (CCST), California Farm Bureau Federation (CFBF), CSU, CalTech, the Community Environmental Council (CE Council), Consumer Federation of California (CFC), DRA, the Energy Producers and Users Coalition (EPUC), the Indicated Producers (IP) and the Western States Petroleum Association (WSPA), Environmental Defense, Greenlining Institute (Greenlining), Independent Energy Producers (IEP), Morrison and Foerster, LLP (Morrison and Foerster), Natural Resources Defense Council (NRDC), PacifiCorp, Pacific Gas and Electric Company (PG&E), Southern California Edison Company (SCE), San Diego Gas & Electric Company and Southern California Gas Company (SDG&E/SoCalGas) Southern California Generation Coalition (SCGC), Stanford, USC, The Utility Reform Network (TURN), and UC filed comments on the proposal in the OIR. UC's comments included a refined proposal that incorporated many changes in response to questions and concerns that parties raised in response to the initial UC proposal. 
Alliance for Retail Energy Markets (AReM), CCST, CalTech, DRA, Greenlining, IEP, Merced Irrigation District and Modesto Irrigation District (MID/MID), Morrison and Foerster, NRDC, PacifiCorp, PG&E, SCE, SDG&E/SoCalGas, Stanford, USC, and UC filed response comments. A workshop was held on December 12, 2007 and presentations were made by numerous stakeholders, including UC, Stanford, USC, CSU/CC, the California Institute for Energy and the Environment, the Commission's Energy Division (ED), PG&E and SCE, other state agency programs, including the Public Interest Energy Research (PIER) Program under the California Energy Commission (CEC) and the California Air Resources Board (CARB), environmental groups, including NRDC, and ratepayer and consumer groups. Post-workshop comments were received from SDG&E/SoCalGas, MID/MID, NRDC, DRA, Greenlining, and the CE Council. We greatly value the input and comments received by all parties. Opening, reply, and workshop comments are summarized and attached in Appendix D. On February 11, 2008, the proposed decision (PD) was issued. Comments were received by March 3, 2008 from CCST, CFC, CE Environmental Council, DRA, EPUC/IP/WSPA, Greenling, IEP, NRDC, PG&E, SCE, SDG&E/SoCalGas, SCGC, Utility Consumers' Action Network (UCAN), California Manufacturers and Technology Association (CMTA), and California Hydropower Reform Coalition (CHRC).5 Replies were received from CFC, DRA, SCE, TURN, and UC. Overall, parties who had previously participated in this Rulemaking were generally supportive in their comments of the Institute and its mission and recognized the efforts that the PD made to incorporate many of the suggestions posited during the comment process. We appreciate, however, parties continued attempts to suggest ways we could clarify issues, correct errors, and make the Institute a reflection of the Commission's leadership in addressing global warming and climate change, while being mindful of our obligations to ratepayers. We acknowledge that not all comments were supportive and we seriously considered the arguments raised against the establishment of the Institute, either in toto, or because it is funded by a surcharge on ratepayers of the ratepayer funding issue. In response to both the supporting and opposing comments, we made many minor corrections and changes amendments, and we incorporated suggested modifications in the following areas: ¬ We amend the Charter to function as a stand-alone document without reference to the decision. We clarified that the Charter cannot be changed without approval of the Commission; ¬ We specify steps and procedures that ensure more oversight, governance and involvement by the Commission with the Institute and in particular with the selection of the non-ex officio appointments to the Governing Board, and appointments to the Executive Committee, the Strategic Plan with the approval of the long-term and short-term strategic plans, as well as the annual proposed budget, and annual reports and intellectual property (IP) policies and protocols specific to the Institute. 
¬ We state that customers whose rates are frozen under AB 1X and customers eligible for the California Alternative Rates for Energy (CARE) program will be exempt from paying the electric and gas surcharge to fund the Institute;
¬ We now require more of an on-going consultation and collaborative process between the Institute Executive Director and the Commission on the preparation of the annual report, budget, and Strategic Plan. Once these reports are submitted to the Commission, the Executive Director is to be available for a question and answer session at a public meeting of the Commission;
¬ We clarify that the Institute, its funding, and its functions are to work in concert with, but not duplicate, the programs implemented pursuant to AB 32, as well as the Commission's ongoing efforts in the areas of EE and clean energy;
¬ We added a requirement that 100% of the $600 million ratepayer investment will be matched with non-ratepayer funds over the 10-year life of the Institute;
¬ A ratepayer benefit index is to be a key component of the Strategic Plan that will then inform the grant selection process from solicitation through selection. Grant applications are to include a discussion of the ratepayer benefits the specific research project is expected to produce, and the grant selection process must rank proposals based upon the ratepayer benefit index. Proposals with no discernible ratepayer benefit will not be chosen for CICS grant funding;
¬ Ratepayer funding is to be used as a catalyst for matching funds from other sources, with a 100% match expected annually beginning in Year 5; each year the Institute Executive Director will report, in the Annual Report, on efforts to secure non-ratepayer funding;
¬ The Intellectual Property (IP) discussion is clarified so that the Technology Transfer Subcommittee (TTS) will establish IP policies and protocols specific to the Institute and submit them to the Commission for approval. We direct the TTS to return at least 10% of net revenues to ratepayers unless violative of any laws; and
¬ The Workforce Training and Education section has been modified. Instead of the Commission making a determination in this decision about what kind of workforce development and education may be needed, the Workforce Transition Subcommittee (WTS) will study whether there is a need to support the energy sector's transition to a carbon-constrained future by anticipating and preparing for the resultant changes through workforce development, and will report back to the Commission on the study within six months of the Institute's inception. If the study supports having the Institute fund grants for emerging workforce development, the Commission can allocate an appropriate percentage of Institute funds for that purpose. The Commission must act on the WTS report within three months.
3.1. Need
In the OIR, we asked parties to comment on whether there was a need for the types of research and educational programs outlined in the original UC proposal and whether there was a need for the scale of research contemplated by the $60 million per year funding proposal. NRDC illustrates in its Opening Comments that public investment in energy R&D nationally has been declining for decades.
Public interest energy R&D in California hit a high of $150 million in 1991, declined to $63 million in 1994, and, thanks to system benefits charge contributions to the PIER program, has only recently risen back up to its previous levels. Unfortunately, the $62.5 million approved for the PIER program is scheduled to sunset in 2011, at which point public funding of energy R&D may return to its 1994 level. Both NRDC and the CE Council argue that, at least on a national scale, a five- to tenfold increase in spending on energy and climate-related R&D may be needed to meet the problems of climate change and that such investment would be repaid in technological innovation, business opportunities, and job growth.6 SDG&E/SoCalGas contend that while there is a great deal being spent on climate-related research, there is little being done to bridge the "gap between the scientific frontier and practical technology." Similarly, Morrison and Foerster argue that there is a strong need for an organization, such as the CICS, that can evaluate climate change issues from a broader perspective than a pure grant-making body. Several parties, including EPUC/IP/WSPA, maintain that while climate change is clearly an important issue, California is already spending a great deal on it and the Commission should first conduct an inventory of current state spending on climate change-related research to avoid funding redundant programs. CFC, among others, contends that the PIER program is already doing much of what the proposed Institute would do and that creation of the Institute would, therefore, interfere with the coordination of state policy. Other programs and research efforts that some parties claim may overlap with some of the Institute's functions are Helios, the Energy Biosciences Institute (EBI), and the Commission's proposed Emerging Renewable Resource Program (ERRP). The Commission agrees that redundancy in research is not desirable because it may result in unnecessary ratepayer and taxpayer expenditures. To ensure that this does not occur, we have, as the first priority of the Institute and the SRC, the development of an inventory that will catalog publicly and privately funded climate change-related research. When the inventory is complete, it should be submitted to the Commission as a status report. This inventory, which should be informed by the listing prepared in response to a motion filed in this proceeding by Joint Parties, will ensure that there is no duplication of efforts or unnecessary expenditure of ratepayer or public funds.7 In preparing this inventory, the Institute shall draw directly on the results of any previous inventory efforts and consult appropriate staff from relevant public agencies. The process of creating an inventory will promote efficiency and facilitate coordination and collaboration among affected agencies, academic institutions, and the private sector. Other parties indicate that there are specific areas of need that the Institute will be well positioned to address, such as energy storage, the development of "second generation" EE and renewable technologies in the electric and natural gas sectors, smart technologies in the distribution and transmission of electricity and gas, and strategies for mitigating the physical impacts of climate change on California ratepayers.
While these may all indeed be areas of great need, the Commission cannot determine at this time whether they are a better and more cost-effective investment of ratepayer funding than other possible areas of research. Accordingly, we do not, in this decision, prescribe any specific areas of research. Instead, we require that the Institute engage in a comprehensive Strategic Planning process, through the SRC, prior to funding any grants in order to identify what areas of study can achieve the greatest reductions at the lowest cost, within appropriate time frames, and to the greatest ratepayer benefit. Several parties question whether and how ratepayers will benefit from funding research and development of the kinds of technologies described in the UC proposal. Others, including CFC, are concerned with burdening California ratepayers with the cost of the CICS.8 This line of reasoning brushes aside the near certainty that Californians will face higher electric bills and other expenses if global climate change continues unabated or efforts to reduce GHG emissions are deferred. As a national leader, the State of California, through passage of AB 32, has set aggressive goals to reduce GHG emissions in the coming years. While the specific source of these reductions and how they will be achieved is far from certain, what is clear is that the electricity sector, which accounts for approximately 20% of all GHGs released within California each year, will play a central role in meeting targeted reductions. Since it is not possible to precisely predict what technologies Institute-funded research will yield, any effort to calculate the total monetary benefits, much less the portion of those benefits flowing to California ratepayers, would be highly speculative. However, we believe, as the Stern Review9 concludes, that the benefits of early action on climate change are likely to outweigh the cost of delaying action. While it is difficult to quantify the benefits that the CICS will provide to California ratepayers, we can identify the likely sources of those benefits:
1. Technologies that improve efficiency in generating and using electricity and natural gas will provide a direct benefit to California ratepayers by reducing their utility costs and improving reliability of the electric system.
2. Given the high likelihood of a multi-sector, state-wide, regional or national cap-and-trade program for GHGs, even technologies that contribute to cost-effective GHG reductions in other sectors of the economy will help to relieve demand for GHG allowances and thereby contribute to lower allowance costs for the consumption of electricity and natural gas.
3. To the extent that the CICS produces technologies that contribute to reductions of GHG emissions in California or elsewhere, California ratepayers will benefit from mitigation of the real costs of climate change.
Again, precisely quantifying these benefits is difficult. There is, however, convincing evidence that increased R&D in the energy sector saves ratepayers money.
Looking to data from other ratepayer-funded investments in R&D, such as a 1998 to 2003 review of the electric and natural gas PIER program, "[ratepayer] benefits from these investments are projected to be between $1.60 and $4.10 for every dollar contributed."10 Preceding PIER, the utilities' investments in R&D via the Electric Power Research Institute (EPRI) also provided demonstrated return on investment to ratepayers.11 Since we cannot know what specific kinds of research will be conducted until after the Strategic Planning process has been completed, we cannot precisely determine the potential return for ratepayers at this time. Cost-effectiveness and potential return to ratepayers will be critical factors both at the Strategic Planning stage and in evaluating research proposals. Ratepayer benefit, in terms of dollar-for-dollar return on investment, will only be bolstered by the Institute's commitment to collect additional funding from private sources. These matching funds will stretch the value of each ratepayer dollar contributed.
3.1.1. The CICS Mission will Help California Achieve the Goals Established in AB 32 and SB 1368
As discussed earlier, the mission of the CICS is consistent with the purpose and findings contained in Assembly Bill (AB) 32. In AB 32, the Legislature found that global warming "poses a serious threat to the economic well-being, public health, natural resources, and the environment of California." (Section 38501(a).) The Legislature further found that global warming would have a particular impact on the electricity sector by increasing "the strain on electricity supplies necessary to meet the demand for summer air-conditioning in the hottest parts of the state" while at the same time decreasing the "supply of water to the state from the Sierra snowpack." Investing in the development of innovative and pioneering technologies, the Legislature found, will assist California in achieving its GHG emission reduction goals and "will also help position its economy and businesses to benefit from future national and international efforts to reduce GHG emissions worldwide." (Section 38501(e).) SB 1368 addresses the Legislature's concern that there is a future financial risk to California consumers for pollution-control costs once there is federal regulation of GHG emissions. In response to this legislation, the CPUC and CEC have developed Emissions Performance Standards, which limit the emissions rate of new long-term financial commitments to base-load generation acquired by investor-owned and publicly owned utilities to no more than the emissions of a combined-cycle gas turbine plant. Coal-fired generation cannot meet this standard today without the deployment of carbon capture and storage technologies that are still largely in the research and development stages. With directed and applied research, coal could provide both fuel diversity and a source of electricity without an increase in GHG emissions, and commercialization of carbon capture and storage technologies would also make possible significant further emissions reductions from gas-fired power plants.
CFC raised a novel argument that, since the Legislature enacted AB 32 and directed CARB to undertake certain actions, the Legislature intended to "occupy the field of greenhouse gas emission control under the umbrella of another state agency."12 By implication and argument, CFC contends that Commission funding of the Institute is preempted; however, CFC cites no support for its argument that the doctrine of preemption applies to state agencies in a lateral fashion in the same manner that it applies to lower levels of government. We find no legal support for extending the concept of field preemption, which addresses whether a lower level of government can regulate in an area in which a higher level of government has already regulated, to co-equal state agencies. Pursuant to the AB 32 legislation, CARB is preparing a Scoping Plan that identifies how to achieve technologically feasible and cost-effective reductions in GHG emissions based on existing and projected technological capabilities. It is the intent of this decision to have the CICS Strategic Plan build off of the AB 32 Scoping Plan and work in concert with, but not duplicate, the work and funding of AB 32. In addition, CARB established an Economic and Technology Advancement Advisory Committee (ETAAC) to advise it on directions CARB and other state agencies can take in the research and development arenas to implement the emission reduction goals of AB 32. The Institute's Strategic Research Committee (SRC) is to utilize the advice from ETAAC in developing the Strategic Plan and in designing the short-term and long-term strategic plans for CICS so the grant administration process complements, and does not duplicate, CARB's efforts pursuant to AB 32.
3.2. Funding and Budget
The OIR asked parties to comment on a number of issues, including whether the proposed budget was reasonable and how the budget should be funded. Parties' comments on budget and funding can be organized into four separate issues:
1) Who should pay for the work of the proposed Institute?
2) What is the appropriate level of funding?
3) How should the costs be assessed?
4) How should the budget be allocated among the functions and tasks of the Institute?
The answer to each question informs the next. We will address each question in order, taking into consideration the comments submitted.
3.2.1. Funding
Who should pay for the work of the proposed Institute?
The OIR proposed that CICS be funded by ratepayers through a surcharge on electricity and natural gas consumption. The generation and consumption of electricity and natural gas accounts for approximately 20% of all GHG emissions released in California, so there is a correlation between these industries and the climate change problem. Parties' comments ranged from strong support by NRDC for ratepayer funding to strong opposition from DRA and CFC. Many parties echoed NRDC's comments supporting the proposed budget and funding mechanism as adequate and appropriate to the task, but recommended narrowing the scope of the research to ensure that the costs borne by ratepayers are used to fund relevant and appropriate activities. We have amended the PD to specifically ensure, through the utilization of the ratepayer benefit index in the grant solicitation and selection process, that there is a nexus between ratepayer benefit and the work of the Institute.
UC, CSU, and the private research institutions declined to comment on the appropriateness of ratepayer funding, but did make the argument that benefits could and would flow to the ratepayers as a result of the proposed activities, particularly with the narrowed focus offered in UC's revised proposal. On the other hand, numerous parties shared DRA's concern that "[d]espite broad support for CICS among the parties, no one has provided sufficient justification for ratepayer funding," and "[t]here is, at best, a limited connection between [investor owned utilities] (IOU) ratepayers and the obligation to fund the wide scope of the Institute's activities."13 Other parties observe that since climate change is a global problem with global impacts, the benefits of the Institute will fall to a far broader population of beneficiaries than just IOU ratepayers.14 TURN, IEP, Greenlining, and the CE Council all argued the scope of the CICS is broad enough that it should be funded "through legislative action and that public funding should be provided through taxes, rather than enacted by the CPUC and funded by ratepayers."15 Ratepayers, they argue, are already overburdened by public programs, such as the CEC's PIER program, and should not bear this cost alone. Some parties, in particular CFC, argue that the PD proposes "an unlawful levy of a special tax on ratepayers."16 We dismiss CFC's argument since the Institute will be funded through a surcharge, and not a tax. In addition to arguing that a tax would be more appropriate than a rate surcharge, parties, including TURN, Greenlining, and CEC, maintain that utility shareholders should bear a portion of the costs. The utilities, including PG&E, SCE, SDG&E/SoCalGas, and PacifiCorp, all reject that proposal, but echo the concern that their customers not bear an undue portion of the costs. PG&E proposes including California's publicly owned utilities (POUs) in both the funding of and participation in the CICS programs. PG&E notes that one-third of California's consumers and businesses are served by POUs.17 While DRA objects to ratepayer funding for the Institute, DRA does offer constructive suggestions for protecting ratepayer monies if the Commission does go forward. DRA proposes that any funding approved by the Commission should be limited to initial seed funding and that private donors should provide the balance of funding going forward. DRA further suggests that the Commission should limit ratepayer funding to technology and policy research and should prohibit the use of ratepayer funds for administrative expenses.18 Greenlining contends that since ratepayer money is drawn from all segments of society, the Commission must ensure that the benefits are realized by all segments of society, including low-income and minority communities. Greenlining further argues that UC has historically been ineffective at reaching diverse and disadvantaged communities.
Discussion
Taxpayer funding may indeed be a preferred means of financing the Institute, as some parties have argued. We are concerned, however, that waiting for collective statewide action to establish the framework for the Institute and authorize funding will incur undue delay. Put simply, given the urgency of the climate change issue as recognized by the Legislature and authorities such as the Stern Review, the time for action is now. We find that, in the absence of statewide legislation authorizing a tax to fund the Institute, it is appropriate to use ratepayer monies.
While we are mindful of the Commission's responsibility to ratepayers and of the growing number of public programs they support financially, as we discussed above, we believe that the benefits of these programs will flow back to ratepayers and that inaction now will likely result in higher costs for ratepayers in the future. Following the mandate of AB 32, if the electric and gas utilities are not able to reduce their emission levels, ratepayers will be paying more. Furthermore, today's decision does not approve funding for unfocused, exploratory academic research, as asserted by some concerned parties. The primary mission of the CICS is to develop technologies and mechanisms that are practical, ready for implementation, and will result in actual and cost-effective GHG reductions. The causes and cures for climate change cannot be segregated on a sector-by-sector or industry-by-industry basis. Indeed, interconnection is the baseline premise on which various carbon reduction strategies are based. We agree with UC that the primary benefit to be gained as a result of the CICS is not revenue generated from IP or licensing agreements but a stream of commercially deployable technologies that will reduce GHG emissions or help California adapt to the impacts of climate change. Nonetheless, we agree that there should be a direct tie between funded projects and benefits to ratepayers. Accordingly, a ratepayer benefit index that ranks proposed projects from high ratepayer benefit to low, or no ratepayer benefit, will be an integral component that informs the entire grant process from solicitation through selection. The high-to-low continuum would give a high index score to a proposal that is expected to produce a cost-effective measurable reduction in GHG emissions in the electric and gas sectors, and the index would go down as the measures were less cost-effective, had lower levels of measurable reductions, or promised significant reductions but in another sector with no nexus to the electricity or gas industries. The ratepayer benefit index must be a key segment of the Strategic Plan so that there is a consistent thread of ratepayer benefit running through the grant process. Requests for applications (RFAs) for grants must include the ratepayer benefit index score, grant applicants must include a ratepayer benefit analysis in their proposals, and the grant selection process will assess the applicant's score and employ it in weighing the ratepayer benefit component in comparing competing applications. While we recognize that the index is more qualitative than quantitative, and the scores are not exact measurements, its use will ensure that the focus of the funded proposals is consistent with the Commission's goals of having the Institute and its funding produce used and useful solutions that have the potential for a return on the ratepayers' investment. Any proposal that does not show that it will benefit the ratepayers will not be chosen for CICS funding. The Strategic Plan, including the ratepayer benefit index, shall be submitted to the Commission for approval. We also agree with parties that ratepayers should not be the sole source of funding for the Institute. In a perfect world, we agree that POU ratepayers should also contribute funding to the Institute, and we urge the POUs to do so voluntarily. Other sectors, most notably transportation, should also contribute, expanding both the scale and scope of the Institute.
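Because the decision leaves the precise construction of the ratepayer benefit index to the Strategic Planning process, the following is offered only as an illustrative sketch of the high-to-low ranking logic described above; it is not part of the adopted decision, and all field names, weights, and example scores are hypothetical assumptions.

```python
# Illustrative sketch only: a hypothetical ratepayer benefit index used to rank
# competing grant proposals from high to low ratepayer benefit. All field names,
# weights, and example values are assumptions, not adopted requirements.

from dataclasses import dataclass

@dataclass
class Proposal:
    name: str
    cost_effectiveness: float    # 0-10: cost-effectiveness of expected GHG reductions
    measurable_reduction: float  # 0-10: degree to which reductions are measurable
    sector_nexus: float          # 0-10: nexus to the electricity and natural gas sectors

def ratepayer_benefit_index(p: Proposal) -> float:
    """Hypothetical high-to-low index; a proposal with no nexus to the
    electric or gas sectors scores zero and is ineligible for funding."""
    if p.sector_nexus == 0:
        return 0.0
    return 0.4 * p.cost_effectiveness + 0.3 * p.measurable_reduction + 0.3 * p.sector_nexus

proposals = [
    Proposal("Grid-scale storage pilot", 8.0, 7.5, 9.0),
    Proposal("Transport-only biofuel study", 6.0, 5.0, 0.0),  # no electric/gas nexus
]

# Rank proposals from highest to lowest index, as the selection process would.
for p in sorted(proposals, key=ratepayer_benefit_index, reverse=True):
    print(f"{p.name}: index = {ratepayer_benefit_index(p):.1f}")
```

Under this sketch, a proposal with no discernible nexus to the electricity or gas industries scores zero and would not be chosen for CICS funding, consistent with the requirement stated above.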
In order to leverage the initial funding and to spread the burden of the costs associated with funding the Institute, we include, among the central duties of both the Institute Executive Director (Executive Director) and the Executive Committee of the Governing Board, the solicitation of additional funds from non-ratepayer sources. Ratepayer funds should be a catalyst for other public and private funding, and we require that, over the 10-year life of the Institute, 100% of the $600 million ratepayer investment be matched with non-ratepayer funding. We expect a 100% match annually beginning in Year 5. Each year's Annual Report is to include data on the amount of matching funds raised and the status of the Executive Director's fund-raising efforts.
What is the appropriate level of funding?
Climate change is a global problem, and the total costs of mitigating its impacts and adapting to its consequences put in perspective the $60 million per year expenditure proposed by UC. As noted by UC, the Stern Review on the Economics of Climate Change19 suggests that California alone will ultimately pay many times this amount to combat the worst effects of climate change. DRA and others argue that there is insufficient detail to properly assess the level of funding, but many parties agree with CSU's comments that the budget is relatively modest given the scope of programs proposed.20 USC suggests that funding levels be adjusted for inflation, "which would place the total 10 year budget on the order of $700 million."21 Many parties emphasize the need to leverage additional funds, including federal and private monies, a concept we strongly endorse and specify as a duty of the Executive Director. The revised UC proposal provides additional detail about the proposed budget and the relative size of the need. Comments by several parties and the presentations at the workshop support the proposed budget. Professor John Weyant of Stanford University's Department of Management Science and Engineering praised the collaborative nature of the UC proposal and the ability of academic research to mitigate risk and speed technologies and innovation to the market, as well as to produce significant "spillover benefits."22 Leah Fletcher of NRDC endorsed the proposed budget, citing concern about historically declining investment and stressing the need for CICS funding not to replace but to complement existing funding.23 Meeting the goals of California's Energy Action Plan (EAP)24 and AB 32 will be challenging. $60 million per year could be viewed as a down payment on meeting the commitments that have been set by the Legislature. We do not suggest that this annual budget is adequate on its own, nor do we intend to shirk our responsibility to ratepayers to make sound investments for the future. Given the likely costs of inaction on climate change solutions and the limited resources currently available, we find that the proposed budget is appropriate and reasonable.
How should the costs be assessed?
In response to this question posed in the OIR, parties agree that the costs should be spread as equitably as possible across both electric and gas customers in the IOU service territories. The environmental and consumer groups, including DRA, argue that, if ratepayers must pay, it should be on an equal cents per therm or kWh basis, allocating costs based on the use of energy.
Climate change, as discussed throughout this proceeding, is a global problem driven in large part by our consumption of energy, so energy use is a logical and equitable means of apportioning the costs of mitigation. On the electricity side, the three largest IOUs argue for an equal percentage of revenue basis, similar to the methodology used for EE and distributed generation (DG) incentive programs. This would have the effect of slightly shifting costs onto residential and small commercial customers, who are proportionally the greater beneficiaries of those programs. PacifiCorp is a notable exception, agreeing that an equal cents per unit charge is the most equitable.25 On the gas side, PG&E, SCE, SDG&E and SoCalGas join SCGC in recommending that CICS costs "should be recovered from gas ratepayers through the natural gas public purpose surcharge," which would de facto exempt natural gas-fired electricity generators from bearing CICS costs. If, on the other hand, the Commission recovers costs from the base rate of the gas utilities, SCGC argues that the gas-fired electricity generators should be explicitly exempted. As support for this exemption, SCGC cites the precedent established by the Legislature in creating the natural gas public purpose surcharge, the California Solar Initiative (CSI), and the Solar Water Heating and Efficiency Act of 2007.26 They argue that if CICS costs are assigned to gas-fired electricity generation, California electricity consumers would potentially have to pay the direct costs on a cents per kilowatt-hour (kWh) basis, the indirect costs of the equal cents per therm charge, and the higher price that "would be charged by non-gas-fired generators as a result of the wholesale spot price of electricity being inflated by the imposition of the new CICS charge on marginal gas-fired electricity generators."27 We agree that the costs of the Institute should be borne by both electricity and gas customers, and that they should be assessed on an "equal cents per unit" basis. We find that double-charging electricity consumers is an inequitable outcome, and so gas used for electricity generation supplied to IOU customers should be exempted. The costs should be apportioned among the utilities and between gas and electric customers based on the percentage of total 2007 state revenues once wholesale sales of gas and electricity, sales of gas for electric generation, and DWR revenues are excluded. Other exemptions, established consistently across all participating utilities and in consultation with the Energy Division, may be included. This should result in an approximately 70-30 split between electric and gas ratepayers, respectively. The allocation across electric utilities will be based on recorded data from 2007 for kWh subject to this charge, and the allocation across gas utilities will be based on the recorded data from 2007 for therms. AB 1X, enacted during the peak of the 2000-2001 electricity crisis, places additional restrictions on who will ultimately pay the costs of funding the Institute by freezing rates for residential ratepayers who consume less than 130% of baseline. To the extent that some low-income and small users will be exempted, the costs will be borne by a smaller percentage of IOU customers.
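As a purely illustrative sketch of the apportionment arithmetic described above, and not an adopted rate calculation, an equal cents per unit surcharge could be derived roughly as follows. The sales figures are hypothetical placeholders; the actual rates will be established through the utilities' advice letters based on recorded 2007 data and the exemptions discussed above.

```python
# Illustrative sketch only: deriving an "equal cents per unit" surcharge from the
# $60 million annual budget under the approximately 70/30 electric/gas split.
# The sales figures below are hypothetical placeholders, not recorded utility data.

ANNUAL_BUDGET = 60_000_000        # dollars per year

electric_share = 0.70             # approximate split adopted in the decision
gas_share = 0.30

kwh_subject_to_charge = 150e9     # hypothetical 2007 IOU kWh subject to the charge
therms_subject_to_charge = 9e9    # hypothetical 2007 IOU therms subject to the charge
                                  # (exempt sales, e.g., gas for electric generation, excluded)

surcharge_per_kwh = ANNUAL_BUDGET * electric_share / kwh_subject_to_charge
surcharge_per_therm = ANNUAL_BUDGET * gas_share / therms_subject_to_charge

print(f"Electric surcharge: {surcharge_per_kwh * 100:.4f} cents/kWh")
print(f"Gas surcharge: {surcharge_per_therm * 100:.4f} cents/therm")
```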
Utilities are hereby ordered to submit advice letters detailing the rate impacts of an equal cents per therm and an equal cents per kWh charge, with all the exemptions detailed above and compliant with the restrictions of AB 1X, and apportioned according to the percentage of revenues from gas and electricity consumption. The total budget for the Institute will be $600 million over ten years, with an annual budget of $60 million. Revenue collection for the Institute should begin as early as feasible, consistent with standard practice and the Public Utilities Code.
3.2.2. Budget
How should the budget be allocated among the functions and tasks of the Institute?
Many parties commented on how the budget should be apportioned among the many tasks and priorities of the Institute. In response to parties' comments, we amend the Charter and the duties of the Executive Director to include the obligation to prepare and submit to the Commission for approval a yearly proposed budget. With this added layer of Commission oversight and control, we do not find it necessary to be overly prescriptive with regard to the allocation of funds. The Institute's specific priorities will be established through the Strategic Planning process, and then, yearly, the Executive Director, in consultation with the Governing Board, can propose an allocation of the money among the Institute's mission objectives and present it to the Commission for approval. Nonetheless, we provide general directives in order to ensure that ratepayers' needs are met and funds are used effectively. First, the Institute will not be the repository of the CICS funds; the utilities will collect and hold ratepayer funds until the appropriate time for allocation. Administrative and hub expenses shall be paid out on a monthly basis. Grant awards shall be paid directly by the IOUs to the grantee. One IOU shall be designated as the collecting agent. The Institute will only be given the funds it needs to run the hub and carry out the necessary administrative functions of the Institute. However, funds held by the Institute should be strictly segregated from other funds for accounting purposes. Any and all funds paid by California IOUs on behalf of their ratepayers should be kept in an interest-bearing account so that both the principal deposits and any interest generated by those deposits are reserved for the purposes of the CICS. No ratepayer money, or the interest generated by it, may be used for non-CICS purposes. Next, we have identified "cost centers" or functions for the Institute:
1. Hub expenses - including administrative costs, staff salaries, development of the Strategic Plan, grant administration, and dissemination of research findings.
2. Money for grants and programmatic grants issued for the purpose of research, development, and commercialization of technology.
And finally, we clarify that any unspent monies from any yearly budget are to be rolled over to the next budget year of the Institute. Any unspent funds remaining at the end of the 10th year are to be returned to ratepayers, unless the Commission acts to continue ratepayer funding of the Institute.
Parties' Comments
All parties agree that administrative expenses should be kept to a minimum. Since the Institute hub will be responsible for overseeing and coordinating the Strategic Planning function as well as developing RFAs and awarding grants, we include these, along with more traditional administrative costs, as hub expenses.
The parties also foresee relatively higher up-front costs for hub expenses, including staffing and leasing office space, and especially the initial Strategic Planning exercise, which must be completed before work in the other areas can begin. This means that the first-year hub expenses may exceed the hub expenses incurred in following years. Parties' estimates of the total amount needed to run the hub range from a low of 5% of the total budget up to 15% of the total budget.
Discussion
Mission-based applied and directed technological R&D, as facilitated by the grant administration process, is the primary purpose of the Institute. As such, the majority of the funds provided to the Institute should support such projects. Therefore, we establish that a minimum of 85% of the total CICS funding must be allocated to competitively awarded grants for applied and directed R&D. A maximum of 10% of the total CICS funding may be allocated to the hub and administrative functions, including the Strategic Planning process. The Executive Director, in consultation with the Governing Board, may exercise discretion with any unallocated funds and make recommendations to the Commission in each year's proposed budget for those funds.
Hub Expenses
The costs and expenses for activities that will occur within the Institute's hub or headquarters include the cost of leasing physical space, the salaries of the Institute's officers and staff, support and per diems for the SRC, grant administration, hosting relevant conferences and workshops directly related to R&D activities of the Institute, and the cost of necessary office equipment, computers, and supplies. The amount set aside for hub expenses should also cover all costs related to developing and updating the Strategic Plan. In the first two years, we recognize that there may be high start-up costs, and thus we grant some latitude to the Institute to spend more on administrative fees during those first two years, so long as this extra spending is justified in the audit and is reduced in later years so as not to overspend on administration.28
Applied R&D
In order to meet California's aggressive clean energy and GHG emission reduction goals, a broad array of technology must be developed, much of which is far from market-ready and some of which is still in early conceptual or design stages. Conducting R&D requires considerable resources. Technologies and innovations from CICS funding that are developed into useful products and services that can benefit the public are also likely to yield the highest direct ratepayer benefit. As such, it is reasonable to require that the bulk of Institute funding be used for this purpose. The Commission expects a minimum of 85% of the annual budget to be spent on grants for applied research intended to support the goals of AB 32, the state's EAP, and other policy directives. We expect the Institute to coordinate its IP, technology transfer, and commercialization efforts with the proposed Emerging Renewable Resource Program (ERRP), which, if approved, will consider applications for the use of emerging, commercially immature technologies in utility-scale renewable generation projects. The relatively small amount of Institute money available for commercialization is insufficient for utility-scale demonstration projects. Accordingly, we require that the Institute staff and the TTS coordinate with the Commission's Energy Division where appropriate.
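For illustration only, the allocation rules adopted in the Discussion above can be applied to a single $60 million program year as follows; the unspent amount shown is a hypothetical example, not a prescribed figure.

```python
# Illustrative sketch only: the adopted annual budget constraints applied to one
# $60 million program year. The unspent amount is a hypothetical example.

ANNUAL_BUDGET = 60_000_000

MIN_GRANT_SHARE = 0.85   # at least 85% to competitively awarded applied/directed R&D grants
MAX_HUB_SHARE = 0.10     # no more than 10% to hub and administrative functions

min_grants = ANNUAL_BUDGET * MIN_GRANT_SHARE          # $51.0 million minimum for grants
max_hub = ANNUAL_BUDGET * MAX_HUB_SHARE               # $6.0 million maximum for the hub
unallocated = ANNUAL_BUDGET - min_grants - max_hub    # up to $3.0 million for the Executive
                                                      # Director's annual recommendation

# Unspent funds in any year roll over into the following budget year.
unspent_this_year = 2_000_000                         # hypothetical example
next_year_available = ANNUAL_BUDGET + unspent_this_year

print(f"Minimum for grants: ${min_grants:,.0f}")
print(f"Maximum for hub/administration: ${max_hub:,.0f}")
print(f"Remainder available for recommendation: ${unallocated:,.0f}")
print(f"Available next year with roll-over: ${next_year_available:,.0f}")
```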
3.2.3. Equipment Purchases
UC proposed that part of the budget should be set aside for the purpose of acquiring equipment to support complex monitoring systems, servers, and databases for measurement and informatics. We decline to dedicate any ratepayer funds for this purpose. UC has not sufficiently demonstrated how this equipment is necessary to support the Institute's other functions. While the Institute will certainly have to purchase or lease hardware and develop databases to construct the Strategic Plan and carry out its business, the Institute is primarily a grant-making body and is not tasked with doing any original research. We therefore cannot support an equipment expenditure as large as UC requests. However, we do not restrict the acquisition of equipment by recipients of grants who have identified in their grant application the need for specific equipment as a necessary component of their research project.
3.3. Governance and Organization
The Institute will have a Governing Board with an Executive Committee, an Institute Executive Director, a Managing Director, necessary staff, the SRC, and subcommittees. The geographical location of the Institute's headquarters or hub, at which the Institute's staff maintains offices, shall be determined by the Governing Board through a competitive solicitation. In its comments, UC offered its assistance to the Commission to run the competitive solicitation process for the hub. While we acknowledge UC's experience in running competitive solicitations, we do not want to create a potential conflict of interest if UC submits a proposal to host the hub. Therefore, we direct the Governing Board to conduct the solicitation for the hub by issuing a Request for Proposals to which all non-profit California-based entities, including, but not limited to, public and private universities, interested in hosting the hub may respond. A peer review committee will rank proposals and present the rankings to the Governing Board for selection. We reiterate here that all members of the Governing Board are subject to the Conflict of Interest provisions set forth in Attachment B, and therefore no member affiliated with an institution that submits a proposal to host the hub may vote on the hub selection. We adopt many of UC's suggestions as to what applicants should include in their proposals and add some additional ones.
Specifically, institutions are to provide the following information that will be considered in the hub selection process:
¬ A detailed description of how they would host the Institute in a way that would advance the Institute's mission of applied and directed R&D and commercialization of technologies;
¬ How the hub would be structured to utilize the existing or planned resources of the institution;
¬ How the infrastructure and existing systems of the host institution can serve the Institute;
¬ A description of the physical space;
¬ What intellectual and other resources the proposed hub has that could enhance the Institute;
¬ Whether the host institution would house the Institute wholly within;
¬ How they would control and manage the administrative costs of the hub;29
¬ How they would maintain a web portal;
¬ How they would approach the Strategic Planning process to ensure that the SRC fulfills its duties to create an inventory of existing programs, identify uncharted areas of R&D, and focus on R&D that has a ratepayer benefit;
¬ A description of practices and policies they intend to use to support participation of members who are broadly representative of the population of California in the projects funded by the CICS;
¬ How much matching funding they will commit to raise; and
¬ How they could ensure that, whether the hub's geographic location is in northern or southern California, it could serve the interests of the entire state.
While the PD proposed that the Institute was to have a physical presence in both northern and southern California, we now defer to the Governing Board the decision on how best to ensure this presence, depending on what institution is chosen as the hub and how it plans to serve the needs of all of California.
3.3.1. Governing Board
The Governing Board will be responsible for ensuring that the CICS fulfills its mission and complies with the requirements set out in this decision. The comments presented numerous suggestions for the Institute's Governing Board. In general, the parties advocated diverse and broad representation, including stakeholders from the Senate and Assembly, utilities/ratepayers, experts in the scientific and academic fields, the California university community, consumer groups, the CPUC and other energy-related state agencies such as the CEC and CARB, the UC system, the environmental community, and private industry. We find that the Institute would benefit from a broad-based Governing Board. (See Attachment C.) In response to a consensus of opinion among the parties to this proceeding, no single organization or interest holds a majority of seats on the Governing Board. The Governing Board will be co-chaired by the President of the Commission, or his/her designee, and the President of UC, or his/her designee, for the first three years of the Institute's existence. Thereafter, the co-chairs shall be chosen by majority vote of the board. The Governing Board shall have an Executive Committee of nine, which shall also be co-chaired by the President of the Commission and the President of UC. The President of the Commission shall select four members from the Governing Board to serve on the Executive Committee, and the President of UC shall select three members from the Governing Board for the Executive Committee.
Members of the Governing Board shall serve for staggered three-year terms.30 At all times between meetings of the Governing Board, the Executive Committee shall have all the duties and authorities of the Governing Board, except for the limitations set forth in the Charter. A two-thirds vote of the Governing Board is necessary to remove any members. All members of the Governing Board, including the Executive Committee and other subcommittees, will serve without compensation and shall be subject to the Governing Board conflict of interest policy (see Attachment B). The Governing Board will select a location and host for the Institute's hub following the solicitation protocols set forth herein; appoint an Institute Executive Director and a Managing Director; appoint members to the SRC; review and approve the Strategic Plan, including short-term and long-term goals; review and approve the annual budget prepared by the Institute's staff;31 and review and, if appropriate, approve aggregated lists of proposed grants compiled by the Institute's staff for each RFA cycle. The Governing Board shall have the power to establish any subcommittees necessary to perform its duties and responsibilities. At a minimum, there will be a Technology Transfer Subcommittee (TTS), a Conflicts of Interest Subcommittee, and a Workforce Transition Subcommittee (WTS). The TTS will be responsible for reviewing existing UC IP and technology transfer policies and developing IP and technology transfer policies and protocols specific to the Institute. The Conflicts of Interest Subcommittee will be responsible for reviewing existing UC conflict of interest policies and developing conflict of interest protocols that will apply to CICS staff and the SRC and be submitted to the Governing Board as a whole for adoption. The Workforce Transition Subcommittee of the Governing Board will study ways to support the energy sector's transition to a carbon-constrained future through anticipating and preparing for the resultant changes in its workforce needs and report back to the Commission on the study. The workforce transition study should identify gaps in current workforce development programs with specific reference to new professional and job opportunities likely to result from the transition of California toward its green energy economy goals. The study should make recommendations on how best to coordinate the industry, government, academic, business, and professional groups relevant to filling those gaps and should present a detailed plan. This plan may include a recommendation as to whether the Institute should take on a role in workforce development, and, if so, how the Institute could best collaborate with others to fill those gaps, as well as specific recommendations to relevant State and public authorities such as educational and vocational institutions. We recognize that funding workforce training programs is not a direct purpose of this research Institute; however, the study may include suggestions for funding or providing matching money for discrete projects that are not to be funded by others in an effort to jumpstart appropriate emerging workforce development programs. The WTS shall submit its study to both the Governing Board and the Commission within six months from its initiation, and the Commission should act on the subcommittee's recommendations within three months of receipt.
If the study supports having the Institute fund grants for emerging workforce training and the Commission concurs in this recommendation, the Commission may allocate an appropriate percentage of Institute funds for that purpose.
3.3.2. Institute Director
The Governing Board shall conduct a national search for a qualified Institute Executive Director. The Executive Director should have expertise in climate change science, technology, or policy, should have demonstrated fundraising abilities, and should be familiar with grant administration processes. The Director shall be responsible for:
1. Overseeing the requests for grant applications and managing the grant administration process, including the evaluation and approval of individual grants;
2. Organizing and supervising the peer review process;
3. Overseeing the Strategic Planning process;
4. Soliciting non-ratepayer funding for Institute programs and optimizing financial leverage opportunities for the Institute;
5. Supervising and causing the completion of all annual reporting and auditing processes;
6. Making all necessary arrangements for the biennial external performance review;
7. Interfacing with the California Public Utilities Commission, the California Environmental Protection Agency, the CEC, the California Legislature, the Governor's Office, all other relevant local, state and federal government agencies and organizations, and the public; and
8. Negotiating the terms of grant awards, intellectual property agreements, and agreements to secure additional, non-ratepayer funding.
The Executive Director will have authority, subject to the oversight of the Governing Board, to: organize, administer, and commit the resources of the Institute as necessary for the administrative function of the Institute's hub; make personnel decisions; and appoint and replace members of the SRC and the subcommittees. The Institute will also hire a Managing Director. The duties and responsibilities of the Managing Director shall be established by the Governing Board in consultation with the Executive Director. The Executive Director may delegate any of his or her duties and responsibilities to the Managing Director. The Executive Director shall submit to the Commission for approval the Strategic Plan, the annual proposed budget, and the annual report that includes the annual audit. The Executive Director should be prepared to appear at a public Commission meeting to answer questions on the reports and to consult with the Commissioners.
3.3.3. Strategic Research Committee
The SRC will be chosen by the Governing Board from a list of nominees compiled by Institute staff. The SRC shall have no more than 20 members, all residing within California or associated with an entity with a presence in California. The nominees should be experts from universities, research institutes, government, industry, and the environmental community. The nominees must have subject matter expertise in the fields of climate change science; green technology; electrical generation, transmission, and storage; energy efficiency; renewable generation; engineering; biotechnology; carbon capture and sequestration technology; and forestry and agriculture. The SRC will be responsible for:
1. Developing a Strategic Plan by March 13, 2009, and updating it on an annual basis.
2. Assisting the CICS officers in developing short-term and long-term goals that are consistent with the Strategic Plan.
3. Reviewing grant proposals recommended by the peer review committee.
The SRC will also provide a forum for researchers and research managers to have an ongoing dialogue with industry and government regarding the direction, scope, and relevance of the Institute's research. It will be responsible for recommending potential mid-course corrections to the Strategic Plan, in the event that they become necessary. The SRC will allow for a convergence of technical insight, market intelligence, and policy priorities with academic expertise. The SRC will assist the Institute's staff in developing and implementing the grant administration process. It is expected that the SRC will be involved in all planning phases prior to the release of RFAs so that the RFAs clearly reflect the priorities established in the short-term and long-term strategic plans. Members of the SRC will serve at the pleasure of the Governing Board. They will be reimbursed for all direct expenses incurred as a result of serving on the SRC and will collect a small per diem that will be established each year in the annual budget.32 Members of the SRC are subject to the CICS conflict of interest policy statement.
3.3.4. Strategic Plan
The Institute will need a Strategic Plan to effectively and cost-efficiently administer grants in a targeted fashion. To accomplish this, the SRC is to undertake the following tasks:
1. Conduct an inventory of current publicly and privately funded research efforts to meet the requirements of AB 32;
2. Identify areas of technological innovation, not currently being developed, that will bring about the most promising options for reducing GHG emissions;
3. Develop a ratepayer benefit index and identify the uncharted R&D areas that will bring about the highest ratepayer benefits; and
4. Utilize the resources that the hub provides to execute the above functions.33
While CARB and the Climate Action Team are developing a scoping plan and analyzing a set of measures to meet the targets set in AB 32, the CICS will identify specific applied and directed research that is focused on practical technological solutions for reducing GHG emissions. There is currently no centralized, statewide, directed R&D effort addressing how to get from present emission levels to those established in AB 32, and no one institution with the mandate to evaluate and fund the most promising options for reducing GHG emissions. We believe this is a key role that CICS can fulfill, and that the SRC can facilitate. To implement this focus for the CICS, the SRC will begin by conducting an inventory of current publicly and privately funded research efforts. The SRC should utilize CARB's scoping plan, as well as the recommendations the ETAAC is preparing, in developing a Strategic Plan for the Institute. By coordinating with the other state agencies and programs that are also working on AB 32 goals, the SRC will avoid redundancy and waste. Once the inventory is complete, it is to be submitted to the Commission as a report. Next, the SRC is to identify areas where applied and directed research focused on practical technological solutions is needed to reduce GHG emissions. As much as possible, the SRC shall rely upon and not duplicate existing work done by the CPUC, CARB, CEC, or other entities that have identified opportunities for and barriers to GHG emission reductions.
The third critical task for the SRC is to develop a ratepayer benefit index that will then be a key component of the Strategic Plan and will inform the entire grant process from solicitation through selection. And, finally, the SRC is to utilize the resources available at the hub to create the CICS Strategic Plan.
3.3.5. Short-Term and Long-Term Goals
The SRC and Institute staff shall develop short-term and long-term goals that implement the "umbrella" Strategic Plan. These goals should emphasize areas of research that will achieve the greatest GHG reductions at the lowest cost and to the greatest benefit of ratepayers, as determined by the ratepayer benefit index. The Strategic Plan will determine for which areas of research the Institute will develop the grant RFAs. The short-term strategic plan should identify technologies that are 1-5 years away from being commercially deployable. The long-term strategic plan should identify those areas in which the lowest-cost GHG reductions can be accomplished to the greatest ratepayer benefit but that need technological innovation to be realized. The long-term strategic plan should be focused on technologies that are 5-50 years away from being commercially deployable.
3.4. Grant Administration and the RFA Process
One of the central functions of the Institute is issuing RFAs for grant applications, reviewing proposals, and awarding grants. The grant award process must be competitive in order to ensure that the most qualified individuals and institutions with the most viable ideas carry out ratepayer-funded research. The California Institute for Regenerative Medicine provides a useful model in its Grants Administration Policy.34 While we decline to elaborate in detail how such a process shall work for the CICS, the Institute's staff shall develop a Grant Administration Policy specific to the Institute and present it to the Governing Board for adoption in the bylaws prior to the initiation of the first RFA. We repeat here the Conflict of Interest policy whereby no Governing Board member affiliated with a person, entity, or institution that applies for a grant may vote in the grant selection process. The Grant Administration Policy shall be consistent with the following:
1. Eligibility
(a) To be eligible to apply for a CICS grant, an individual must be a resident of California, be employed full time by a California-based entity, or be affiliated with a California institution.
(b) Applicants for a CICS grant must submit a statement of qualifications demonstrating expertise in research, development, demonstration, deployment, or commercialization of technology relevant to a specific RFA. Specific qualification standards shall be adopted as part of the Grant Administration Policy.
(c) Applicants for a CICS grant need not hold an academic position or be affiliated with a university or a publicly or privately funded research laboratory.
(d) Applicants for a CICS programmatic grant must be affiliated with a California institution.
(e) Collaborative teams, including partnerships between relevant private and public sector entities, should be encouraged.
(f) Individuals residing outside of California and entities based outside of California may apply as part of a collaborative team that includes a California-based entity.
2. Application Submission
Institute grant funding opportunities will be announced via an official solicitation, referred to here throughout as an RFA, on the CICS website.
Each announcement or solicitation will specify the objectives and requirements that apply, and the review criteria that will be used to evaluate the merits of applications submitted in response to the announcement, including the ratepayer benefit index.
3. Application Review
The Grant Administration Policy should specify appropriate procedures and steps in the application review process. The Policy should establish specific criteria for review of research grant applications and create a formal process for appeals of scientific review and approval of funding notices.
4. The application review process shall include:
(a) An analysis of the nexus between the proposed project and ratepayer benefits using the ratepayer benefit index that will be incorporated into the RFA documents. Ratepayer benefit is to be used as an evaluative tool by grant reviewers in comparing grant applications and must be a factor that is explicitly considered in the peer review process (see below). Unless a proposal can articulate a ratepayer benefit, it cannot be the recipient of a CICS grant.
(b) An objective scoring system for judging the scientific merit and viability of each application that will be used by both the SRC and peer review panels.
(c) Anonymity of individual applicants and applicant entities, except as provided below.
(d) A cutoff score to narrow the pool of applicants prior to compiling a shortlist of finalists.
(e) An opportunity, if deemed necessary, for the SRC and peer review panels to interview finalists about details of each proposal prior to awarding a grant.
5. Sharing of Intellectual Property
CICS grantees shall share IP generated under a CICS grant according to CICS IP and Technology Transfer protocols.
6. Preference for California Suppliers
The CICS should expect the grantee to purchase from California suppliers, to the extent reasonably possible, the goods and services it uses in its CICS-supported research. The grantee must provide a clear and compelling explanation in its annual programmatic report for not purchasing more than 50 percent of its goods and services from California suppliers.
7. Confidentiality
The Institute's grant administration policy should include confidentiality rules that, to the degree permitted by California law, allow applicants to designate commercially sensitive information as confidential.
8. Additional Funding
Grant applicants are encouraged to seek funding from sources other than the CICS. Accordingly, the level of matching funds secured from other sources must be a factor that is explicitly considered in the peer review process.
3.4.1. Peer Review
Several parties strongly urged the Commission to ensure that grant awards be disbursed according to an open, competitive, peer-reviewed process. Many parties stated that peer review was the key to ensuring that individual grant awards generated the highest-quality work. Both CCST and CSU urged the Commission to keep the process open by having peer review by recognized experts in the various disciplines. USC agrees that a peer review process would help ensure that project funds go to the "most qualified" institutions.35 Morrison and Foerster proposes that a peer review board should be established to review the grant proposals and assist with monitoring and evaluation.36 We agree that impartial peer review is an important function of any grant-making body. Peer review ensures that grants are awarded and administered in a fair and objective manner.
We are not convinced, however, that a permanent peer review board would have the broad expertise required to effectively evaluate highly technical grant applications submitted in response to an RFA. Instead, ad hoc peer review panels should be assembled for each RFA. The expertise of each panel can then be tailored to match the subject area that is the focus of the RFA. If, for example, the Institute issues an RFA for a certain kind of electricity storage technology, the peer review panel should be composed of experts with knowledge as specific to that kind of electricity storage technology as possible. We require that Institute staff, in consultation with the Governing Board, develop a complete peer review process for the grant administration processes. All grant applications must be reviewed prior to being put on a short-list or approved for funding. The grant administration peer review process should be consistent with the following requirements: (1) Peer review groups should be composed of experts who are unaffiliated with any of the applicants, to the degree possible. This means that peer reviewers can be selected from institutions outside of California and outside the United States. (2) Peer reviewers should not know the identity or institutional affiliation of an applicant. (3) Peer reviewers may not be compensated for their work. (4) The peer review process should be structured so that it does not unduly delay awards for grants. Accordingly, each peer review panel will have a designated chairperson who will set a schedule to which the rest of the peer review panel will be bound. 3.5. Oversight and Accountability In the OIR, we asked parties to comment on certain aspects of oversight and accountability including the role the CPUC should play in overseeing Institute programs, CPUC control of expenditures to maximize ratepayer benefits, and performance measures or guidelines that may be applied to funding. Oversight of the CICS shall be performed by the Governing Board and the Commission. In response to parties' comments, we have taken several accountability measures that will safeguard ratepayers' interests and ensure ongoing oversight. First, the Governing Board has several members who are accountable to the ratepayers, including one Commissioner, the Director of DRA, and a representative from an IOU. Second, there will be two legislators on the Governing Board. Third, and most importantly, the Commission maintains extensive continuing oversight authority. This decision is not a contract and does not obligate the Commission in any way going forward. The terms and requirements of the grant of ratepayer funds can be modified by any subsequent Commission decision. The Commission shall vote on several key aspects of the Institute. As described in Section 3.3.1, the Commission shall approve any non ex officio appointments to the Governing Board and the members of the Executive Committee. Furthermore, the annual proposed budget and the annual report, which includes a financial audit, shall be brought to the Commission for approval once they have been approved by the Governing Board.37 The Commission will also have an opportunity to approve the Strategic Plan and the IP and technology transfer policies and protocols drafted by the TTS that are specific to the Institute. In addition, the Commission will review any proposal submitted by the WTS on recommendations for supporting the energy sector's transition to a carbon-constrained future through workforce development. 
Finally, we require that the Institute submit to the Commission two external audits: a biennial performance review and an annual financial audit. Through these reports the Institute must demonstrate that it is accomplishing the goals set forth for it, and it must demonstrate that it is spending ratepayer money efficiently and prudently as directed by the Commission. 3.5.1. Annual Financial and Progress Report Given the magnitude of ratepayer funding and the wide interest in the activities of the Institute, we find that annual external financial audits are warranted and we direct the Executive Director to be responsible for ensuring that this audit occurs, that the external auditors are given all necessary Institute data to undertake the audit, and that the audit is delivered to the Governing Board for review and approval within 90 days of the close of each fiscal year of the Institute's operation. Once approved by the Governing Board, the financial audit is to be submitted to the Commission's Executive Director for approval by the Commission in compliance with the protocols established herein. DRA and PG&E both emphasize the need for annual reports. DRA suggests that annual reports include information on revenues and expenditures, the status of funded projects, and projected activities for the next year. PG&E recommends that two annual reports be required: a financial report and a programmatic report. We agree that annual reports must include both financial and programmatic information, but we do not see the need for two separate reports, particularly since we are ordering that the Institute have an external annual financial audit. Accordingly, we hereby order the Institute Executive Director to present an annual report to the Governing Board within 90 days of the close of the fiscal year of the Institute. The annual report will serve as an internal assessment by the Institute of its own performance. The annual report shall be posted to the Institute website following approval by the Governing Board. The annual report will describe the activities of the Institute during the course of the year including but not limited to fundraising activities, RFAs issued, grant applications received, grants awarded, relevant conferences organized, and accomplishments achieved by the Institute and its grantees. The annual report must also include externally audited financial statements and a summary of expenditures and funds received. The Institute shall maintain detailed financial records under generally accepted accounting principles, and these records shall be maintained for at least six years. The Commission shall have the ability to obtain any financial records upon request. Furthermore, upon request by the Commission, the Institute Executive Director shall appear in person at public meetings of the Commission to answer questions on the annual report. In addition, the Executive Director must prepare a proposed annual budget for the upcoming year and submit it to the Commission for approval, pursuant to the protocols set forth herein, and be prepared, if requested, to appear in person at public meetings to answer questions on it. 3.6. Biennial External Performance Review Several parties commented that the CICS should be subject to a periodic external performance review, with most suggesting a biennial review period. We support this recommendation and therefore require that an external evaluator conduct a comprehensive biennial performance review. 
We require that every two years, beginning in Year 2 (e.g., Years 2, 4, 6, 8 and 10), an external evaluator such as CCST38 perform a comprehensive performance review. The biennial performance review must be submitted by the Institute's Executive Director to the Governing Board and then posted on the Institute's public website and submitted to the Commission's Executive Director for Commission approval. The Institute Executive Director must make the external annual financial audit report and detailed programmatic information available to the external performance evaluator. The performance review will include an overall assessment of the Institute's effectiveness in reaching the long-term and short-term strategic plans approved by the Governing Board as well as an assessment of meeting the goals outlined in this decision. In opening comments, CSU offered several examples of such metrics, such as "number of students educated, number of publications, number of dissemination activities (e.g., presentations given, websites accessed), response time to stakeholder requests, patents filed, and new products transferred to the commercial market." CSU also recommended that performance metrics include information on funding leveraged by recipient institutions.39 UC's opening comments referenced a National Academy of Sciences (NAS) report on performance metrics that may provide other useful indicators. The NAS report, "Thinking Strategically: The Appropriate Use of Metrics for the Climate Change Science Program," provides a detailed discussion of the possible metrics for use in Government R&D programs like this. The generic pool of metrics for science and technology includes Process Metrics like cycle time, Input Metrics like expenditures by program or time frame, and Output Metrics like the number of publications issued or patents filed.40 This report is an excellent starting point, but while we agree that specific metrics will be essential to providing a thorough performance assessment, we decline to adopt a specific set of metrics in this decision. In consultation with CCST and other stakeholders, the Institute Executive Director and the Governing Board shall determine which exact metrics should be included. CSU's and other parties' recommendations should be given serious consideration. The performance review shall be presented to the Governing Board and, as with the annual reviews and audits, be delivered to the Commission's Executive Director for placement on a Commission agenda for approval. Parties all expressed interest in the disposition of IP rights, or revenues generated therefrom, arising from the proposed work of the Institute. Consumer groups, utilities and, to some extent, environmental groups indicated that the benefits from patents or other intellectual property should flow directly to ratepayers in the form of royalties. PG&E requests that a "clear path" provide benefits for electric and gas utility customers from their investment in the Institute's programs, suggesting incorporation of "benefit-sharing" mechanisms that provide free access to and licensing of technologies, information and research results generated by the Institute, as well as royalties in the revenues and value generated by patents and licenses granted by the Institute to third parties.41 Of primary concern in this matter is the effect of the federal Bayh-Dole Act, officially titled the University and Small Business Patent Procedures Act ("Bayh-Dole"). [35 U.S.C. § 200-212.] 
The academic and research institutions strongly recommend that the practices of the Institute be fully compatible with the provisions of the federal Bayh-Dole Act because "failure to comply with the Bayh-Dole Act would assure that CICS funds could not be used to leverage any federal funding and would thus significantly reduce the effectiveness of the Institute."42 CCST, for example, recommends "that to the fullest extent possible, the state's IP policies reflect the federal Bayh-Dole Act, and that royalty income earned by universities from profitable technologies ... be reinvested in ongoing research."43 USC urges that technology transfer "be a decentralized activity assumed by each participating institution to accelerate the impact of CICS' research."44 CSU argues that the benefits of the Institute will be largely non-financial and suggests that Bayh-Dole be used as the basis for any policies related to revenue sharing from profitable technologies.45 DRA suggests that it may be possible to structure a sharing mechanism that both ensures ratepayers a return on their investment and addresses the universities' concerns regarding consistency with Bayh-Dole. The California Institute for Regenerative Medicine specifically provides for revenue sharing in its governing regulations, which require grantee organizations to pay the State 25 percent of net revenues above a threshold amount "unless such action violates any federal law." PIER's standard agreement with UC requires royalty payments of 10% of net revenues to the CEC.46 SDG&E and SoCalGas offered joint comments suggesting that a secondary aim of the Board "should be to create additional incentives for research institutions to competently and efficiently patent inventions by introducing the potential for the Board to confiscate ownership" of an unpatented invention and to retain "march-in rights" to prevent abuse of monopoly power by patent holders benefiting from CICS funded research.47 Finally, they argue that "[s]ince United States IP law does not provide for an automated devolution of IP profits or licensing by virtue of providing funding contributions, the Board ought to be granted a non-exclusive license" for inventions coming out of the CICS program.48 Caltech expresses the concern, echoed in written comments and during the December workshop, that "[t]he addition of a new layer of regulation on this process [the Bayh-Dole Act] would create significant, sometimes insurmountable, disincentives for the robust research partnerships that redound so greatly to California's benefit at present."49 UC's presentation at the workshop indicated that the financial benefits from any inventions developed as a result of Institute grants were likely minimal and would be far overshadowed by more qualitative economic benefits, while also pointing out the potential difficulty in then qualifying for federal funds under Bayh-Dole.50 In general, the UC presentation made a strong case for complying with Bayh-Dole. Stanford's presentation at the workshop also supported the UC proposal, particularly in the context of indirect costs and accounting procedures used for federal funding. We recognize that Bayh-Dole's public purpose is generally consistent with the mission of the CICS. Furthermore, it appears that there is sufficient flexibility around the elements of Bayh-Dole that the programmatic objectives of CICS can be fully met without being at cross-purposes. 
It would be imprudent to discourage participation by other universities and researchers by prematurely restricting the open framework established in Bayh-Dole. We are convinced that leveraging federal funds is crucial to the success of the Institute and California's ability to meet the State policy goals established in the EAP and AB 32. Nonetheless, it will be necessary, when bringing in federal funds, to create grant agreements that are in the interest of California and its ratepayers. One possible approach to the question of revenue sharing might be to require that grantees reinvest a portion of their net licensing revenues in research related to climate solutions, though other solutions are possible as well. It is too early to tell what form such agreements may take. Accordingly, we require that the Governing Board establish a Technology Transfer Subcommittee (TTS) responsible for (1) reviewing the existing policies and practices pertaining to IP, inventions, and technology transfer of the hub's host institution or entity, (2) identifying any barriers to technology transfer the host institution's policies present and bringing them to the attention of the Executive Committee, (3) if necessary, developing IP and technology transfer policies and protocols specific to the Institute, in consultation with stakeholders, (4) advising the Institute and Executive Director regarding IP and technology transfer matters, and (5) reviewing all proposed agreements for additional non-ratepayer funding for the purpose of identifying potential technology transfer issues. Because these are complex issues, requiring specialized knowledge and experience, the TTS will be expected to establish a means of seeking input from professionals with relevant expertise. And, in order to ensure that ratepayers receive a benefit from this IP and technology transfer, we direct the TTS to require that at least 10% of net revenues revert to ratepayers, unless such an action is violative of existing laws. Prior to the establishment of IP and technology transfer policies and protocols specific to the Institute, however, all grant agreements shall be consistent with the framework established by Bayh-Dole. Once IP and technology transfer policies and protocols specific to the Institute are set up, they are to be submitted to the Commission's Executive Director for placement on a Commission agenda for Commission approval. The proposed decision of President Peevey in this matter was mailed to the parties in accordance with Section 311 of the Public Utilities Code and comments were allowed under Rule 14.3 of the Commission's Rules of Practice and Procedure. Comments were filed on March 3, 2008 and reply comments were filed on March 10, 2008. Michael R. Peevey is the assigned Commissioner and Carol A. Brown is the assigned Administrative Law Judge in this proceeding. 1. 
The mission of the CICS is: · To administer grants to facilitate mission-oriented, applied and directed research that results in practical technological solutions and supports development of policies to reduce GHG emissions or otherwise mitigate the impacts of climate change in California; · To speed the transfer, deployment, and commercialization of technologies that have the potential to reduce GHG emissions or otherwise mitigate the impacts of climate change in California; · To facilitate coordination and cooperation among relevant institutions, including private, state, and federal entities, in order to most efficiently achieve mission-oriented, applied and directed research. 2. It is necessary for the CICS to first develop a Strategic Plan as described in this decision. 3. The Strategic Research Committee will undertake the following tasks to develop the Strategic Plan: (a) Conduct an inventory of current publicly and privately funded research efforts to meet the requirements of AB 32 to ensure that the Institute does not duplicate other agency efforts; (b) Identify areas of technological innovation not being developed that will bring about the most promising options for reducing GHG emissions; (c) Identify the uncharted R&D areas that will bring about the highest ratepayer benefits and develop a ratepayer benefits index to inform the grant process. 4. The Strategic Plan will be the framework from which the Institute will formulate its budget, long- and short-term goals and grant administration process. It will be updated annually. 5. The Institute will reduce GHG emissions within the state both by transferring technology for cleaner energy and improved EE that has already been developed and by formulating new commercially viable technology. 6. Stabilizing GHG emissions will require an economic investment in this Institute on the scale established in this decision. 7. The mission of the CICS is consistent with the purpose and findings contained in AB 32 wherein the Legislature found that "global warming poses a serious threat to the economic well-being, public health, natural resources, and the environment of California," and with SB 1368 wherein the Legislature found that California must reduce its exposure to the costs associated with future federal regulations of [GHG] emissions. 8. We find that it is appropriate and necessary to direct ratepayer funding for the establishment of CICS and the activities described in this decision. 9. We find it necessary and reasonable given ratepayer funding of the Institute that the Institute be accountable to the Commission and the ratepayers. The Commission shall approve the following: non ex officio appointments to the Governing Board; appointments to the Executive Committee; the Strategic Plan; the annual proposed budget; the annual report that includes the external financial audit; the biennial external performance review; and the IP and technology transfer policies and protocols. 10. A ratepayer benefits index will be an integral part of the Strategic Plan and will rank proposed projects on a continuum, from a high ratepayer benefit to low, or no ratepayer benefit, depending on cost-effectiveness, amount of GHG emission reductions, and whether the results are in the energy sector or another field. The SRC will develop this index. 11. The ratepayer benefit index will be included in the grant RFA, must be referenced in individual grant applications and will be employed as a selection factor in the choice of grants to receive CICS funding. 
Only proposals with articulated ratepayer benefits can be considered for CICS funding. 12. We find that the proposed budget of $60 million a year over 10 years is appropriate and reasonable for the CICS investment, especially if it is leveraged with additional funds from private and public sources. 13. We find that in the absence of statewide legislation authorizing a tax to fund the Institute, it is appropriate to use ratepayer funds. 14. Energy use is a logical and equitable means of apportioning the costs of CICS and allocating the surcharge on an equal cents per therm or kWh basis among all CPUC jurisdictional California electric and gas utilities is fair and reasonable. However, to avoid any duplication, gas-fired electricity generators are explicitly exempt from assessment for gas CICS costs for gas purchases. 15. Residential ratepayers covered by the AB 1X rate freeze and/or eligible for CARE may be exempt from paying this utility surcharge. 16. We find it reasonable to specify that administrative costs, including the development of the Strategic Plan, and grant administration should be kept to a minimum, although we anticipate that there could be higher up-front administrative costs that must be incurred before work in other areas can begin. We limit administrative costs to a maximum of 10% of the yearly total funding for the Institute. In the competitive process for the hub site, a critical selection factor will be how the applicant proposes developing the Strategic Plan and managing and controlling the administrative costs associated with operating the hub. 17. Mission-oriented applied and directed technological R&D as facilitated by the grant administration process is the primary purpose of the Institute and we expect that it will require a minimum of 85% of the CICS budget. 18. The Workforce Transition Subcommittee will study whether there is a need to support the energy sector's transition to a carbon-constrained future by anticipating and preparing for the resultant changes through workforce development and report back to the Commission on the study within six months. If the study supports having the Institute fund grants for the emerging workforce training, the Commission can consider, and approve if appropriate, an appropriate percentage allocation of Institute funds for that purpose. The Commission must act on the report within three months of its receipt. 19. We find it reasonable to allow the Governing Board and the Institute Executive Director to exercise some discretion in the percentage allocations between the administrative and R&D budget, as long as at a minimum 85% of the Institute's budget is allocated strictly to the R&D function. Any unspent funds from any yearly budget are to be rolled-over to the next budget year of the Institute. Any unspent funds remaining at the end of the 10th year are to be returned to the ratepayers, unless the Commission acts to continue ratepayer funding of the Institute. 20. We do not find it reasonable to allow the CICS to spend or allocate the ratepayer funds authorized in this decision for the purchase of research equipment or information infrastructure for the central hub of the Institute beyond the 10% allotted for program administration. Grant recipients may spend grant monies on equipment if the need for the equipment was identified in the grant application. 21. 
We find it reasonable to establish that the Institute will have a Governing Board with an Executive Committee, an Institute Executive Director, a Managing Director, staff, a Strategic Research Committee (SRC), and subcommittees. 22. It is reasonable for the CICS Governing Board to select the geographical location of the Institute's headquarters, or hub, in California, through a competitive solicitation. The Governing Board is to issue a Request for Proposals to which all non-profit California-based entities, including but not limited to public and private universities, may respond. A peer review committee will rank the proposals and present the rankings to the Governing Board for selection. 23. Although it is our intent for the Institute to have a presence in both northern and southern California, we will leave it to the discretion of the Governing Board to determine how to best ensure that, once the physical location of the hub is determined. 24. We find that the CICS would benefit from a broad-based Governing Board as set forth in the decision and in Attachment C. No single organization or interest may hold a majority of seats on the Governing Board. 25. The Governing Board shall be co-chaired by the President of the Commission and the President of UC, or their respective designees. Other specifics relating to the Governing Board, including its duties, are set forth in the Charter, Attachment A. 26. We find it reasonable to require all members of the Governing Board to be subject to the conflict of interest policy, Attachment B. 27. In particular, members of the Governing Board who are affiliated with an applicant for the hub site or for a grant may not vote on that selection. 28. The Governing Board will conduct a national search for an Institute Executive Director who has responsibilities as set forth in the Charter, Attachment A. 29. The SRC shall be chosen by the Governing Board and will have no more than 20 members, all residing in California, or connected with an entity with a presence in California, with subject-matter expertise in a designated field related to climate change issues. The duties and responsibilities of the SRC are set forth in the Charter, Attachment A. 30. It will be the responsibility of the SRC to develop a Strategic Plan from which the short-term and long-term goals for the Institute will follow. The SRC is to undertake the following tasks as part of developing the Strategic Plan: conduct an inventory of current publicly and privately funded research efforts to meet the requirements of AB 32; identify areas of technological innovation not being developed that will bring about the most promising options for reducing GHG emissions; identify which R&D areas have the potential for the greatest ratepayer benefits and develop a ratepayer benefit index; utilize the resources that the hub provides to execute the above functions; and identify and, where appropriate, prioritize opportunities that have the potential to benefit and/or engage members of California's disadvantaged communities. 31. The purpose of the research inventory is to avoid redundancy and overlap with existing programs and to utilize the efforts undertaken by CARB, the ETAAC and PIER. 32. The SRC will assist the Institute's staff in developing and administering the grant process. Utilizing the Strategic Plan, the SRC is to develop targeted RFAs for the short-term and long-term research goals consistent with the Plan. 33. 
Once the Institute issues grant RFAs, it must ensure a competitive process for the review and awarding of grants. The awarding of grants shall be consistent with the policy set forth in this decision. 34. We find it reasonable to direct the Institute staff, in consultation with the Governing Board, to develop a peer review process through which all grant applications will be reviewed prior to being approved for funding. The peer review process shall be consistent with the requirements set forth in this decision. 35. The Governing Board shall oversee the Institute. 36. The Executive Director's duties and responsibilities are set forth in the Charter but include the responsibility to cause to be prepared a biennial comprehensive performance review by an outside source, and we adopt the recommendation that CCST is qualified to do such a review. This performance review should include an overall assessment of the Institute's effectiveness in achieving the Strategic Plan and reaching the long-term and short-term goals. CCST is to develop specific performance metrics to use to evaluate the success of the Institute. 37. The Executive Director is also to cause to be prepared an annual external financial audit. 38. The Executive Director is to prepare, in conjunction with Institute staff, an annual report. This report is to be submitted to the Governing Board within 90 days of the close of the fiscal year, and then to the Commission for approval. The annual report is to describe the activities of the Institute during the course of the year, including the RFAs issued, grant applications received, grants awarded, conferences organized, private and public funds solicited and obtained, and the accomplishments achieved by the Institute and its grantees. 39. The Executive Director is to prepare a proposed budget for each fiscal year of the Institute and submit it to the Commission for approval. 40. All reports submitted to the Commission for approval are to be posted on the Institute's website simultaneously with their submission to the Commission. The Executive Director is to appear before a public Commission meeting to answer questions on any Institute item before the Commission. 41. The annual report is to include the external financial audit that presents a financial summary of expenditures and funds received. The CICS is to maintain detailed financial records under generally accepted accounting principles and these records shall be maintained for at least six years. These records are to be made available to the Commission upon request. 42. It is reasonable to require that the Governing Board establish a TTS responsible for taking specific steps outlined in the decision to establish IP and technology transfer policies and protocols specific t
http://docs.cpuc.ca.gov/PUBLISHED/AGENDA_DECISION/81119.htm
2008-05-11T18:32:27
crawl-001
crawl-001-007
[]
docs.cpuc.ca.gov
California Public Utilities Commission 505 Van Ness Ave., San Francisco ________________________________________________________________________ FOR IMMEDIATE RELEASE PRESS RELEASE Contact: Terrie Prosper, 415.703.1366, [email protected] Docket #: R.07-09-008 SAN FRANCISCO, April 10, 2008 - The California Public Utilities Commission (CPUC) today created the California Institute for Climate Solutions (CICS), taking a bold and innovative approach to expanding California's leadership on this most pressing of environmental issues. The mission of the CICS is based on these essential pillars: · To facilitate mission-oriented, applied and directed research that results in practical technological solutions and supports development of policies to reduce greenhouse gas emissions in the electric and natural gas sectors, or otherwise mitigates the impacts of climate change in California. · To speed the transfer, deployment, and commercialization of technologies that have the highest potential to reduce greenhouse gas emissions in the electric and natural gas sectors. "California leads the nation in aggressively battling global warming with our policies to reduce greenhouse gases and our ambitious energy efficiency and renewable energy goals," said Governor Schwarzenegger. "I applaud the CPUC for taking another important step by creating the California Institute for Climate Solutions, which will bring together the state's preeminent colleges, universities, and laboratories to fight climate change." Commented CPUC President Michael R. Peevey, "Today we have embarked on another groundbreaking path to find solutions to the most pressing problem of our time. Innovation - technological and otherwise - is the key to alleviating the adverse consequences of climate change. The CICS will allow us to devise and deploy the most cost-effective solutions by mobilizing our financial and human capital." The work of the CICS will be directed by a Strategic Plan that will identify potential areas of research, maximize consumer benefit, and minimize unnecessary redundancy. The Strategic Plan will identify those areas of research and technological innovation that are most likely to achieve the greatest greenhouse gas reductions in the energy sector at the lowest cost. The CICS will have a Governing Board that will be responsible for ensuring that it fulfills its mission. In order to retain CPUC oversight, the Governing Board will be co-chaired by the CPUC President and the University of California President, with seats reserved for the State Senate and Assembly, and the Director of the CPUC's Division of Ratepayer Advocates. Other members will be drawn from other state agencies, universities, utilities, private firms, underserved communities, and consumer/environmental advocacy groups. A Strategic Research Committee chosen by the Governing Board will be responsible for three main tasks: developing a Strategic Plan by March 13, 2009, and updating it on an annual basis; assisting the CICS officers in developing short-term and long-term strategic plans; and reviewing grant proposals recommended by a peer review committee. The Governing Board will have the power to establish any subcommittees necessary to perform its duties and responsibilities. 
At a minimum the following subcommittees will be formed: a Technology Transfer Subcommittee to establish protocols for CICS IP rights and tech transfer policies; a Conflicts of Interest Subcommittee to develop and maintain conflict of interest protocols for CICS as a whole; and a Workforce Transition Subcommittee to study ways to support the energy sector's transition to a carbon-constrained future through anticipating and preparing for changes in workforce needs. If the study supports having the CICS fund grants for workforce training, the CPUC may allocate appropriate funds. The funding for the CICS, $60 million per year for 10 years, is an investment in California's future and will directly benefit ratepayers, the CPUC determined. The CPUC has charged the CICS Executive Director with obtaining 100 percent matching funds over 10 years in order to maximize ratepayer benefits. The CPUC will maintain extensive continuing oversight over the CICS and will require two external audits - a biennial performance review and an annual financial audit. The mission of the CICS is consistent with the purpose and findings of Assembly Bill 32, The Global Warming Solutions Act of 2006, and Senate Bill 1368, regulating emissions of greenhouse gas from electric utilities. The proposal voted on by the CPUC is available at. For more information on the CPUC, please visit. ### Statement from CPUC President Michael R. Peevey, presented at the April 10, 2008, CPUC Meeting · As many of you have heard me say, the global climate crisis is the defining environmental challenge of our time. In the words of Dr. Pachauri, Chair of the IPCC, "We have a very short window for turning around the trend we have in rising greenhouse gas emissions. We don't have the luxury of time." I believe that history will judge us on how we face up to this test. This state, historically, has been an environmental leader. · With this item, it is our turn, again, to take bold and immediate action. · Innovation-technological and otherwise-is the key to alleviating the adverse consequences of climate change. o To devise and deploy the most effective and lowest cost solutions we must fully mobilize our financial and human capital. o Our state's great public and private universities and the California based national labs hold vast stores of intellectual wealth. o We must focus this resource on developing solutions to the climate crisis. o And we must do it in a way that yields solutions that are truly used and useful. · With this in mind, I asked UC last year to prepare a proposal for a California Institute for Climate Solutions. o The original proposal was circulated with the order that opened this proceeding. o The concept and plan for the Institute have been greatly refined in response to the questions and comments of stakeholders, PUC staff, other agencies and my fellow commissioners. · The decision before us today is the culmination of that process. o It establishes the California Institute for Climate Solutions and directs it to do the following: _ Administer grants for applied research to develop new technologies and other practical solutions to reduce GHG emissions from the electricity and natural gas sectors or help with their adaptation to inevitable climate change. _ Promote technology transfer and speed the commercialization of these technologies. _ Explore the need to couple a workforce development function with the Institute's core mission of applied research and development. 
o The decision dedicates to this mission $60 million in ratepayer funding each year for ten years, a total of $600 million. There is also a requirement that the Institute secure an equal amount in matching funds over this period. o The decision establishes the composition of the Governing Board, rules for governance of the Institute and procedures for ongoing oversight by this Commission. o And it requires that the host institution for the hub be selected by the Governing Board via a competitive, peer-reviewed process. · Some have asked: Why should utility rate-payers alone pay for the institute? o The short answer is that they shouldn't- ratepayer financing should serve as seed money to leverage other public and private sources of funding. o Broad-based taxpayer financing would certainly be preferable. o But we cannot wait for the Legislature to allocate funds any more than the US should defer decisive action on climate change until China and India take action. · Some have said the scope of the Institute's mission should be limited to topics that will directly benefit electric and gas ratepayers. Others have said that the scope should be broad, encompassing all aspects of the climate challenge facing our state. o The fact is that the climate challenge breaks down the conventional inter-sector boundaries. o We have scoped the Institute's mission narrowly in this decision, focusing it on the electricity and natural gas industries. o We also require the development of a Ratepayer Benefit Index and its use in evaluating grant applications. o I would prefer to see the mission scoped more broadly as it is clear that the lowest cost GHG reduction opportunities are not all within the electric and gas sectors. Electric and gas customers can benefit from identifying and exploiting opportunities in other sectors of the economy and in locales beyond California. o There is no bright line to be drawn here. o I hope that we will be able to broaden the Institute's mission as we attract funding from other sources beyond utility ratepayers. · Some of you have asked, why the hurry? Former Vice President Al Gore, in his Nobel Peace Prize Acceptance speech last year, underscored the need for bold and timely action: "These are the last few years of decision, but they can be the first years of a bright and hopeful future if we do what we must. `The way ahead is difficult. The outer boundary of what we currently believe is feasible is still far short of what we actually must do. "That is just another way of saying that we have to expand the boundaries of what is possible. In the words of the Spanish poet, Antonio Machado, "Pathwalker, there is no path. You must make the path as you walk." · Today we have an opportunity to blaze one of many new paths to solve the climate challenge. I hope we can all join in this endeavor. · Before I move the item I would like to offer my thanks to the people who have worked to develop the concept for the Institute and to craft today's decision. o ALJ Carol Brown o Sach Constantine and Scott Murtishaw of Energy Division o My Chief of Staff, Nancy Ryan o And My Legal Advisor, Jack Stoddard. o I would also like to offer special thanks to Commissioner Chong for her yeoman service in shepherding this decision through the final stages of edits. o And, I particularly want to thank our Governor, who has been supportive from the start.
http://docs.cpuc.ca.gov/PUBLISHED/NEWS_RELEASE/81168.htm
2008-05-11T18:32:36
crawl-001
crawl-001-007
[]
docs.cpuc.ca.gov
Welcome to the new Cacti Documentation site! The Cacti Manual and the Cacti Howto are the most complete areas of the site right now. The API reference is being worked on, and the Plugins area may disappear. It was decided by the Cacti Group that user accounts from the forum would not be transferred to the documentation system. If you want to leave comments, use the contact pages, or see 'not ready for prime time' documentation, you will have to sign up for a separate account for the documentation system. The documentation system is based on Drupal, and we should be migrating to 5.x soon. This will mean that the API reference will disappear, but it will be replaced by a full version of Doxygen for all branches of the Cacti Subversion Repository. Cacti version 0.8.6j has been released to address the security vulnerabilities that were discovered in Cacti's PHP-based poller. All users are urged to upgrade to this version or apply the security patches for either 0.8.6h or 0.8.6i. See the downloads page or the release notes for more information.
http://docs.cacti.net/
2008-05-11T21:00:35
crawl-001
crawl-001-007
[]
docs.cacti.net
{"_id":"561af4a878436c19009e87d"},"githubsync":"","__v":10,"},"project":"561ae15363ef571900ca68d3","user":"561ae13963ef571900ca68d2","updates":[],"next":{"pages":[],"description":""},"createdAt":"2015-10-11T23:45:44.909Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":5,"body":"Some developers wish to configure their own backend to host and serve builds to mobile clients. This is an advanced feature; most developers will choose to use AppHub hosting and the AppHub developer dashboard.\n\nHere we provide a basic outline for the endpoints that must be implemented by a build-hosting server, as well as a minimal `Node.js` implementation.\n\n---\n\n## Server Endpoints\n\nFrom the AppHub client, configure the `rootURL` to point to your server at the beginning of the `applicationDidFinishLaunching:withOptions` in `AppDelegate.m`:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"[AppHub setRootURL::::at:::\\\"\\\"];\",\n \"language\": \"objectivec\"\n }\n ]\n}\n[/block]\nThe server must respond to a `GET` request to `/projects/:projectId/build`.\n\nPossible query parameters are:\n- `app_version`: The device's native app version.\n- `debug`: `1` if the device is in debug mode, `0` otherwise.\n- `device_uid`: The device's unique identifier.\n- `sdk_version`: The device's version of the AppHub client SDK.\n\nThe response must be a JSON one of the following two forms:\n\n### No Build Available\n\nIf there is no new build available (the client should use the bundled JavaScript), respond with a `NO-BUILD` response:\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"{\\n\\tstatus: 'success',\\n data: {\\n type: 'NO-BUILD',\\n }\\n}\",\n \"language\": \"json\",\n \"name\": \"No Build Available Response\"\n }\n ]\n}\n[/block]\n### Build Available\n\nIf there a new build available, the server responds with a `GET-BUILD` response. All fields are required (though some might not be used by the client):\n[block:code]\n{\n \"codes\": [\n {\n \"code\": \"{\\n\\tstatus: 'success',\\n data: {\\n \\ttype: 'GET-BUILD',\\n \\n // The project id as configured on the client. Must match exactly \\n // the value passed to [AppHub setApplicationID:@\\\"\\\"].\\n project_uid: '',\\n\\n // A consistent id for the build, such as the key or hash.\\n // The AppHub client uses this id to determine whether the\\n // most current build has changed.\\n uid: '',\\n\\n // Name for the build.\\n name: '',\\n\\n // Description for the build.\\n description: '',\\n\\n // URL of the zipped build. Does not necessarily have to be an s3 url.\\n s3_url: '',\\n\\n // Epoch time in milliseconds since 1970 when the build was\\n // created. This is only used for metadata, not to determine\\n // whether the build should be used.\\n created: 0,\\n\\n // Native app versions for which the build is compatible.\\n // The official AppHub client only uses the values of the object, not the keys.\\n app_versions: {\\n '1.0': '1.0',\\n }\\n\\t}\\n}\",\n \"language\": \"json\",\n \"name\": \"Build Available Response\"\n }\n ]\n}\n[/block]\n---\n\n## Minimal Server Implementation\n\nSee []() for a reference implementation in Node.\n\n---\n\n## Creating New Builds\n\nUse the [AppHub CLI](doc:apphub-cli) to create new builds of your app. You can use s3, or any other static hosting server to host the builds. 
We recommend using a CDN to improve performance.","excerpt":"Host AppHub on your own servers.","slug":"self-hosting","type":"basic","title":"Hosting an AppHub Server"} Hosting an AppHub Server Host AppHub on your own servers.
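The reference implementation link above was not preserved in this copy. As a rough stand-in, the sketch below shows how such a server might look using Express; the `latestBuildFor` lookup, the stored build fields, and the port are placeholders for whatever storage you choose, not part of the AppHub API.

```javascript
// Minimal illustrative build server (a sketch, not AppHub's official reference
// implementation). Assumes Express; `latestBuildFor` is a hypothetical lookup
// you would back with your own database or manifest file.
const express = require('express');
const app = express();

function latestBuildFor(projectId, appVersion) {
  // Placeholder: return an object describing the newest compatible build,
  // or null when the client should fall back to its bundled JavaScript.
  return null;
}

app.get('/projects/:projectId/build', (req, res) => {
  const build = latestBuildFor(req.params.projectId, req.query.app_version);

  if (!build) {
    return res.json({ status: 'success', data: { type: 'NO-BUILD' } });
  }

  res.json({
    status: 'success',
    data: {
      type: 'GET-BUILD',
      project_uid: req.params.projectId, // must match the client's application id
      uid: build.uid,                    // consistent id, e.g. a hash of the bundle
      name: build.name,
      description: build.description,
      s3_url: build.url,                 // any reachable static host works
      created: build.created,            // epoch milliseconds
      app_versions: build.appVersions,   // e.g. { '1.0': '1.0' }
    },
  });
});

app.listen(3000);
```

Pointing the client's `rootURL` at a server like this (as shown under Server Endpoints) is enough for it to receive the `NO-BUILD` and `GET-BUILD` responses described above.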
http://docs.apphub.io/docs/self-hosting
2019-05-19T09:12:49
CC-MAIN-2019-22
1558232254731.5
[]
docs.apphub.io
Creating Events Creating an event Events are created from the planning view. To create an event, select the 'Create Event' button at the top of the planning window or right-click anywhere in the calendar and choose 'Create event' from the contextual menu. You can also double-click at the top of any date in the calendar to create a new event starting from that date. From the Create event dialog, give the event a name and choose the start and end date. You can edit the event later, changing the duration if you choose to. In this case we'll set up an event for a winter collection campaign lasting for 10 days. The event can also contain an attachment, which is specified as a URL. You might include the URL of a web-based content creation brief or other planning information related to this event. Once you've entered all the information for an event you can choose to save it and add an edition or save and add editions later. In this case we'll just click "Save" to save the empty event. The event information screen is now displayed. The winter collection campaign event currently contains no editions. We can add editions by clicking "Create edition" from this window, by editing this event later or from the event's contextual menu in the planning view. To return to the planning view, click "Planning" at the top of the window or just use the browser back button. If we go back to the planning view, the event is now shown on the calendar. From the event's contextual menu you can view and edit the event, create an edition and delete the event. Assigning locales to an event If you have teams working on content in different languages, then one way to organise and schedule your content is to create events containing editions with content for particular locales. You can assign one or more locales to an event and filter the timeline, calendar and list view by locale, allowing you to focus on the events containing editions with content for the particular locales you're working with. Note that you can only assign locales to events and filter by locales if you have one or more locales added to your hub. Locales can be assigned to an event when it's created and from the event details window. You can edit the event to add, edit and remove the locales assigned to it. Related Pages For more information about adding editions to an event, see the Adding editions page. See the Localization page for an overview of localization, including locales.
https://docs.amplience.net/planning/creatingevents.html
2019-05-19T09:11:16
CC-MAIN-2019-22
1558232254731.5
[]
docs.amplience.net
Returns information about DB cluster snapshots. This API operation supports pagination. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. describe-db-cluster-snapshots [--db-cluster-identifier <value>] [--db-cluster-snapshot-identifier <value>] [--snapshot-type <value>] [--filters <value>] [--max-records <value>] [--marker <value>] [--include-shared | --no-include-shared] [--include-public | --no-include-public] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --db-cluster-identifier (string) The ID of the DB cluster to retrieve the list of DB cluster snapshots for. This parameter can't be used with the DBClusterSnapshotIdentifier parameter. This parameter is not case sensitive. Constraints: - If provided, must match the identifier of an existing DBCluster . --db-cluster-snapshot-identifier (string) A specific DB cluster snapshot identifier to describe. This parameter can't be used with the DBClusterIdentifier parameter. This value is stored as a lowercase string. Constraints: - If provided, must match the identifier of an existing DBClusterSnapshot . --snapshot-type (string) The type of DB cluster snapshots to be returned. You can specify one of the following values: - automated - Return all DB cluster snapshots that Amazon DocumentDB has automatically created for your AWS account. - manual - Return all DB cluster snapshots that you have manually created for your AWS account. - shared - Return all manual DB cluster snapshots that have been shared to your AWS account. - public - Return all DB cluster snapshots that have been marked as public. If you don't specify a SnapshotType value, then both automated and manual DB cluster snapshots are returned. You can include shared DB cluster snapshots with these results by setting the IncludeShared parameter to true . You can include public DB cluster snapshots with these results by setting the IncludePublic parameter to true . --filters (list) This parameter is not currently supported. Shorthand Syntax: Name=string,Values=string,string ... JSON Syntax: [ { "Name": "string", "Values": ["string", ...] } ... ] --max-records (integer) The maximum number of records to include in the response. If more records exist than the specified MaxRecords value, a pagination token (marker) is included in the response so that the remaining results can be retrieved. Default: 100 Constraints: Minimum 20, maximum 100. --marker (string) An optional pagination token provided by a previous request. If this parameter is specified, the response includes only records beyond the marker, up to the value specified by MaxRecords . --include-shared | --no-include-shared (boolean) Set to true to include shared manual DB cluster snapshots from other AWS accounts that this AWS account has been given permission to copy or restore, and otherwise false . The default is false . --include-public | --no-include-public (boolean) Set to true to include manual DB cluster snapshots that are public and can be copied or restored by any AWS account, and otherwise false . The default is false . Output: DBClusterSnapshots -> (list) Provides a list of DB cluster snapshots. (structure) Detailed information about a DB cluster snapshot. AvailabilityZones -> (list) Provides the list of Amazon EC2 Availability Zones that instances in the DB cluster snapshot can be restored in. (string) DBClusterSnapshotIdentifier -> (string)Specifies the identifier for the DB cluster snapshot. DBClusterIdentifier -> (string)Specifies the DB cluster identifier of the DB cluster that this DB cluster snapshot was created from. SnapshotCreateTime -> (timestamp)Provides the time when the snapshot was taken, in UTC. Engine -> (string)Specifies the name of the database engine. Status -> (string)Specifies the status of this DB cluster snapshot. Port -> (integer)Specifies the port that the DB cluster was listening on at the time of the snapshot. VpcId -> (string)Provides the virtual private cloud (VPC) ID that is associated with the DB cluster snapshot. ClusterCreateTime -> (timestamp)Specifies the time when the DB cluster was created, in Universal Coordinated Time (UTC). 
MasterUsername -> (string)Provides the master user name for the DB cluster snapshot. EngineVersion -> (string)Provides the version of the database engine for this DB cluster snapshot. SnapshotType -> (string)Provides the type of the DB cluster snapshot. PercentProgress -> (integer)Specifies the percentage of the estimated data that has been transferred. StorageEncrypted -> (boolean)Specifies whether the DB cluster snapshot is encrypted. KmsKeyId -> (string)If StorageEncrypted is true , the AWS KMS key identifier for the encrypted DB cluster snapshot. DBClusterSnapshotArn -> (string)The Amazon Resource Name (ARN) for the DB cluster snapshot. SourceDBClusterSnapshotArn -> (string)If the DB cluster snapshot was copied from a source DB cluster snapshot, the ARN for the source DB cluster snapshot; otherwise, a null value.
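For illustration, a request for the manual snapshots of a hypothetical cluster might look like the following (the cluster identifier is a placeholder, not taken from this page):

    aws docdb describe-db-cluster-snapshots --db-cluster-identifier sample-cluster --snapshot-type manual --max-records 20

The command returns a JSON document whose DBClusterSnapshots list contains the fields described above; if more results remain, pass the returned marker back with --marker to retrieve the next page.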
https://docs.aws.amazon.com/cli/latest/reference/docdb/describe-db-cluster-snapshots.html
2019-05-19T09:17:55
CC-MAIN-2019-22
1558232254731.5
[]
docs.aws.amazon.com
All content with label hibernate_search+hot_rod+infinispan+jboss_cache+release+s3+scala. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, deadlock, archetype, jbossas, lock_striping, nexus, guide, schema, listener, cache, amazon, grid, test, api, xsd, ehcache, maven, documentation, write_behind, 缓存, repeatable_read, hotrod, webdav, docs, batching, consistent_hash, store, jta, faq, 2lcache, as5, jsr-107, jgroups, lucene, locking, rest more » ( - hibernate_search, - hot_rod, - infinispan, - jboss_cache, - release, - s3, - scala )
https://docs.jboss.org/author/label/hibernate_search+hot_rod+infinispan+jboss_cache+release+s3+scala
2019-05-19T09:15:22
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
VirtualAllocFromApp function Reserves, commits, or changes the state of a region of pages in the virtual address space of the calling process. Memory allocated by this function is automatically initialized to zero. Syntax PVOID VirtualAllocFromApp( PVOID BaseAddress, SIZE_T Size, ULONG AllocationType, ULONG Protection ); Parameters BaseAddress The starting address of the region to allocate. If this parameter is NULL, the system determines where to allocate the region. Size The size of the region, in bytes. If the BaseAddress parameter is NULL, this value is rounded up to the next page boundary. Otherwise, the allocated pages include all pages containing one or more bytes in the range from BaseAddress to BaseAddress+Size. This means that a 2-byte range straddling a page boundary causes both pages to be included in the allocated region. AllocationType The type of memory allocation. This parameter must contain one of the memory allocation constants, such as MEM_COMMIT, MEM_RESERVE, or MEM_COMMIT | MEM_RESERVE. Protection The memory protection for the region of pages to be allocated. If the pages are being committed, you can specify one of the memory protection constants. The following constants generate an error: - PAGE_EXECUTE - PAGE_EXECUTE_READ - PAGE_EXECUTE_READWRITE - PAGE_EXECUTE_WRITECOPY Return Value If the function succeeds, the return value is the base address of the allocated region of pages. If the function fails, the return value is NULL. To get extended error information, call GetLastError. Remarks You can call VirtualAllocFromApp from Windows Store apps with just-in-time (JIT) capabilities to use JIT functionality. The app must include the codeGeneration capability in the app manifest file to use JIT capabilities. Each page has an associated page state. The VirtualAllocFromApp function can perform the following operations: - Commit a region of reserved pages - Reserve a region of free pages - Simultaneously reserve and commit a region of free pages You can use VirtualAllocFromApp to reserve a block of pages and then make additional calls to VirtualAllocFromApp to commit individual pages from the reserved block. This enables a process to reserve a range of its virtual address space without consuming physical storage until it is needed. If the BaseAddress parameter is not NULL, the function uses the BaseAddress and Size parameters to compute the region of pages to be allocated. The current state of the entire range of pages must be compatible with the type of allocation specified by the AllocationType parameter. Otherwise, the function fails and none of the pages are allocated. This compatibility requirement does not preclude committing an already committed page, as mentioned previously. VirtualAllocFromApp does not allow the creation of executable pages. The VirtualAllocFromApp. Requirements See Also Memory Management Functions
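The reserve-then-commit pattern described in the Remarks can be sketched as follows. This is an illustrative fragment, not taken from the page above: the 64-page region size, the PAGE_READWRITE protection choice, and the VirtualFree cleanup call are assumptions made for the example.

#include <windows.h>

void ReserveThenCommitExample(void)
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);                       /* page size for the commit step */

    SIZE_T regionSize = 64 * si.dwPageSize;   /* reserve 64 pages up front */

    /* Reserve address space only; no physical storage is consumed yet. */
    PVOID base = VirtualAllocFromApp(NULL, regionSize, MEM_RESERVE, PAGE_READWRITE);
    if (base == NULL) {
        return;                               /* call GetLastError() for details */
    }

    /* Commit the first page from the reserved block when it is actually needed. */
    PVOID page = VirtualAllocFromApp(base, si.dwPageSize, MEM_COMMIT, PAGE_READWRITE);
    if (page != NULL) {
        ((BYTE *)page)[0] = 1;                /* committed memory starts zeroed */
    }

    /* Release the entire region (reserved and committed pages) when finished. */
    VirtualFree(base, 0, MEM_RELEASE);
}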
https://docs.microsoft.com/en-us/windows/desktop/api/memoryapi/nf-memoryapi-virtualallocfromapp
2019-05-19T09:16:29
CC-MAIN-2019-22
1558232254731.5
[]
docs.microsoft.com
VPNs and Firewall Rules VPNs and firewall rules are handled somewhat inconsistently in pfSense. This section describes how firewall rules are handled for each of the individual VPN options. For the automatically added rules discussed here, the addition of those rules may be disabled by checking Disable all auto-added VPN rules under System > Advanced on the Firewall/NAT tab. IPsec IPsec traffic coming in to the specified WAN interface is automatically allowed as described in IPsec. Traffic encapsulated within an active IPsec connection is controlled via user-defined rules on the IPsec tab under Firewall > Rules. OpenVPN OpenVPN does not automatically add rules to WAN interfaces. The OpenVPN remote access VPN Wizard offers to optionally create rules to pass WAN traffic and traffic on the OpenVPN interface. Traffic encapsulated within an active OpenVPN connection is controlled via user-defined rules on the OpenVPN tab under Firewall > Rules. OpenVPN interfaces may also be assigned similar to other interfaces on pfSense. In such cases the OpenVPN tab firewall rules still apply, but there is a separate tab specific to the assigned VPN instance that controls traffic only for that one VPN.
https://docs.netgate.com/pfsense/en/latest/book/vpn/vpns-and-firewall-rules.html
2019-05-19T09:14:19
CC-MAIN-2019-22
1558232254731.5
[]
docs.netgate.com
All content with label amazon+gridfs+infinispan+installation. Related Labels: expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, query, deadlock, archetype, jbossas, nexus, guide, listener, cache, s3, grid, test, jcache, api, xsd, ehcache, maven, more » ( - amazon, - gridfs, - infinispan, - installation )
https://docs.jboss.org/author/label/amazon+gridfs+infinispan+installation
2019-05-19T08:59:56
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
All content with label clustering+hibernate_search+infinispan+locking+maven+partitioning. Related Labels: podcast, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, async, transaction, interactive, xaresource, build, domain, searchable, subsystem, demo, installation, scala, ispn, mod_cluster, client, migration, non-blocking, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, ejb, hotrod, snapshot, repeatable_read, webdav, docs, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, rest, hot_rod more » ( - clustering, - hibernate_search, - infinispan, - locking, - maven, - partitioning )
https://docs.jboss.org/author/label/clustering+hibernate_search+infinispan+locking+maven+partitioning
2019-05-19T10:04:09
CC-MAIN-2019-22
1558232254731.5
[]
docs.jboss.org
OpenRecurringAppointmentDialog The OpenRecurringAppointmentDialog is shown when you try to edit a recurring appointment. Figure 1. OpenRecurringAppointmentDialog It will pop up when you double click a recurring appointment. Alternatively, you can show it by using the "Edit Appointment" option in the default context menu when a recurring appointment is currently selected. The OpenRecurringAppointmentDialog is shown in the EditAppointmentDialog.EditAppointment method when trying to edit an existing appointment. If you create a custom EditAppointmentDialog and override its EditAppointment method you can replace the default OpenRecurringAppointmentDialog with a custom one if needed. OpenRecurringAppointmentDialog inherits RadSchedulerDialog and implements the IOpenRecurringAppointmentDialog interface. The IOpenRecurringAppointmentDialog interface requires implementing the following methods and properties: - DialogResult ShowDialog() - string EventName - bool EditOccurrence - string ThemeName As a derivative of RadSchedulerDialog which inherits RadForm, the ShowDialog method and the ThemeName property are already available. It is necessary to implement the EventName and EditOccurrence properties.
https://docs.telerik.com/devtools/winforms/controls/scheduler/dialogs/openrecurringappointmentdialog
2019-05-19T08:40:44
CC-MAIN-2019-22
1558232254731.5
[array(['images/scheduler-winforms-scheduler-dialogs-openrecurringappointmentdialog001.png', 'scheduler-winforms-scheduler-dialogs-openrecurringappointmentdialog 001'], dtype=object) ]
docs.telerik.com
Networking protocol for low-latency transport of content over the web. Originally started out from the SPDY protocol, now standardized as HTTP version 2. See also support for the SPDY protocol, precursor of HTTP2. Partial support in IE11 refers to being limited to Windows 10. Only supports HTTP2 over TLS (https) Partial support in Safari refers to being limited to OSX 10.11+ Only supports HTTP2 if servers support protocol negotiation via ALPN Data by caniuse.com Licensed under the Creative Commons Attribution License v4.0.
https://docs.w3cub.com/browser_support_tables/http2/
2019-05-19T09:20:56
CC-MAIN-2019-22
1558232254731.5
[]
docs.w3cub.com
Transfer CFT 3.2.2 Local Administration Guide Configuring the environment This topic describes how to configure the environment for a directory type exit. Before you submit a directory type EXIT, you must customize the following Transfer CFT objects: CFTPROT defines both the application protocol type and profile CFTEXIT describes the EXIT environment and how this EXIT is activated Each CFTEXIT object corresponds to an EXIT task. The number of EXIT tasks of all types simultaneously active is limited to a number depending on the operating system. EXIT type directory tasks are activated in memory when Transfer CFT is started and de-activated when the monitor is shut down. Defining the CFTPROT object The parameters mentioned here are the ones that are specific to this EXIT. Syntax CFTPROT ID = identifier, ... [EXITA = identifier,..,] .... [DYNAM = identifier,] ... Parameters ID = identifier Protocol identifier. [ EXITA = identifier] Directory EXIT identifier. To activate a directory type EXIT for the protocol considered, an EXIT identifier of this type should be indicated. There can only be one directory type EXIT task per protocol command. The directory type EXIT identifier can include the symbolic variable &NPART:(EXITA = (&NPART, ...) where NPART designates the remote partner network name. DYNAM = identifier Identifier of the dynamic partner in server mode. The value of this identifier corresponds to that of the model CFTPART object ID parameter. Defining the CFTEXIT object Syntax CFTEXIT ID = identifier, TYPE = ACCESS, [FORMAT = { V23 | V24 }] [LANGUAGE = {COBOL | C},] [MODE = MODE,] [PARM = string,] [PROG = {CFTEXIT | string},] [RESERV = {1024 | n}] Parameters ID = identifier Command identifier. The value of this identifier corresponds to the identifier defined in the EXITA parameter of the related CFTPROT object. [ FORMAT = V23 | V24 ] Optional parameter. Indicates the format for the communication area. V23 (Default value) V24 [ LANGUAGE = {COBOL | C}] Language in which the user program is written. The possible values are COBOL and C language. Transfer CFT uses this attribute to exchange data with the program using the EXIT via the structure best suited to the language in which it is implemented. [ PARM = string64] Free user field. [ PROG = {CFTEXIT | string512}] Name of the executable module associated with the EXIT task. This module is built from the interface provided with Transfer CFT linked to the program written by the user. In order to facilitate identification of the associated module, it is advised to name it CFTEXIA. [ RESERV = { 1024 | n}] {0 ...1024} Size of the working area reserved for the user. This area is not used by the Transfer CFT interface. You can use it to save data required for the processing of the program that you have written. This area is de-allocated when the Transfer CFT interface de-selects the file. TYPE = ACCESS EXIT type. Related Links
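As an illustration only, a minimal pair of objects for a directory type EXIT might look like the following. The identifiers (PESIT, EXITDIR, PARTDYN) are placeholders, and the other mandatory protocol parameters of CFTPROT are omitted:

CFTPROT ID = PESIT,
        EXITA = (EXITDIR),
        DYNAM = PARTDYN,
        ....

CFTEXIT ID = EXITDIR,
        TYPE = ACCESS,
        FORMAT = V24,
        LANGUAGE = C,
        PROG = CFTEXIA,
        RESERV = 1024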
https://docs.axway.com/bundle/Transfer_CFT_322_UsersGuide_LocalAdministration_allOS_en_HTML5/page/Content/Prog/Exits/Directory_exit/configuring_the_environment.htm
2019-05-19T08:33:39
CC-MAIN-2019-22
1558232254731.5
[]
docs.axway.com
Tabs in the Edit mode - Preview – displays the latest version of the page. This means that it displays even pages that are not yet published. The view mode also allows you to use built-in page validation features. Tabs in the Preview mode - Listing – shows a list of all pages under the currently selected page. You can use the Listing mode to perform multiple (batch) page operations such as deleting, publishing, or translating pages at once. You can also view pages directly on the Live site. You can use the Application list to access the Pages application. Before you start, you may want to know that there are two different kinds of pages. All the items in the content tree are pages, even the files that you upload into the content tree (such as PDF, Word, or image files). Structured pages, such as news or products, can have their own fields, such as title, main text, and teaser image, that you edit.
https://docs.kentico.com/k11/managing-website-content/working-with-pages
2019-05-19T09:53:45
CC-MAIN-2019-22
1558232254731.5
[]
docs.kentico.com
Spring Cloud Stream Application Starters provide you with predefined Spring Cloud Stream applications that you can run independently or with Spring Cloud Data Flow. You can also use the starters as a basis for creating your own applications. They include: You can find a detailed listing of all the starters and as their options in the corresponding section of this guide. You can find all available app starter repositories in this GitHub Organization. As a user of Spring Cloud Stream Application Starters you have access to two types of artifacts. Starters are libraries that contain the complete configuration of a Spring Cloud Stream application with a specific role (e.g. an HTTP source that receives HTTP POST requests and forwards the data on its output channel to downstream Spring Cloud Stream applications). Starters are not executable applications, and are intended to be included in other Spring Boot applications, along with a Binder implementation. Prebuilt applications are Spring Boot applications that include the starters and a Binder implementation. Prebuilt applications are uberjars and include minimal code required to execute standalone. For each starter, the project provides a prebuilt version including the Kafka Binder (one each for 0.9 and 0.10 versions of Kafka) and a prebuilt version including the Rabbit MQ Binder. Based on their target application type, starters can be either: You can easily identify the type and functionality of a starter based on its name. All starters are named following the convention spring-cloud-starter-stream-<type>-<functionality>. For example spring-cloud-starter-stream-source-file is a starter for a file source that polls a directory and sends file data on the output channel (read the reference documentation of the source for details). Conversely, spring-cloud-starter-stream-sink-cassandra is a starter for a Cassandra sink that writes the data that it receives on the input channel to Cassandra (read the reference documentation of the sink for details). The prebuilt applications follow a naming convention too: <functionality>-<type>-<binder>. For example, cassandra-sink-kafka-10 is a Cassandra sink using the Kafka binder that is running with Kafka version 0.10. You either get access to the artifacts produced by Spring Cloud Stream Application Starters via Maven, Docker, or building the artifacts yourself. Starters are available as Maven artifacts in the Spring repositories. You can add them as dependencies to your application, as follows: <dependency> <groupId>org.springframework.cloud.stream.app</groupId> <artifactId>spring-cloud-starter-stream-sink-cassandra</artifactId> <version>1.0.0.BUILD-SNAPSHOT</version> </dependency> From this, you can infer the coordinates for other starters found in this guide. While the version may vary, the group will always remain org.springframework.cloud.stream.app and the artifact id follows the naming convention spring-cloud-starter-stream-<type>-<functionality> described previously. Prebuilt applications are available as Maven artifacts too. It is not encouraged to use them directly as dependencies, as starters should be used instead. Following the typical Maven <group>:<artifactId>:<version> convention, they can be referenced for example as: org.springframework.cloud.stream.app:cassandra-sink-rabbit:1.0.0.BUILD-SNAPSHOT Just as with the starters, you can infer the coordinates for other prebuilt applications found in the guide. The group will be always org.springframework.cloud.stream.app. The version may vary. 
The artifact id follows the format <functionality>-<type>-<binder> previously described. You can download the executable jar artifacts from the Spring Maven repositories. The root directory of the Maven repository that hosts release versions is repo.spring.io/release/org/springframework/cloud/stream/app/. From there you can navigate to the latest release version of a specific app, for example log-sink-rabbit-1.1.1.RELEASE.jar. Use the Milestone and Snapshot repository locations for Milestone and Snapshot executable jar artifacts. The Docker versions of the applications are available in Docker Hub, at hub.docker.com/r/springcloudstream/. Naming and versioning follows the same general conventions as Maven, e.g. docker pull springcloudstream/cassandra-sink-kafka-10 will pull the latest Docker image of the Cassandra sink with the Kafka binder that is running with Kafka version 0.10. You can also build the project and generate the artifacts (including the prebuilt applications) on your own. This is useful if you want to deploy the artifacts locally or add additional features. First, you need to generate the prebuilt applications. This is done by running the application generation Maven plugin. You can do so by simply invoking the maven build with the generateApps profile and install lifecycle. mvn clean install -PgenerateApps Each of the prebuilt applications will contain: a pom.xml file with the required dependencies (starter and binder), and a main class that provides the main method of the application and imports the predefined configuration. For example, spring-cloud-starter-stream-sink-cassandra will generate cassandra-sink-rabbit, cassandra-sink-kafka-09 and cassandra-sink-kafka-10 as completely functional applications.
In this section we will show you how to create custom applications that can be part of your solution, along with Spring Cloud Stream application starters. You have the following options: If you want to add your own custom applications to your solution, you can simply create a new Spring Cloud Stream app project with the binder of your choice and run it the same way as the applications provided by Spring Cloud Stream Application Starters, independently or via Spring Cloud Data Flow. The process is described in the Getting Started Guide of Spring Cloud Stream. An alternative way to bootstrap your application is to go to the Spring Initializr and choose a Spring Cloud Stream Binder of your choice. This way you already have the necessary infrastructure ready to go and mainly focus on the specifics of the application. The following requirements need to be followed when you go with this option: inputfor sources - the simplest way to do so is by using the predefined interface org.spring.cloud.stream.messaging.Source; outputfor sinks - the simplest way to do so is by using the predefined interface org.spring.cloud.stream.messaging.Sink; inputand an outbound channel named outputfor processors - the simplest way to do so is by using the predefined interface org.spring.cloud.stream.messaging.Processor. You can also reuse the starters provided by Spring Cloud Stream Application Starters to create custom components, enriching the behavior of the application. For example, you can add a Spring Security layer to your HTTP source, add additional configurations to the ObjectMapper used for JSON transformation wherever that happens, or change the JDBC driver or Hadoop distribution that the application is using. In order to do this, you should set up your project following a process similar to customizing a binder. In fact, customizing the binder is the simplest form of creating a custom component. As a reminder, this involves: After doing so, you can simply add the additional configuration for the extra features of your application. If you’re looking to patch the pre-built applications to accommodate addition of new dependencies, you can use the following example as the reference. Let’s review the steps to add mysql driver to jdbc-sink application. mysqljava-driver dependency <dependencies> <dependency> <groupId>mysql</groupId> <artifactId>mysql-connector-java</artifactId> <version>5.1.37</version> </dependency> <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-stream-binder-rabbit</artifactId> </dependency> <dependency> <groupId>org.springframework.cloud.stream.app</groupId> <artifactId>spring-cloud-starter-stream-sink-jdbc</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-test</artifactId> <scope>test</scope> </dependency> </dependencies> jdbcsink, it is: @Import(org.springframework.cloud.stream.app.jdbc.sink.JdbcSinkConfiguration.class). You can find the configuration class for other applications in their respective repositories. 
@SpringBootApplication @Import(org.springframework.cloud.stream.app.jdbc.sink.JdbcSinkConfiguration.class) public class DemoApplication { public static void main(String[] args) { SpringApplication.run(DemoApplication.class, args); } } jdbc-sinkapplication now includes mysqldriver in it In this section, we will explain how to develop a custom source/sink/processor application and then generate maven and docker artifacts for it with the necessary middleware bindings using the existing tooling provided by the spring cloud stream app starter infrastructure. For explanation purposes, we will assume that we are creating a new source application for a technology named foobar. app-starters-build Please follow the instructions above for designing a proper Spring Cloud Stream Source. You may also look into the existing starters for how to structure a new one. The default naming for the main @Configuration class is FoobarSourceConfiguration and the default package for this @Configuration is org.springfamework.cloud.stream.app.foobar.source. If you have a different class/package name, see below for overriding that in the app generator. The technology/functionality name for which you create a starter can be a hyphenated stream of strings such as in scriptable-transform which is a processor type in the module spring-cloud-starter-stream-processor-scriptable-transform. The starters in spring-cloud-stream-app-starters-build) buildsection. This will add the necessary plugin configuration for app generation as well as generating proper documentation metadata. Please ensure that your root pom inherits app-starters-build as the base configuration for the plugins is specified there. .stream.app</groupId> <artifactId>foobar-app-dependencies</artifactId> <version>${project.version}</version> </bom> <generatedApps> <foobar-source/> </generatedApps> </configuration> </plugin> </plugins> </build> More information about the maven plugin used above to generate the apps can be found here: github.com/spring-cloud/spring-cloud-stream-app-maven-plugin If you did not follow the default convention expected by the plugin for where it is looking for the main configuration class, which is org.springfamework.cloud.stream.app.foobar.source.FoobarSourceConfiguration, you can override that in the configuration for the plugin. For example, if your main configuration class is foo.bar.SpecialFooBarConfiguration.class, this is how you can tell the plugin to override the default. <foobar-source> <autoConfigClass>foo.bar.SpecialFooBarConfiguration.class</autoConfigClass> </foobar-source> foobar-app-dependencies). This is the bom (bill of material) for this project. It is advised that this bom is inherited from spring-cloud-dependencies-parent. Please see other starter repositories for guidelines. <dependencyManagement> ... ... <dependency> <groupId>org.springframework.cloud.stream.app</groupId> <artifactId>spring-cloud-starter-stream-source-foobar</artifactId> <version>1.0.0.BUILD-SNAPSHOT</version> </dependency> ... ... ./mvnw clean install -PgenerateApps This will generate the binder based foobar source apps in a directory named apps at the root of the repository. If you want to change the location where the apps are generated, for instance `/tmp/scs-apps, you can do it in the configuration section of the plugin. <configuration> ... <generatedProjectHome>/tmp/scs-apps</generatedProjectHome> ... 
</configuration By default, we generate apps for both Kafka 09/10 and Rabbitmq binders - spring-cloud-stream-binder-kafka and spring-cloud-stream-binder-rabbit. Say, if you have a custom binder you created for some middleware (say JMS), which you need to generate apps for foobar source, you can add that binder to the binders list in the configuration section as in the following. <binders> <jms /> </binders> Please note that this would only work, as long as there is a binder with the maven coordinates of org.springframework.cloud.stream as group id and spring-cloud-stream-binder-jms as artifact id. This artifact needs to be specified in the BOM above and available through a maven repository as well.-source> <extraRepositories> <private-internal-nexus /> </extraRepositories> </foobar-source> appsat the root of the repository by default, unless you changed it elsewhere as described above). Here you will see foobar-source-kafka-09, foobar-source-kafka-10 and foobar-source-rabbit. If you added more binders as described above, you would see that app as well here - for example foobar-source-jms. You can import these apps directly into your IDE of choice if you further want to do any customizations on them. Each of them is a self contained spring boot application project. For the generated apps, the parent is spring-boot-starter-parent as required by the underlying Spring Initializr library. You can cd into these custom foobar-source directories and do the following to build the apps: cd foo-source-kafka-10 mvn clean install This would install the foo-source-kafka-10 into your local maven cache (~/.m2 by default). The app generation phase adds an integration test to the app project that is making sure that all the spring components and contexts are loaded properly. However, these tests are not run by default when you do a mvn install. You can force the running of these tests by doing the following: mvn clean install -DskipTests=false One important note about running these tests in generated apps: If your application’s spring beans need to interact with some real services out there or expect some properties to be present in the context, these tests will fail unless you make those things available. An example would be a Twitter Source, where the underlying spring beans are trying to create a twitter template and will fail if it can’t find the credentials available through properties. One way to solve this and still run the generated context load tests would be to create a mock class that provides these properties or mock beans (for example, a mock twitter template) and tell the maven plugin about its existence. You can use the existing module app-starters-test-support for this purpose and add the mock class there. See the class org.springframework.cloud.stream.app.test.twitter.TwitterTestConfiguration for reference. You can create a similar class for your foobar source - FoobarTestConfiguration and add that to the plugin configuration. You only need to do this if you run into this particular issue of spring beans are not created properly in the integration test in the generated apps. <foobar-source> <extraTestConfigClass>org.springframework.cloud.stream.app.test.foobar.FoobarTestConfiguration.class</extraTestConfigClass> </foobar-source> When you do the above, this test configuration will be automatically imported into the context of your test class. Also note that, you need to regenerate the apps each time you make a configuration change in the plugin. 
targetdirectories of the respective apps and also as maven artifacts in your local maven repository. Go to the targetdirectory and run the following: java -jar foobar-source-kafa-10.jar [Ensure that you have kafka running locally when you do this] It should start the application up. mvn clean package docker:build This creates the docker image under the target/docker/springcloudstream directory. Please ensure that the Docker container is up and running and DOCKER_HOST environment variable is properly set before you try docker:build. All the generated apps from the various app repositories are uploaded to Docker Hub However, for a custom app that you build, this won’t be uploaded to docker hub under springcloudstream repository. If you think that there is a general need for this app, you should try contributing this starter as a new repository to Spring Cloud Stream App Starters. Upon review, this app then can be eventually available through the above location in docker hub. If you still need to push this to docker hub under a different repository (may be an enterprise repo that you manage for your organization) you can take the following steps. Go to the pom.xml of the generated app [ example - foo-source-kafka/pom.xml] springcloudstream. Replace with your repository name. Then do this: mvn clean package docker:build docker:push -Ddocker.username=[provide your username] -Ddocker.password=[provide password] This would upload the docker image to the docker hub in your custom repository. In the following sections, you can find a brief faq on various things that we discussed above and a few other infrastructure related topics. What is the parent for stream app starters? The parent for all app starters is app-starters-build which is coming from the core project. github.com/spring-cloud-stream-app-starters/core For example: <parent> <groupId>org.springframework.cloud.stream.app</groupId> <artifactId>app-starters-build</artifactId> <version>1.3.1.RELEASE</version> <relativePath/> </parent> app-starters-core-dependencies. We need this bom during app generation to pull down all the core dependencies. app-starters-buildartfiact. This same BOM is referenced through the maven plugin configuration for the app generation. The generated apps thus will include this bom also in their pom.xml files. What spring cloud stream artifacts does the parent artifact ( app-starters-build) include? What other artfiacts are available through the parent app-starters-build and where are they coming from? In addition to the above artifacts, the artifacts below also included in app-starters-build by default. Can you summarize all the BOM’s that SCSt app starters depend on? All SCSt app starters have access to dependencies defined in the following BOM’s and other dependencies from any other BOM’s these three boms import transitively as in the case of Spring Integration: app-starter-buildas the parent which in turn has spring-cloud-buildas parent. The above documentation states that the generated apps have spring-boot-starteras the parent. Why the mismatch? There is no mismatch per se, but a slight subtlety. As the question frames, each app starter has access to artifacts managed all the way through spring-cloud-buildat compile time. However, this is not the case for the generated apps at runtime. Generated apps are managed by boot. Their parent is spring-boot-starterthat imports spring-boot-dependenciesbom that includes a majority of the components that these apps need. 
The additional dependencies that the generated application needs are managed by including a BOM specific to each application starter. time-app-dependencies. This is an important BOM. At runtime, the generated apps get the versions used in their dependencies through a BOM that is managing the dependencies. Since all the boms that we specified above only for the helper artifacts, we need a place to manage the starters themselves. This is where the app specific BOM comes into play. In addition to this need, as it becomes clear below, there are other uses for this BOM such as dependency overrides etc. But in a nutshell, all the starter dependencies go to this BOM. For instance, take TCP repo as an example. It has a starter for source, sink, client processor etc. All these dependencies are managed through the app specific tcp-app-dependenciesbom. This bom is provided to the app generator maven plugin in addition to the core bom. This app specific bom has spring-cloud-dependencies-parentas parent. spring-cloud-stream-app-startersorganization where you can start contributing the starters and other components. How do I override Spring Integration version that is coming from spring-boot-dependencies by default? The following solution only works if the versions you want to override are available through a new Spring Integration BOM. Go to your app starter specific bom. Override the property as following: <spring-integration.version>VERSION GOES HERE</spring-integration.version> Then add the following in the dependencies management section in the BOM. <dependency> <groupId>org.springframework.integration</groupId> <artifactId>spring-integration-bom</artifactId> <version>${spring-integration.version}</version> <scope>import</scope> <type>pom</type> </dependency> How do I override spring-cloud-stream artifacts coming by default in spring-cloud-dependencies defined in core BOM? The following solution only works if the versions you want to override are available through a new Spring-Cloud-Dependencies BOM. Go to your app starter specific bom. Override the property as following: <spring-cloud-dependencies.version>VERSION GOES HERE</spring-cloud-dependencies.version> Then add the following in the dependencies management section in the BOM. <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-dependencies</artifactId> <version>${spring-cloud-dependencies.version}</version> <scope>import</scope> <type>pom</type> </dependency> What if there is no spring-cloud-dependencies BOM available that contains my versions of spring-cloud-stream, but there is a spring-cloud-stream BOM available? Go to your app starter specific BOM. Override the property as below. <spring-cloud-stream.version>VERSION GOES HERE</spring-cloud-stream.version> Then add the following in the dependencies management section in the BOM. <dependency> <groupId>org.springframework.cloud</groupId> <artifactId>spring-cloud-stream-dependencies</artifactId> <version>${spring-cloud-stream.version}</version> <scope>import</scope> <type>pom</type> </dependency> What if I want to override a single artifact that is provided through a bom? For example spring-integration-java-dsl? 
Go to your app starter BOM and add the following property with the version you want to override: <spring-integration-java-dsl.version>VERSION GOES HERE</spring-integration-java-dsl.version> Then in the dependency management section add the following: <dependency> <groupId>org.springframework.integration</groupId> <artifactId>spring-integration-java-dsl</artifactId> <version>${spring-integration-java-dsl.version}</version> </dependency> How do I override the boot version used in a particular app? When you generate the app, override the boot version as follows. ./mvnw clean install -PgenerateApps -DbootVersion=<boot version to override> For example: ./mvnw clean install -PgenerateApps -DbootVersion=2.0.0.BUILD-SNAPSHOT You can also override the boot version more permanently by overriding the following property in your starter pom. <bootVersion>2.0.0.BUILD-SNAPSHOT</bootVersion>
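Returning to the earlier option of writing your own application against the predefined Source interface, the sketch below shows what a minimal custom source could look like. It assumes the annotation-based programming model of this Spring Cloud Stream generation (@EnableBinding plus a Spring Integration inbound channel adapter) and a binder such as spring-cloud-stream-binder-rabbit on the classpath; the names and the one-second poller are illustrative.

package com.example.timestamp.source;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.messaging.Source;
import org.springframework.context.annotation.Bean;
import org.springframework.integration.annotation.InboundChannelAdapter;
import org.springframework.integration.annotation.Poller;
import org.springframework.integration.core.MessageSource;
import org.springframework.integration.support.MessageBuilder;

// Emits the current time on the "output" channel once per second.
@SpringBootApplication
@EnableBinding(Source.class)
public class TimestampSourceApplication {

    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT,
            poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
    public MessageSource<String> timestampMessageSource() {
        return () -> MessageBuilder.withPayload(String.valueOf(System.currentTimeMillis())).build();
    }

    public static void main(String[] args) {
        SpringApplication.run(TimestampSourceApplication.class, args);
    }
}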
https://docs.spring.io/spring-cloud-stream-app-starters/docs/Celsius.SR1/reference/html/_introduction.html
2019-05-19T08:33:17
CC-MAIN-2019-22
1558232254731.5
[]
docs.spring.io
Troubleshooting Alexa for Business Identity and Access Use the following information to help you diagnose and fix common issues that you might encounter when working with Alexa for Business and IAM. I'm an Administrator and Want to Allow Others to Access Alexa for Business To allow others to access Alexa for Business, you must create an IAM entity (user or role) for the person or application that needs access. They will use the credentials for that entity to access AWS. You must then attach a policy to the entity that grants them the correct permissions in Alexa for Business. To get started right away, see Creating your first IAM delegated user and group in the IAM User Guide. I Want to Allow People Outside of My AWS Account to Access My Alexa for Business Resources To learn whether Alexa for Business supports these features, see How Alexa for Business works with IAM.
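For illustration, an identity-based policy attached to such a user or role could look roughly like the following sketch. The a4b service prefix is the one Alexa for Business uses; the wildcard actions shown here are an assumption and should be narrowed to the actions your users actually need.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AlexaForBusinessReadOnly",
      "Effect": "Allow",
      "Action": [
        "a4b:Get*",
        "a4b:List*",
        "a4b:Search*"
      ],
      "Resource": "*"
    }
  ]
}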
https://docs.aws.amazon.com/a4b/latest/ag/security_iam_troubleshoot.html
2021-02-24T22:40:41
CC-MAIN-2021-10
1614178349708.2
[]
docs.aws.amazon.com
2.2.9.2 SAM_VALIDATE_PERSISTED_FIELDS The SAM_VALIDATE_PERSISTED_FIELDS structure holds various characteristics about password state. typedef struct _SAM_VALIDATE_PERSISTED_FIELDS { unsigned long PresentFields; LARGE_INTEGER PasswordLastSet; LARGE_INTEGER BadPasswordTime; LARGE_INTEGER LockoutTime; unsigned long BadPasswordCount; unsigned long PasswordHistoryLength; [unique, size_is(PasswordHistoryLength)] PSAM_VALIDATE_PASSWORD_HASH PasswordHistory; } SAM_VALIDATE_PERSISTED_FIELDS, *PSAM_VALIDATE_PERSISTED_FIELDS; PresentFields: A bitmask to indicate which of the fields are valid. The following table shows the defined values. If a bit is set, the corresponding field is valid; if a bit is not set, the field is not valid. PasswordLastSet: This field represents the time at which the password was last reset or changed. It uses FILETIME syntax. BadPasswordTime: This field represents the time at which an invalid password was presented to either a password change request or an authentication request. It uses FILETIME syntax. LockoutTime: This field represents the time at which the owner of the password data was locked out. It uses FILETIME syntax. BadPasswordCount: Indicates how many invalid passwords have accumulated (see message processing for details). PasswordHistoryLength: Indicates how many previous passwords are in the PasswordHistory field. PasswordHistory: An array of hash values representing the previous PasswordHistoryLength passwords.
https://docs.microsoft.com/en-us/openspecs/windows_protocols/ms-samr/e0b2d21d-0b1c-4fc0-8f4a-895bef6ffc4e
2021-02-25T00:29:02
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
Portugal ReMix Silverlight 3 and .NET RIA Services ReMix.
https://docs.microsoft.com/en-us/archive/blogs/brada/portugal-remix-silverlight-3-and-net-ria-services
2021-02-25T00:55:42
CC-MAIN-2021-10
1614178349708.2
[array(['https://www.microsoft.com/belux/interactive/newsletter/img/egg.jpg', None], dtype=object) ]
docs.microsoft.com
Bitnami Multi-Tier Solutions launched through the GCP Marketplace have port 22 (the SSH access port) disabled by default. This is done to increase the overall security of the deployment. Bitnami recommends that access to port 22 should be specifically enabled at deployment time for trusted IP addresses or IP address ranges. If you do not enable this access, you will not be able to connect to the nodes via the Web console or an external SSH client. Refer to the FAQ for more information on how to enable SSH access at deployment time. Obtain SSH credentials Obtain your SSH credentials from the GCP Marketplace. To connect with an SSH client, follow these steps: Prepare an SSH key pair for use. In the GCP console, locate the deployment's VM instance (it has a suffix in the name and the tag vm instance) and click the “Manage Resource” link. The External IP section contains the IP address you will need to use for connecting to the server. Note the public IP address of the server and click the “Edit” link in the top control bar. On the resulting page, copy and paste your prepared public key into the SSH keys section and save the changes. Connect with an SSH client Connect with an SSH client on Windows
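For example, on Linux or macOS you could then connect with OpenSSH, assuming the key pair prepared earlier and the bitnami account that Bitnami images typically create (the key path and IP address below are placeholders):

# replace the key path and IP address with your own values
ssh -i ~/.ssh/google_cloud_key bitnami@203.0.113.10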
https://docs.bitnami.com/google-templates/faq/get-started/connect-ssh/
2021-02-24T23:56:47
CC-MAIN-2021-10
1614178349708.2
[]
docs.bitnami.com
Show Start Page Next, you will configure the server to display content in a smartphone browser. The rest of this tutorial considers the game server domains to be as shown below, but you should keep in mind that you will be substituting the domain or IP address of the development environment you are using. Creating gadget.xml The Smartphone browser application is displayed via the gadget server, therefore, you must set up gadget.xml where the starting URL is defined. Create gadget.xml for the Sandbox environment as follows and define the start page of the game. This gadget.xml definition displays, which is the start page of a game on the development game server when the game is to be accessed from a smartphone (the view attribute is touch). Registering gadget.xml Place the gadget.xml described above on the game server so that it is identified as and register it on the Mobage Developer Site. Click "Common" tab on "Manage Application" screen, press the "Change the information" button, and register for the URL of gadget.xml. To use a game server domain other than the start URL that was recorded in gadget.xml, you must add it to the whitelist. Note that a link to a domain that is not specified in the whitelist will end up being recognized as an external site, an intermediate page will be inserted, and the link will be displayed in an external browser outside of the application. Although it will not be used in the tutorial, add the provisional domain dev2.gameserver.com to the whitelist at this time. Normally, when gadget.xml is parsed, the start URL information is displayed as follows. Note that when gadget.xml is registered, its contents are cached by the platform. Therefore, if gadget.xml is modified, you must click "Refresh" to clear the cache. Creating the Start Page Finally, create the following simple index.html file and place it so that it is located at, which is the URL of the start page. Displaying the Start Page in the Sandbox Try to access the Sandbox from the actual device and see if the start URL of the game is displayed. You can access the Sandbox from a browser to send a smartphone user agent for debugging. Verify the Sandbox development URL, which is displayed on the "SP Web" tab on the Mobage Developer Site and try accessing it from the browser. Example) or Log in to the Sandbox environment using the test account that you created. (Login as MobageID) Please see here about createing a test account. If the start page is displayed as shown below, it indicates successful processing. This completes the registration of gadget.xml and display of the start page. Reference Material For more information about the detailed schema of gadget.xml, please see here. Revision History - 03/15/2012 - Initial release
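Tying back to the Creating gadget.xml step above: as a rough sketch, a minimal gadget.xml follows the OpenSocial gadget format, with the touch view pointing at the start page. The title and the href value below are placeholders for your own game name and development game server URL:

<?xml version="1.0" encoding="UTF-8"?>
<Module>
  <ModulePrefs title="My Game" />
  <Content type="url" view="touch" href="http://dev.gameserver.com/" />
</Module>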
https://docs.mobage.com/display/JPSPBP/Show+Start+Page
2021-02-24T23:10:51
CC-MAIN-2021-10
1614178349708.2
[array(['/download/attachments/2001311/BrowserPlatformDevTutorialGadgetXML_01.png?version=1&modificationDate=1360923707000', None], dtype=object) array(['/download/attachments/2001311/BrowserPlatformDevTutorialGadgetXML_02.png?version=1&modificationDate=1360923449000', None], dtype=object) array(['/download/attachments/2001311/BrowserPlatformDevTutorialGadgetXML_03.png?version=1&modificationDate=1360923456000', None], dtype=object) array(['/download/attachments/2001311/BrowserPlatformDevTutorialGadgetXML_04.png?version=1&modificationDate=1360923489000', None], dtype=object) ]
docs.mobage.com
When you run a Command Line Interface (CLI) utility, the command is executed when you press Enter. You are not asked for confirmation. CLI commands generally rely on a component ID or alias to identify the component that is being operated upon. The showcomponents command allows you to find Component IDs or aliases. You can also select a component in the Tree View, and then select View > Properties to display the properties for the component, including the component ID or alias. The following general rules apply to all CLI commands: - Command names are case-sensitive. - All commands, except where noted, accept more than one Component ID or alias, separated by commas or spaces. For example, you can power on two cabinets with one command. The following table lists the conventions used in the SMWeb command line syntax.
https://docs.teradata.com/r/ULK3h~H_CWRoPgUHHeFjyA/vMhIhMT9AajTfeCMIIxgyQ
2021-02-25T00:04:53
CC-MAIN-2021-10
1614178349708.2
[]
docs.teradata.com
Generating Thumbnails T-SBADV-009-013 When you display thumbnails in the Library view, Storyboard Pro generates a series of small images (thumbnails) for you. However, it is possible to generate template thumbnails manually. By default the automatic option is enabled, which means the thumbnails in the Library view will be automatically generated. You can disable this option to prevent the thumbnails from being automatically generated. You can delete the thumbnails files from the Library view. - In the Library view’s right side, select a template. - Right-click and select Generate Thumbnails. - Do one of the following: - Select Edit > Preferences (Windows) or Storyboard Pro > Preferences (Mac OS X). - Press Ctrl+U (Windows) or ⌘+ , (Mac OS X). - Select the General tab. - In the General section, select the Automatically Generate Thumbnails in Library option. NOTE: When you deselect the Automatically Generate Thumbnails in Library option, you can manually generate them. - In the Library view, right-click in the left section and select Delete Thumbnails.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/library/generate-thumbnail.html
2021-02-24T22:54:37
CC-MAIN-2021-10
1614178349708.2
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Steps/SBP2_Library_GenerateThumbnails.png', None], dtype=object) array(['../../Resources/Images/SBP/Steps/SBP2_Library_GenerateThumbnails_02.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Configuring Access Token Policy You can configure the access token policy, such as the default and upper limit of the expiration period and if you want to use the refresh token. See Logging in and Using an Access Token for the overview of the access token. For the overview of the access token expiration period and refresh token, see Expiration Period of the Access Token and Refreshing the Access Token. How to configure Here are the steps to configure the policy: Click the gear icon in the upper-right corner, and then click the "Settings" button. Select "SECURITY". Configure the access token policy. The following three values are configurable. - Enable Refresh Token: If you are going to use the refresh token or not. The feature is OFF by default. - Default expiration period in minutes: The default expiration period applied when APIs are called without any expiration period specified. The default value is 35791394 minutes. - Maximum expiration period in minutes: The upper limit of the expiration period allowed in this application. If you try to get an access token with longer expiration period than this value, you will get an error. The value specified in "Default expiration period in minutes" must be smaller than or equal to this value. The default value is 35791394 minutes.
https://docs.kii.com/en/guides/devportal/application_configuration/application_settings/access_token/
2021-02-24T22:47:55
CC-MAIN-2021-10
1614178349708.2
[]
docs.kii.com
SSH Proxy SSH Proxy decryption decrypts inbound and outbound SSH sessions and ensures that attackers can’t use SSH to tunnel potentially malicious applications and content. In an SSH Proxy configuration, the firewall resides between a client and a server. SSH Proxy enables the firewall to decrypt inbound and outbound SSH connections and ensures that attackers don’t use SSH to tunnel unwanted applications and content. SSH decryption does not require certificates and the firewall automatically generates the key used for SSH decryption when the firewall boots up. During the boot up process, the firewall checks if there is an existing key. If not, the firewall generates a key. The firewall uses the key to decrypt SSH sessions for all virtual systems configured on the firewall and all SSH v2 sessions. SSH allows tunneling, which can hide malicious traffic from decryption. The firewall can’t decrypt traffic inside an SSH tunnel. You can block all SSH tunnel traffic by configuring a Security policy rule for the application ssh-tunnel with the Action set to Deny (along with a Security policy rule to allow traffic from the ssh application). SSH tunneling sessions can tunnel X11 Windows packets and TCP packets. One SSH connection may contain multiple channels. When you apply an SSH Decryption profile to traffic, for each channel in the connection, the firewall examines the App-ID of the traffic and identifies the channel type. The channel type can be: - session - X11 - forwarded-tcpip - direct-tcpip When the channel type is session, the firewall identifies the traffic as allowed SSH traffic such as SFTP or SCP. When the channel type is X11, forwarded-tcpip, or direct-tcpip, the firewall identifies the traffic as SSH tunneling traffic and blocks it. Limit SSH use to administrators who need to manage network devices, log all SSH traffic, and consider configuring Multi-Factor Authentication to help ensure that only legitimate users can use SSH to access devices, which reduces the attack surface. The following figure shows how SSH Proxy decryption works. See Configure SSH Proxy for how to enable SSH Proxy decryption. When the client sends an SSH request to the server to initiate a session, the firewall intercepts the request and forwards it to the server. The firewall then intercepts the server response and forwards it to the client. This establishes two separate SSH tunnels, one between the firewall and the client and one between the firewall and the server, with the firewall functioning as a proxy. As traffic flows between the client and the server, the firewall checks whether the SSH traffic is being routed normally or if it is using SSH tunneling (port forwarding). The firewall doesn’t perform content and threat inspection on SSH tunnels; however, if the firewall identifies SSH tunnels, it blocks the SSH tunneled traffic and restricts the traffic according to configured security policies.
https://docs.paloaltonetworks.com/pan-os/9-1/pan-os-admin/decryption/decryption-concepts/ssh-proxy.html
2021-02-24T23:15:55
CC-MAIN-2021-10
1614178349708.2
[array(['/content/dam/techdocs/en_US/dita/_graphics/9-1/decryption/ssh-proxy.png', 'ssh-proxy.png'], dtype=object) ]
docs.paloaltonetworks.com
This. © 2000–2020 Kitware, Inc. and Contributors Licensed under the BSD 3-clause License.
https://docs.w3cub.com/cmake~3.19/module/finddevil
2021-02-24T23:48:11
CC-MAIN-2021-10
1614178349708.2
[]
docs.w3cub.com
2) same as 1), and additionally std::remove_all_extents<T>::type is either a non-class type or a class type with a trivial destructor. 3) same as 1), but the destructor is noexcept. T shall be a complete type, (possibly cv-qualified) void, or an array of unknown bound. Otherwise, the behavior is undefined. If an instantiation of a template above depends, directly or indirectly, on an incomplete type, and that instantiation could yield a different result if that type were hypothetically completed, the behavior is undefined. © cppreference.com Licensed under the Creative Commons Attribution-ShareAlike Unported License v3.0.
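A small usage sketch of these traits (standard C++, compiles with any C++11 or later compiler):

#include <string>
#include <type_traits>

struct NoDtor { ~NoDtor() = delete; };

static_assert(std::is_destructible<std::string>::value, "std::string has an accessible destructor");
static_assert(std::is_trivially_destructible<int>::value, "scalars are trivially destructible");
static_assert(std::is_nothrow_destructible<std::string>::value, "destructors are noexcept by default");
static_assert(!std::is_destructible<NoDtor>::value, "a deleted destructor makes the type non-destructible");

int main() {}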
https://docs.w3cub.com/cpp/types/is_destructible
2021-02-25T00:04:35
CC-MAIN-2021-10
1614178349708.2
[]
docs.w3cub.com
Performs a lookup operation on the texture provided as a uniform for the shader. enum TextureType: hint_albedo as hint to the uniform declaration for proper sRGB to linear conversion. hint_normal as hint to the uniform declaration, which internally converts the texture for proper usage as normal map. hint_aniso as hint to the uniform declaration to use for a flowmap. enum ColorDefault: Sets the default color if no texture is assigned to the uniform. Defines the type of data provided by the source texture. See TextureType for options. © 2014–2020 Juan Linietsky, Ariel Manzur, Godot Engine contributors Licensed under the MIT License.
https://docs.w3cub.com/godot~3.2/classes/class_visualshadernodetextureuniform
2021-02-24T23:52:44
CC-MAIN-2021-10
1614178349708.2
[]
docs.w3cub.com
GlusterFS and NFS-Ganesha integration NFS-Ganesha is a user-space NFS server that can export GlusterFS volumes through the libgfapi-based FSAL_GLUSTER plugin. 1.) Pre-requisites - Before starting to setup NFS-Ganesha, a GlusterFS volume should be created. - Disable kernel-nfs, gluster-nfs services on the system using the following commands - service nfs stop - gluster vol set nfs.disable ON (Note: this command has to be repeated for all the volumes in the trusted-pool) - Usually the libgfapi.so files are installed in “/usr/lib” or “/usr/local/lib”, based on whether you have installed glusterfs using rpm or sources. Verify if those libgfapi.so files are linked in “/usr/lib64″ and “/usr/local/lib64″ as well. If not create the links for those .so files in those directories. 2.) Installing nfs-ganesha i) using rpm install - nfs-ganesha rpms are available in Fedora19 or later packages. So to install nfs-ganesha, run - #yum install nfs-ganesha - Using CentOS or EL, download the rpms from the below link : - ii) using sources - cd /root - git clone git://github.com/nfs-ganesha/nfs-ganesha.git - cd nfs-ganesha/ - git submodule update --init - git checkout -b next origin/next (Note : origin/next is the current development branch) - rm -rf ~/build; mkdir ~/build ; cd ~/build - cmake -DUSE_FSAL_GLUSTER=ON -DCURSES_LIBRARY=/usr/lib64 -DCURSES_INCLUDE_PATH=/usr/include/ncurses -DCMAKE_BUILD_TYPE=Maintainer /root/nfs-ganesha/src/ - make; make install Note: libcap-devel, libnfsidmap, dbus-devel, libacl-devel ncurses* packages may need to be installed prior to running this command. For Fedora, libjemalloc, libjemalloc-devel may also be required. 3.) Run nfs-ganesha server - To start nfs-ganesha manually, execute the following command: - #ganesha.nfsd -f <config-file> -L <log-file> -N <log-level> -d For example: #ganesha.nfsd -f nfs-ganesha.conf -L nfs-ganesha.log -N NIV_DEBUG -d where: nfs-ganesha.log is the log file for the ganesha.nfsd process. nfs-ganesha.conf is the configuration file NIV_DEBUG is the log level. - To check if nfs-ganesha has started, execute the following command: - #ps aux | grep ganesha - By default '/' will be exported 4.) Exporting GlusterFS volume via nfs-ganesha step 1 : To export any GlusterFS volume or directory inside a volume, create the EXPORT block for each of those entries in a .conf file, for example export.conf (a sample EXPORT block is shown at the end of this page). step 2 : Define/copy “nfs-ganesha.conf” file to a suitable location. This file is available in “/etc/glusterfs-ganesha” on installation of nfs-ganesha rpms or, if using the sources, rename “/root/nfs-ganesha/src/FSAL/FSAL_GLUSTER/README” file to “nfs-ganesha.conf” file. step 3 : Now include the “export.conf” file in nfs-ganesha.conf. This can be done by adding the line below at the end of nfs-ganesha.conf. - %include “export.conf” step 4 : - run ganesha server as mentioned in section 3 - To check if the volume is exported, run - #showmount -e localhost 5.) Additional Notes To switch back to gluster-nfs/kernel-nfs, kill the ganesha daemon and start those services using the below commands : - pkill ganesha - service nfs start (for kernel-nfs) - gluster v set nfs.disable off 6.) References Setup and create glusterfs volumes : NFS-Ganesha wiki : Sample configuration files - /root/nfs-ganesha/src/config_samples/gluster.conf -
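A minimal EXPORT block for the GLUSTER FSAL might look like the sketch below; the export ID, paths and volume name are placeholders, and the full parameter list is documented in the sample gluster.conf shipped with the sources:

EXPORT
{
    Export_Id = 1;                 # unique identifier for this export
    Path = "/testvol";             # entry point of the export
    Pseudo = "/testvol";           # NFSv4 pseudo-filesystem path
    Access_Type = RW;
    Squash = No_root_squash;
    SecType = "sys";
    FSAL {
        Name = GLUSTER;
        Hostname = "localhost";    # any server of the trusted pool
        Volume = "testvol";        # GlusterFS volume to export
    }
}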
https://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/glusterfs_nfs-ganesha_integration/
2021-02-24T23:42:49
CC-MAIN-2021-10
1614178349708.2
[]
staged-gluster-docs.readthedocs.io
Restore a SQL Server Database to a Point in Time (Full Recovery Model) Applies to: SQL Server (all supported versions) This topic describes how to restore a database to a point in time in SQL Server. Using SQL Server Management Studio Note If the backup is taken from a different server, the destination server will not have the backup history information for the specified database. In this case, select Device to manually specify the file or device to restore. Note Use the Timeline Interval box to change the amount of time displayed on the timeline. Using Transact-SQL A point-in-time restore is performed with the RESTORE statement and the STOPAT option (a sketch is shown at the end of this topic). Related Tasks Restore a Database Backup Using SSMS Back Up a Transaction Log (SQL Server) Restore a Database to the Point of Failure Under the Full Recovery Model (Transact-SQL) Restore a Database to a Marked Transaction (SQL Server Management Studio) Recover to a Log Sequence Number (SQL Server) ToPointInTime (SMO) See Also backupset (Transact-SQL) RESTORE (Transact-SQL) RESTORE HEADERONLY (Transact-SQL)
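As a sketch of the Transact-SQL approach (the database name, backup file paths and target time below are placeholders): restore the full backup WITH NORECOVERY, then restore the log backups and stop at the desired point in time.

RESTORE DATABASE AdventureWorks
    FROM DISK = N'C:\Backups\AdventureWorks_full.bak'
    WITH NORECOVERY;

RESTORE LOG AdventureWorks
    FROM DISK = N'C:\Backups\AdventureWorks_log.trn'
    WITH STOPAT = N'2021-02-24T12:30:00', RECOVERY;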
https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/restore-a-sql-server-database-to-a-point-in-time-full-recovery-model?view=sql-server-ver15
2021-02-25T00:31:34
CC-MAIN-2021-10
1614178349708.2
[]
docs.microsoft.com
You can use REST APIs provided by Workflow Automation (WFA) to invoke workflows from external portals and the data center orchestration software. WFA supports XML and JSON content types for all REST APIs. WFA allows external services to access various resource collections, such as workflows, users, filters, and finders, through URI paths. The external services can use HTTP methods, such as GET, PUT, POST, and DELETE, on these URIs to perform CRUD operations on the resources. You can perform several actions through the WFA REST APIs, including the following: REST documentation has more information about REST APIs: wfa_server_ip is the IP address of your WFA server and port is the TCP port number you have used for the WFA server during installation.
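For example, a GET request against the workflows collection might look like the following; the /rest/workflows path and the credentials are assumptions for illustration, so check the REST documentation on your own server for the exact URIs:

curl -k -u admin:Password123 "https://wfa_server_ip:port/rest/workflows"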
https://docs.netapp.com/wfa-42/topic/com.netapp.doc.onc-wfa-wdg/GUID-AB37FFF2-515D-4334-8613-3DCB95ECBB10.html
2021-02-25T00:16:25
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
15.1.1.3.45. UserDataQosPolicy¶ - class UserDataQosPolicy: public eprosima::fastdds::dds::GenericDataQosPolicy¶ Class derived from GenericDataQosPolicy. The purpose of this QoS is to allow the application to attach additional information to the created Entity objects such that when a remote application discovers their existence it can access that information and use it for its own purposes. One possible use of this QoS is to attach security credentials or some other information that can be used by the remote application to authenticate the source.
https://fast-dds.docs.eprosima.com/en/latest/fastdds/api_reference/dds_pim/core/policy/userdataqospolicy.html
2021-02-24T23:12:44
CC-MAIN-2021-10
1614178349708.2
[]
fast-dds.docs.eprosima.com
A Swift tenant account is required before Swift API clients can store and retrieve objects on StorageGRID. Each tenant account has its own account ID, groups and users, and containers and objects. Swift tenant accounts are created by a StorageGRID grid administrator using the Grid Manager or the Grid Management API.
https://docs.netapp.com/sgws-114/topic/com.netapp.doc.sg-swift/GUID-DDA6E74E-53A9-4CB4-B8B8-625207ED10A4.html
2021-02-25T00:46:18
CC-MAIN-2021-10
1614178349708.2
[]
docs.netapp.com
https://docs.splunk.com/Documentation/Splunk/7.1.10/Security/ConfigureSplunkforwardingtousesignedcertificates
2021-02-25T00:35:52
CC-MAIN-2021-10
1614178349708.2
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
Renaming Folders
T-SBADV-009-005
Once you add a folder, you can rename it. This also renames the folder on your hard drive.
- In the Library view's left side, select the folder to rename.
- Right-click on the selected folder and select Rename Folder.
- Rename the selected folder.
- Press Enter/Return to validate the operation.
https://docs.toonboom.com/help/storyboard-pro-5/storyboard/library/rename-folder.html
2021-02-24T23:31:36
CC-MAIN-2021-10
1614178349708.2
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/HAR/_Skins/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/download.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/SBP/Steps/SBP2_Library_CreateFolder_02.png', None], dtype=object) ]
docs.toonboom.com
VMware vRealize Business automates cloud costing, consumption analysis, and comparison, delivering the insight you need to efficiently deploy and manage cloud environments. Use vRealize Business for Cloud to track and manage the costs of private and public cloud resources from a single dashboard. It offers a comprehensive way to see, plan, and manage your cloud costs. vRealize Business for Cloud integrates with the vRealize Automation appliance and collects data from the underlying virtual infrastructure platforms, such as vCenter Server, vCloud Director, Amazon Web Services, and others.
You can deploy only the data collection services by using the remote data collector version of the vRealize Business appliance. Remote data collectors reduce the data collection workload of the vRealize Business server and enable remote data collection from geographically distributed endpoints.
- Facts Repo Inventory Service - An inventory service that stores, on MongoDB, the collected data that vRealize Business uses for cost computation.
- Data Transformation Service - A service that converts source-specific data from the data collection services into data structures for consumption by the Facts Repo inventory service. The data transformation service serves as a single point of aggregation of data from all data collectors.
- vRealize Business Server - A web application that runs on Pivotal tc Server. vRealize Business has multiple data collection services that run periodically, collecting inventory information and statistics, which are stored in a PostgreSQL database. Data that is collected from the data collection services is used for cost calculations.
- Reference Database - A database that is responsible for providing default, out-of-the-box costs for each of the supported cost drivers. The reference database is updated automatically or manually, and you can download the latest data set and import it into vRealize Business.
- Other Sources of Information - These information sources are optional. You use them only if they are installed and configured. The sources include vRealize Automation, vCloud Director, vRealize Operations Manager, Amazon Web Services (AWS), Microsoft Azure, vCloud Air, and EMC Storage Resource Manager (SRM).
Operational Model
vRealize Business for Cloud continuously collects data from external sources and periodically updates the Facts Repo inventory service. You can view the collected data using the vRealize Business dashboard or generate a report. Data synchronization and updates occur at regular intervals. You can also manually trigger the data collection process when inventory changes occur, for example in response to the initialization of the system or the addition of a cloud account.
Backup
You back up each vRealize Business node using traditional virtual machine backup solutions that are compatible with VMware vSphere Storage APIs – Data Protection (VADP).
Consolidated vRealize Business Deployment
Because of its scope, the VMware Validated Design for Workload and Management Consolidation uses one vRealize Business server appliance that is connected to a network that supports failover, and one data collector. With this configuration you can scale out the Cloud Management Platform into a dual-region environment as required, with minimal downtime.
https://docs.vmware.com/en/VMware-Validated-Design/5.0/com.vmware.vvd-sddc-consolidated-design.doc/GUID-0B482129-B746-4B2F-A059-E79B04739EB8.html
2021-02-24T22:42:23
CC-MAIN-2021-10
1614178349708.2
[array(['images/GUID-F234791E-5518-4102-8509-E8B9375718BE-high.png', 'vRealize Business integrates with the vRealize Automation appliance and collects data from the underlying virtual infrastructure platforms, such as vCenter Server, vCloud Director, Amazon Web Services, and others.'], dtype=object) ]
docs.vmware.com
Kinect
This article contains information about the newer Kinect 2 device. See Kinect1 for the original Kinect.
Microsoft Kinect 2 was originally released for Microsoft's Xbox One gaming console in 2013. In 2014, Microsoft released Kinect 2 for Windows, which included a Kinect SDK that works exclusively with the new Kinect hardware on Windows operating systems.
Kinect 2 Support in TouchDesigner
TouchDesigner has built-in support for Kinect using Microsoft's official Kinect for Windows SDK. As such, only Kinect for Windows hardware is supported by TouchDesigner's built-in Kinect operators.
Requirements
- Windows 8 or 10 operating system.
- Kinect for Windows hardware device. Kinect for Xbox One devices are only compatible if the extra compatibility dongle is purchased from Microsoft.
- Install the Kinect SDK 2.0.
Ways to interface with Kinect in TouchDesigner
- Depth camera - Kinect TOP
- RGB camera - Kinect TOP
- Infrared camera - Kinect TOP
- Skeleton Point Tracking - Kinect CHOP
- Hand Interaction - Kinect CHOP
- Microphone Array Audio Capture - Audio Device In CHOP
Tips for Working with Kinect Sensors
- As of the writing of this article, Microsoft has not added support for accessing multiple Kinect 2 devices on the same system.
- For additional support and troubleshooting, refer to the Kinect Support Community.
https://docs.derivative.ca/Kinect
2021-02-24T22:54:37
CC-MAIN-2021-10
1614178349708.2
[]
docs.derivative.ca
Step 1. Import User Accounts from Active Directory into an LDAP Security Domain
Import Informatica user accounts from Active Directory into the LDAP security domain that contains Kerberos user accounts.
When you enable Kerberos authentication in the domain, Informatica creates an empty LDAP security domain with the same name as the Kerberos realm. You can import user accounts from Active Directory into this LDAP security domain, or you can import the user accounts into a different LDAP security domain.
You use the Administrator tool to import the user accounts that use Kerberos authentication from Active Directory into an LDAP security domain.
1. Start the domain and all Informatica services. Start the services in the following order:
   - Model Repository Service
   - Data Integration Service
   - Analyst Service
   - Content Management Service
   - PowerCenter® Repository Service
   - PowerCenter® Integration Service
   - Metadata Manager Service
2. Log in to Windows with the administrator account you specified when you enabled Kerberos authentication in the domain. The following image shows the user name and password for nodeuser01 entered in the login dialog box.
3. Log in to the Administrator tool. Select _infaInternalNamespace as the security domain. The following image shows _infaInternalNamespace selected as the security domain.
4. In the Administrator tool, click the Security tab.
5. Click the Actions menu and select LDAP Configuration.
6. In the LDAP Configuration dialog box, click the LDAP Connectivity tab.
7. Configure the connection properties for Active Directory. You might need to consult the LDAP administrator to get the information needed to connect to the LDAP server. The following list describes the LDAP server configuration properties:
   - Server name: Host name or IP address of the Active Directory server.
   - Port: Listening port for the Active Directory server.
   - LDAP Directory Service: Select Microsoft Active Directory Service.
   - Name: Specify the bind user account you created in Active Directory to synchronize accounts in Active Directory with the LDAP security domain. Because the domain is enabled for Kerberos authentication, you do not have the option to provide a password for the account.
   - Use SSL Certificate: Indicates that the LDAP server uses the Secure Socket Layer (SSL) protocol.
   - Trust LDAP Certificate: Determines whether the Service Manager can trust the SSL certificate of the LDAP server. If selected, the Service Manager connects to the LDAP server without verifying the SSL certificate. If not selected, the Service Manager verifies that the SSL certificate is signed by a certificate authority before connecting to the LDAP server.
   - Not Case Sensitive: Indicates that the Service Manager must ignore case sensitivity for distinguished name attributes when assigning users to groups.
   - Group Membership Attribute: Name of the attribute that contains group membership information for a user. This is the attribute in the LDAP group object that contains the DNs of the users or groups who are members of a group. For example, member or memberof.
   - Maximum Size: Maximum number of user accounts to import into a security domain. For example, if the value is set to 100, you can import a maximum of 100 user accounts into the security domain. If the number of users to be imported exceeds the value of this property, the Service Manager generates an error message and does not import any users. Set this property to a higher value if you have many users to import. Default is 1000.
   The following image shows the ldapuser user account specified with the connection details for an Active Directory server set in the LDAP Connectivity panel of the LDAP Configuration dialog box.
8. In the LDAP Configuration dialog box, click the Security Domains tab.
9. Click Add. The following list describes the filter properties that you can set for a security domain:
   - Security Domain: Name of the LDAP security domain into which you want to import user accounts from Active Directory.
   - User search base: Distinguished name (DN) of the entry that serves as the starting point to search for user names in Active Directory. The search finds an object in the directory according to the path in the distinguished name of the object. For example, to search the USERS container that contains Informatica user accounts in the example.com Windows domain, specify CN=USERS,DC=EXAMPLE,DC=COM.
   The following image shows the information required to import LDAP users from Active Directory into the LDAP security domain created when you enabled Kerberos in the domain.
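Before running the import, it can help to verify outside of Informatica that the bind account and user search base return the expected entries. A hedged example using the standard OpenLDAP ldapsearch client; the host name is a placeholder, while the bind user (ldapuser) and search base reuse the example.com values above:

ldapsearch -H ldap://ad.example.com:389 \
    -D "ldapuser@example.com" -W \
    -b "CN=USERS,DC=EXAMPLE,DC=COM" \
    "(objectClass=user)" sAMAccountName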
https://docs.informatica.com/data-integration/powercenter/10-1-1-hotfix-1/security-guide/kerberos-authentication/enabling-user-accounts-to-use-kerberos-authentication/step-1--import-user-accounts-from-active-directory-into-an-ldap-.html
2021-02-25T00:09:02
CC-MAIN-2021-10
1614178349708.2
[]
docs.informatica.com
ListNext
Synopsis
ListNext(list,ptr,value)
Parameters
- list: The list from which elements are returned.
- ptr: A position pointer. Initialize it to 0 before the first invocation; Caché Basic maintains it internally on each call.
- value: The variable that receives the value of each list element.
Description
ListNext sequentially returns elements from list. You initialize ptr to 0 before the first invocation of ListNext. This causes ListNext to begin returning elements from the beginning of the list. Each successive invocation of ListNext advances ptr and returns the next list element value to value. The ListNext function returns 1, indicating that a list element has been successfully retrieved.
When ListNext reaches the end of the list, it returns 0, resets ptr to 0, and leaves value unchanged from the previous invocation. Because ptr has been reset to 0, the next invocation of ListNext would start at the beginning of the list.
Caché Basic increments ptr using an internal address algorithm. Therefore, the only value you should use to set ptr is 0.
You can use ListValid to determine if list is a valid list. An invalid list causes ListNext to generate a <LIST> error. Not all lists validated by ListValid can be used successfully with ListNext.
When ListNext encounters a list element with a null value, it returns 1 indicating that a list element has been successfully retrieved, advances ptr to the next element, and resets value to be an undefined variable. This can happen with any of the following valid lists: value=ListBuild(), value=ListBuild(NULL), value=ListBuild(,), or when encountering an omitted list element, such as the second invocation of ListNext on value=ListBuild("a",,"b").
ListNext("",ptr,value) returns 0, and does not advance the pointer or set value. ListNext(ListBuild(""),ptr,value) returns 1, advances the pointer, and sets value to the null string ("").
ListNext and Nested Lists
The following example returns three elements, because ListNext does not recognize the individual elements in nested lists:
mylist = ListBuild("Apple","Pear",ListBuild("Walnut","Pecan"))
ptr = 0
count = 0
While ListNext(mylist,ptr,value)
    count = count + 1
    PrintLn value
Wend
PrintLn "End of list: ",count," elements found"
Examples
The following example sequentially returns all the elements in the list:
mylist = ListBuild("Red","Blue","Green")
ptr = 0
count = 0
While ListNext(mylist,ptr,value)
    count = count + 1
    PrintLn value
Wend
PrintLn "End of list: ",count," elements found"
See Also
ListExists function, ListFromString function, ListLength function, ListToString function
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RBAS_FLISTNEXT
2021-02-24T23:15:33
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
PatrolTopProcesses
Synopsis
[Monitor]
PatrolTopProcesses=n
Description
Any non-zero value sets the number of processes to be displayed in the Process Status window on the Patrol console. This window shows the "top" processes as sorted by global or routine activity. The default number of processes is 20.
A value of 0 tells the Patrol utility to stop calculating the top processes, potentially saving significant work on systems with a lot of processes.
PatrolTopProcesses=10
Default is 20.
On the page System Administration > Configuration > Additional Settings > Monitor, for the Patrol Top Processes to Monitor setting, enter a number of processes.
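For example, to have Patrol track the top 10 processes, the parameter would appear in the [Monitor] section of the Caché parameter file (commonly cache.cpf; the exact file name can differ by installation):

[Monitor]
PatrolTopProcesses=10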
https://docs.intersystems.com/latest/csp/docbook/DocBook.UI.Page.cls?KEY=RCPF_PATROLTOPPROCESSES
2021-02-24T23:18:07
CC-MAIN-2021-10
1614178349708.2
[]
docs.intersystems.com
Deploy a Sharded Cluster
Overview
Sharded clusters provide horizontal scaling for large data sets and enable high-throughput operations by distributing the data set across a group of servers. See the Sharding Introduction in the MongoDB manual for more information.
Use this procedure to deploy a new sharded cluster managed by Cloud Manager. Later, you can use Cloud Manager to add shards and perform other maintenance operations on the cluster.
Unique Names for Deployment Items
Use unique names for the new cluster and its shards.
Important: Replica set, sharded cluster, and shard names within the same group must be unique. Failure to give deployment items unique names will result in broken backup snapshots.
Prerequisites
You must provision the servers onto which to deploy the cluster.
https://docs.cloudmanager.mongodb.com/tutorial/deploy-sharded-cluster/
2017-05-22T23:06:44
CC-MAIN-2017-22
1495463607242.32
[]
docs.cloudmanager.mongodb.com
(The USS Pueblo in January, 1968) Recent events between the United States and North Korea cast a long shadow over relations between these countries. The "supposed" computer hacking of Sony Pictures by North Korea, the disagreement over North Korean attempts to develop nuclear weapons, and a host of other issues like North Korean attacks against South Korean ships make the appearance of Jack Cheevers' ACT OF WAR rather timely. Cheevers, a former political reporter for the Los Angeles Times, presents a comprehensive study of the North Korean seizure of the USS Pueblo, an American spy ship trolling international waters in January, 1968. Today we worry about North Korean threats to South Korea and Japan, but in the 1960s the United States was in the midst of the Cold War and only a decade out from the end of the Korean War. Embroiled in Vietnam, the United States continued to spy on the Soviet Union, Communist China, and North Korea throughout the period. One might wonder why the North Koreans would seize an American ship at that time. The answer probably rests with North Korean dictator Kim Il-Sung's hatred for the United States; when presented with an opportunity to give Washington a "black eye," Kim could not resist, especially with the United States caught up in the quagmire of Vietnam. According to Cheevers, American losses while spying in the region were not uncommon before the Pueblo was seized. Between 1950 and 1956, seven American reconnaissance aircraft were shot down over the Sea of Japan or near Siberia, resulting in the loss of forty-six US airmen, with another sixteen lost to a typhoon. (2) The Pueblo was part of a top secret Navy program to pack refurbished US freighters with advanced electronics to keep tabs on the Soviet Union's expanding Pacific and Mediterranean fleets. The program called for seventy ships, but only three were built, one of which was the Pueblo. The loss of the ship with its sophisticated surveillance gear, code machines, and documents was one of the worst intelligence debacles in American history. Subsequent congressional and naval investigations revealed "appalling complacency and short sightedness in the planning and execution of the Pueblo mission." (3) The goal was to determine how much of a threat existed for South Korea, since North Korea's Stalinist leaders were committed to unifying the peninsula, an area where 55,000 American troops stood in the way of a possible invasion. This book is important as we continue to unleash covert operations worldwide, as it shows what can happen when things do not proceed as planned. (Capt. Lloyd Bucher and his crew seized by North Korea in January, 1968) Cheevers offers a detailed description of the planning of the mission, and what emerges is that Captain Lloyd Bucher was given command of a ship that was not in the best condition and was overloaded with top secret documents, many of which were not needed for the mission. A full description of the seizure of the ship, the incarceration of the crew, their torture and interrogation, their final release, and the naval and congressional investigations that followed is presented. The ship was supposedly conducting "oceanic research," and many of the crew were not fully cognizant of the Pueblo's spy mission. What separates Cheevers' work from previous books on the subject is his access to new documentation, particularly documents from the Soviet Union and American naval archives. Further, he was able to interview a large number of the Pueblo's original crew.
This leads to a narrative that at times reads like a transcript or movie script of many important scenes, particularly the North Korean seizure of the ship, the interactions of the crew during their imprisonment, and the Navy Court of Inquiry that was formed to determine whether Capt. Bucher and his crew had conducted themselves appropriately. The first surprising aspect of the book is the lack of training the crew received, particularly in how they should respond if attacked. Bucher was told by naval officials not to worry because he would always remain in international waters beyond the twelve-mile limit the North Koreans claimed. Further, Bucher was not given the appropriate equipment to destroy sensitive documents and equipment, even though he requested it. In addition, the two linguists assigned to the mission hadn't spoken Korean in a few years and confessed that they needed dictionaries to translate radio intercepts or documents, and the overall crew was very inexperienced. The bottom line is that there was no real contingency plan to assist the Pueblo should North Korea become a problem. It was clear no naval assistance would be forthcoming in the event of an attack, and Bucher would be on his own. Once the attack occurred, it appears Bucher did his best, knowing the United States would not entertain a rescue operation. (The Pueblo crew in captivity) The seizure of the ship compounded problems for the Johnson administration. The Tet offensive was a few weeks away, the Marine fire base at Khe Sanh was isolated, the anti-war movement in the United States was growing, and the South Korean president, Park Chung Hee, wanted to use the situation to launch an attack on North Korea. Cheevers reviews the mindset of the American government as well as the public's reaction to the seizure and accurately describes President Johnson's reluctance to take military action. The United States did deploy battle groups to the Sea of Japan as a show of force, but with no plan to use them, it was a hollow gesture. A far bigger problem was reining in President Park, whose palace was almost breached by North Korean commandos shortly before the Pueblo was seized. Cheevers' reconstruction of the dialogue between Cyrus Vance, Johnson's emissary, and Park is eye-opening, as is his account of the meeting between Johnson and Park later in the crisis. Unable to gain American acquiescence for a military response, Park requested hundreds of millions of dollars in military hardware instead. There were 30,000 South Korean troops fighting in Vietnam, Park had promised another 11,000, and Johnson wanted to make sure that Park did not renege on his commitment. Cheevers does a commendable job of always placing the Pueblo crisis in the context of the war raging in Southeast Asia. Cheevers' absorbing description of how the Americans were treated in captivity is largely based on interviews with the crew. The brutality of their treatment and the psychological games to which their captors subjected Bucher and his crew were unconscionable. The beatings, outright torture, lack of hygiene, and malnutrition the crew suffered through are catalogued in detail. Domestic pressure on the Johnson administration increased throughout the incarceration until a deal was finally reached. The issue revolved around the North Korean demand for an apology, which was finally papered over by a convoluted strategy that produced a US admission of spying at the same time as a strong denial.
Perhaps the most interesting part of the book is Cheevers' coverage of the hero's welcome Bucher and his crew received and how the Navy investigated who was to blame for the ship's seizure. The fact that Bucher surrendered his ship without a fight to save his crew did not sit well with naval history purists. For the Navy, the men were expendable, but the intelligence equipment and documents were not. The Naval Court of Inquiry, headed by five career admirals, three of whom had commanded destroyers during World War II and the Korean War, concluded that Bucher should be court-martialed, but it was overruled because of public opinion. The questions and answers from the trial reflect how difficult a task it was to investigate the seizure and find a scapegoat for the Navy. Throughout, Bucher never lost the respect of his crew, and his leadership allowed his men to bond, which was in large part responsible for their survival. Cheevers should be commended for his approach to the crisis, the important questions he raises, and his reconstruction of both naval and congressional testimony. ACT OF WAR seems to me the definitive account of the seizure of the Pueblo and its ramifications for the Navy, the intelligence community, and politicians. It is an excellent historical narrative that in sections reads like a novel. It is a great read and a superb work of investigative reporting.
https://docs-books.com/2015/02/14/act-of-war-by-jack-cheevers/
2021-05-06T01:46:18
CC-MAIN-2021-21
1620243988724.75
[]
docs-books.com
Integrate External Systems to Kafka
Kafka Connect is a framework for connecting Kafka with external systems such as databases, key-value stores, search indexes, and file systems. You can use existing connector implementations for common data sources and sinks to move data into and out of Kafka, or write your own connectors.
- Kafka Connect
- Kafka Connectors
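As a small illustration, a standalone worker can run the FileStreamSource connector that ships with Apache Kafka using a properties file along the following lines; the connector name, file path, topic, and property file names are placeholders:

# connect-file-source.properties (illustrative)
name=local-file-source
connector.class=FileStreamSource
tasks.max=1
file=/tmp/test.txt
topic=connect-test

You would then pass this file, together with a worker configuration, to the connect-standalone command, for example: connect-standalone worker.properties connect-file-source.properties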
https://docs.confluent.io/5.1.0/connect.html
2021-05-06T00:38:28
CC-MAIN-2021-21
1620243988724.75
[]
docs.confluent.io
ANSI/SQL DateTime Specifications
The ANSI/SQL DATE, TIME, TIMESTAMP, and INTERVAL DateTime data types can be used in Teradata SQL CREATE TABLE statements, and you can specify them as column/field modifiers in INSERT statements. However, certain restrictions should be noted.
For a description of the fixed-length CHAR representations for each DATE, TIME, TIMESTAMP, and INTERVAL data type specification, see Table 27 on page 96.
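As a hedged illustration of these DateTime types in a CREATE TABLE statement and as literals in an INSERT; the table and column names are invented, and the PRIMARY INDEX clause is omitted for brevity:

CREATE TABLE event_log (
    event_id    INTEGER,
    event_date  DATE,
    event_time  TIME(0),
    created_at  TIMESTAMP(6),
    duration    INTERVAL HOUR TO MINUTE
);

INSERT INTO event_log
VALUES (1,
        DATE '2021-05-06',
        TIME '01:03:56',
        TIMESTAMP '2021-05-06 01:03:56',
        INTERVAL '2:30' HOUR TO MINUTE);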
https://docs.teradata.com/r/PE6mc9dhvMF3BuuHeZzsGg/WS4eLpeqd2KxmFFJ5CG~VQ
2021-05-06T01:03:56
CC-MAIN-2021-21
1620243988724.75
[]
docs.teradata.com
Moving around the subview tree.

Screen, controller, and root_view
find.screen # alias to find.view_controller
find.view_controller
find.root_view # View of the view_controller
Window of the root_view:
find.window
All subviews, subviews of subviews, etc for root_view:
find.all

Find
Find all children/grandchildren/etc:
find(my_view).find # Different from children as it keeps on going down the tree
More commonly, you are searching for something:
find(my_view).find(UITextField)

Closest
Closest is interesting, and very useful. It searches through parents/grandparents/etc and finds the first occurrence that matches the selectors:
find(my_view).closest(Section)
Let's say that someone clicked on a button in a table cell. You want to find and disable all buttons in that cell. So first you need to find the cell itself, then find all buttons for that cell, then let's say we want to hide them. You'd do that like so:
find(sender).closest(UITableViewCell).find(UIButton).hide

Children of selected view, views, or root_view
find.children # All children (but not grandchildren) of root_view
find(:section).children # All children of any view with the tag or stylename of :section
You can also add selectors:
find(:section).children(UILabel) # All children (that are of type UILabel) of any view with the tag or stylename of :section

Parent or parents of selected view(s)
find(my_view).parent # superview of my_view
find(my_view).parents # superview of my_view, plus any grandparents, great-grandparents, etc
find(UIButton).parent # all parents of all buttons

Siblings
Find all your siblings:
find(my_view).siblings # All children of my_view's parent, minus my_view
Get the sibling right next to the view, below the view:
find(my_view).next
Get the sibling right next to the view, above the view:
find(my_view).prev

And, not, back, and self
These four could be thought of as Selectors, not Traversing. They kind of go in both, anywho.
By default selectors are an OR, not an AND. This will return any UILabels and anything with text == '':
find(UILabel, text: "") # This is an OR
So if you want to do an AND, do this:
find(UILabel).and(text: "")
Not works the same way:
find(UILabel).not(text: "")
Back is interesting: it moves you back up the chain one. In this example, we find all images that are inside test_view, then tag them as :foo. Now we want to find all labels in test_view and tag them :bar. So after the first tag, we go back up the chain to test_view, find the labels, then tag them :bar:
find(test_view).find(UIImageView).tag(:foo).back.find(UILabel).tag(:bar)

Filter
Filter is what everything else uses (parents, children, find, etc); you typically don't use it yourself.
http://docs.redpotion.org/en/latest/cookbook/traversing/
2021-05-05T23:52:00
CC-MAIN-2021-21
1620243988724.75
[]
docs.redpotion.org