Portworx with DC/OS Secrets
Portworx can integrate with DC/OS Secrets to store your encryption keys/secrets and credentials. This guide will help you configure Portworx to connect to DC/OS Secrets. DC/OS Secrets can then be used to store Portworx secrets for Volume Encryption and Cloud Credentials.
Secrets is a DC/OS Enterprise-only feature.
Supported from PX Enterprise 1.4 onwards.
If you want only the Portworx framework to access the username and password secrets path, the path should have the same prefix as the Portworx service name (the default service name is
portworx).
Update config.json for existing installation
If the Portworx framework is already installed, you will need to update the
/etc/pwx/config.json on all nodes to start using DC/OS secrets by default. You still need to edit the framework from the above section, so that you don’t have to update the config.json for new nodes.
Add the following
secret_type and
cluster_secret_key fields in the
secret section to the
/etc/pwx/config.json on each node in the cluster:
{
  "clusterid": "",
  "secret": {
    "secret_type": "dcos",
    "cluster_secret_key": "pwx/secrets/cluster-wide-secret-key"
  },
  ...
}
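If you manage many nodes, the edit above can be scripted. The following Python sketch is illustrative only and is not part of the Portworx tooling; it adds the secret section to a parsed copy of the config, which you could then write back to /etc/pwx/config.json on each node:

```python
import json

def add_dcos_secret(config):
    """Return a copy of a parsed config.json dict with the DC/OS secret section added."""
    updated = dict(config)
    updated["secret"] = {
        "secret_type": "dcos",
        "cluster_secret_key": "pwx/secrets/cluster-wide-secret-key",
    }
    return updated

# Hypothetical minimal config for demonstration.
sample = {"clusterid": "my-cluster"}
print(json.dumps(add_dcos_secret(sample), indent=2))
```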
You need to restart Portworx for the config.json to take effect:
sudo systemctl restart portworx
Authenticate with DC/OS Secrets using the Portworx CLI by running the following command:
/opt/pwx/bin/pxctl secrets dcos login \
  --username <dcos-username> \
  --password <dcos-password> \
  --base-path <optional-base-path>
Successfully authenticated with DC/OS Secrets.
** WARNING, this is probably not what you want to do. This login will not be persisted across PX or node reboots and also expire in 5 days. Please provide your login information through package config or refer docs.portworx.com for more information.
You need to run this command on all Portworx nodes so that you can create and mount encrypted volumes on all nodes.
If the CLI is used to authenticate with DC/OS Secrets, then after every restart of the Portworx container it must be re-authenticated with DC/OS Secrets by running the login command.
This topic describes the Desktone_HypervisorManagerStatus CIM provider.
Description
The HypervisorManagerStatus provider is derived from CIM_LogicalElement and provides information about the status of Hypervisor Managers in the DaaS platform. The Hypervisor Manager is a DaaS entity that manages hypervisor hosts. This provider runs on service provider appliances only.
Properties
CSCreationClassName [key]: Name of the class used to create the database instance.
SystemName [key]: Name of the system on which the provider instance is running. Set to host name in our case.
CreationClassName [key]: Name of the class used to create the provider instance.
HostAddress [key]: Describes the hypervisor manager host address and version. It is the address of the vCenter or ESX host.
Type: Describes the type of hypervisor manager (vCenter or ESX) and its product version. Example: "ESX, 5.1.0"
CommunicationStatus [derived]: Indicates the ability of the DaaS Hypervisor Manager to communicate with the hypervisor host. 2 – OK, 4 – Lost Communication
OperationalStatus [derived]: Indicates the current status of the DaaS Hypervisor Manager in the DaaS platform. 2 – OK, 13 – Lost Communication
Status [derived, deprecated]: Indicates the current status of the DaaS Hypervisor Manager in the DaaS platform (OK, Lost Comm)
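The numeric status codes listed above can be decoded with a small helper. This Python sketch is illustrative only and is not part of the provider; the code-to-string maps are taken from the property descriptions above:

```python
# Status code maps from the property descriptions above.
COMMUNICATION_STATUS = {2: "OK", 4: "Lost Communication"}
OPERATIONAL_STATUS = {2: "OK", 13: "Lost Communication"}

def decode_status(communication_status, operational_status):
    """Return a readable summary of a Desktone_HypervisorManagerStatus instance."""
    comm = COMMUNICATION_STATUS.get(communication_status, "Unknown")
    oper = OPERATIONAL_STATUS.get(operational_status, "Unknown")
    return "communication=%s, operational=%s" % (comm, oper)

print(decode_status(2, 2))  # → communication=OK, operational=OK
```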
Mitigation
Make sure that discovered host is assigned to resource manager.
Make sure that Hypervisor host is running and reachable from service provider appliance.
Verify whether there are any API compatibility errors in the service provider or resource manager desktone logs.
Check the required communication ports are open between DaaS appliances and hypervisor hosts. | https://docs.vmware.com/en/VMware-Horizon-DaaS/services/horizondaas.spmanual800/GUID-F1A61675-8F54-4E4F-AB14-863B17F4B545.html | 2018-12-09T23:23:49 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.vmware.com |
For this example, we’ll use 2 domains:
The FTP deployment method is quite simple in the options you need to fill out. Below, we see an example of what your settings might look like and we’ll go through what each option does:
Beware – be careful not to overwrite your WordPress site with your static site. This can be a pain to cleanup and may even cause your site to be inaccessible until cleaned. The main thing to avoid is setting your Base Url to the same address as your WordPress site. If it’s the same domain, but a different subdomain or subdirectory, that’s fine, though.
Set your Base Url to what we defined at the top of this page as or your equivalent.
Set this to what you would use in what’s usually called the host or server address field in your FTP program. Often, it’s the same as your domain, or with the ftp subdomain in front of it. Other times, especially if you’re setting up a new site and don’t have your domain yet, it will be an IP address.
Again, the same username as what you would use to connect via FileZilla, Cyberduck or whatever app you like to use for FTP.
As above. This will not be shown as you enter it, so if you have a tricky password that you need to enter manually, you can type it somewhere else that you can see it, such as the browser’s address bar and then copy and paste it in. Be careful not to copy any space characters at the start or end of your password as this may cause them to be interpreted as part of your actual password characters.
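Since stray spaces around a pasted password are such a common failure, here is a tiny Python sketch (not part of WP2Static) showing the kind of cleanup worth doing before saving credentials anywhere:

```python
def clean_password(raw):
    """Strip accidental leading/trailing whitespace picked up while copy-pasting."""
    return raw.strip()

print(clean_password("  hunter2 "))  # → hunter2
```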
This may not be intuitive as to what you need to set here, so we advise doing a test upload first.
Getting started with Navigator¶
Anaconda Navigator is a graphical user interface to the conda package and environment manager.
This 10-minute guide to Navigator will have you navigating the powerful conda program in a web-like interface without having to learn command line commands.
SEE ALSO: Getting started with conda to learn how to use conda. Compare the Getting started guides for each to see which program you prefer.
Before you start¶
You should have already installed Anaconda.
Contents¶
- Starting Navigator on Windows, macOS or Linux. 1 MINUTE
- Managing Navigator. Verify that Anaconda is installed and check that Navigator is updated to the current version. 1 MINUTE
- Managing environments. Create environments and move easily between them. 3 MINUTES
- Managing Python. Create an environment that has a different version of Python. 2 MINUTES
- Managing packages. Find packages available for you to install. Install packages. 3 MINUTES
TOTAL TIME: 10 MINUTES
Starting Navigator¶
Windows
- From the Start menu, click the Anaconda Navigator desktop app.
- Or from the Start menu, search for and open “Anaconda Prompt” and type the command
anaconda-navigator.
MacOS
- Open Launchpad, then click the Anaconda-Navigator icon.
- Or open Launchpad and click the Terminal icon. Then in Terminal, type
anaconda-navigator.
Linux
- Open a Terminal window and type
anaconda-navigator.
Managing Navigator¶
Verify that Anaconda is installed and running on your system.
- When Navigator starts up, it verifies that Anaconda is installed.
- If Navigator does not start up, go back to Anaconda installation and make sure you followed all the steps.
Check that Navigator is updated to the current version.
Click the “Yes” button to update Navigator to the current version.
TIP: We recommend that you always keep Navigator updated to the latest version.
Managing Environments¶
Navigator uses conda to create separate environments containing files, packages and their dependencies that will not interact with other environments.
Create a new environment named
snowflakes and install a package in it:
In Navigator, click the Environments tab, then click the Create button.
The Create new environment dialog box appears.
In the Environment name field, type a descriptive name for your environment:
Click Create. Navigator creates the new environment and activates it:
Now you have two environments, the default environment
base (root), and
snowflakes.
Switch between them (activate and deactivate environments) by clicking the name of the environment you want to use.
TIP: The active environment is the one with the arrow next to its name.
Return to the other environment by clicking its name.
Managing Python¶
When you create a new environment, Navigator installs the same Python version you used when you downloaded and installed Anaconda. If you want to use a different version of Python, for example Python 3.5, simply create a new environment and specify the version of Python that you want in that environment.
Create a new environment named “snakes” that contains Python 3.5:
In Navigator, click the Environments tab, then click the Create button.
The Create new environment dialog box appears.
In the Environment name field, type the descriptive name “snakes” and select the version of Python you want to use from the Python Packages box (3.6, 3.5 or 2.7). Select a different version of Python than is in your other environments, base or snowflakes.
Click the Create button.
Activate the version of Python you want to use by clicking the name of that environment.
Managing packages¶
In this section, you check which packages you have installed, check which are available and look for a specific package and install it.
To find a package you have already installed, click the name of the environment you want to search. The installed packages are displayed in the right pane.
You can change the selection of packages displayed in the right pane at any time by clicking the drop-down box above it and selecting Installed, Not Installed, Updateable, Selected, or All.
Check to see if a package you have not installed named “beautifulsoup4” is available from the Anaconda repository (must be connected to the Internet):
On the Environments tab, in the Search Packages box, type
beautifulsoup4,
and from the Search Subset box select All or Not Installed.
To install the package into the current environment:
Check the checkbox next to the package name, then click the bottom Apply button.
The newly installed program is displayed in your list of installed programs. | https://docs.anaconda.com/anaconda/navigator/getting-started/ | 2018-12-10T01:03:07 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.anaconda.com |
After reports are generated, you can email them as attachments from the License Report tab.
You can append a prefix to the zip file name, so that:
- the license compliance team can search for license report zip files received in the BMC mailbox that include the zip file prefixes provided by customers.
- customers can easily find the license report zip file in their sent mailbox.
Zip file names are formatted as follows: Prefix_TimestampDirName_LUCU.zip. The zip file contains the report in CSV and the native file format per product, such as XML or HTML. The emailed reports are also stored in the user\EmailReports folder where the License Utility is installed.
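The naming convention can be sketched as a small Python helper. This is illustrative only; the docs do not specify the exact timestamp format, so the one used here is an assumption:

```python
from datetime import datetime

def report_zip_name(prefix, dir_name, now):
    """Build a zip name in the documented Prefix_TimestampDirName_LUCU.zip shape.

    The timestamp format (%Y%m%d%H%M%S) is an assumption for illustration.
    """
    timestamp = now.strftime("%Y%m%d%H%M%S")
    return "%s_%s%s_LUCU.zip" % (prefix, timestamp, dir_name)

print(report_zip_name("Acme", "Reports", datetime(2020, 1, 2, 3, 4, 5)))
# → Acme_20200102030405Reports_LUCU.zip
```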
Prerequisites
An email client must be configured on computers where the License Utility is installed in order to use this feature. For supported email clients, see the Email client section in the System requirements topic.
To email reports:
- On the License Report tab, click the checkbox next to the report you want to email.
You can select multiple reports at once.
- To append a prefix to the file name, in the Email Attachment File Prefix text box, enter a prefix, for example: YourOrganizationName.
The prefix can contain up to 15 characters except the following: \ / : * ? " < > | and .
- Click Email Reports.
Your email client opens with a zip file of the report and log attachment, including the report name you specified along with a date/time stamp in the following format:
- Report file: Prefix_TimestampDirName_LUCU.zip
- Log file: Prefix_Log_TimestampDirName_LUCU.zip
Note
- If an email client does not exist on the computer or the email client is not configured, the following message appears: "Error occurred while launching mail client. One or more unspecified errors occurred."
- When the license utility is running with "Run as administrator" and Outlook is opened by a non-administrator user, the following error message appears after clicking Email Reports: "Error occurred while launching mail client. One or more unspecified errors occurred."
- Workaround:
- Close Outlook.
- Click Email Reports.
By default, the following appears in the email message:
- To: [email protected]
- You can change the email recipient in your email client.
The following can be changed in the email message, but not in the License Utility application:
- Subject: BMC License Utility Reports.
- Message text: Please send this email to BMC Software. | https://docs.bmc.com/docs/lucu47/en/emailing-reports-824240638.html | 2018-12-10T01:14:27 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.bmc.com |
Enabling email notification of reports with SMTP
The License Utility can be configured to automatically send license usage reports via email. To use this feature, make sure you have scheduled the license reports, then configure the connection with the SMTP server, as described in this topic.
Note
BMC recommends using a professional SMTP service, since using a free email provider does not ensure the correct delivery of your messages (especially if you want to send to a large number of recipients).
Before you begin
Before you establish a connection with the SMTP server, you must know the following information:
- Name of the SMTP server that will send emails
Port that the specified SMTP server will use
- Email addresses that the License Utility will use to send and receive emails
- Security level for the connection
To configure a connection with the SMTP server
- Select the SMTP Configuration tab.
- In the Host Name box, enter the SMTP server name.
In the Port box, enter the port number that the SMTP server will use.
Port 25 is the SMTP standard TCP port and is also the default port.
Tip
If the default port does not work, locate the non-default SMTP port on the server that is running your email application.
- In the From box, enter the email address of the sender.
This will populate the From field in the License Utility email messages.
- In the To box, enter the email address of the recipient.
- To send emails to multiple recipients, separate the email addresses with a semicolon.
This email address will receive emails generated by the License Utility.
- If your SMTP server requires authentication credentials:
(Generally, non-default ports and free email providers require these authentication details.)
- Select the Authentication Required check box.
Enter the User Name and Password in the boxes.
For Yahoo, Hotmail, and Gmail, enter your email address in the User Name box and your email password in the Password box.
- If SMTP TLS/SSL is required, enter the following in the Properties section:
- In the Name box, enter the property name, for example: mail.smtp.ssl.enable.
- In the Value box, enter the property value, for example: True.
- To add more properties fields, click Add Property.
- When you are finished, click Save.
To send a test email:
- Click Send Test Email.
- On the Email successfully sent message, click OK. | https://docs.bmc.com/docs/lucu47/en/enabling-email-notification-of-reports-with-smtp-824240644.html | 2018-12-10T01:16:21 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.bmc.com |
On July 16, 2011, ClassiPress 3.1.1 was released. This was a minor maintenance release to fix a couple of bugs that slipped through the 3.1 release. All customers should upgrade. Fixed 3 tickets total. A breakdown of tickets can be found below.
Upgrade Information
To download v3.1.1, visit AppThemes and login to your customer account. Existing customers can download the patch or the full version.
Fixes
- fixed issue on theme-comments template where $ was accidentally added to a function
- fixed issue where spaces in refine search didn’t work correctly
- fixed error in refine search when selecting a city caused an implode error
Changes
- none | https://docs.appthemes.com/classipress/classipress-version-3-1-1/ | 2018-06-18T04:13:34 | CC-MAIN-2018-26 | 1529267860041.64 | [] | docs.appthemes.com |
django-scribbler requires Django 1.8, 1.10, or 1.11, and Python 2.7 or >= 3.4.
To install from PyPi:
pip install django-scribbler
Note
If you need to run an unreleased version from the repository, see the Contributing Guide for additional instructions.
Official Documentation¶
What is Viper?¶
- update.py from 1.1 to 1.2 IOError ‘data/web/’
- PreprocessError: data/yara/index.yara:0:Invalid file extension ‘.yara’.Can only include .yar
- Error Messages in log: ssl.SSLEOFError: EOF occurred in violation of protocol
- Final Remarks | http://viper-framework.readthedocs.io/en/latest/index.html | 2018-06-18T03:35:37 | CC-MAIN-2018-26 | 1529267860041.64 | [array(['_images/viper.png', '_images/viper.png'], dtype=object)] | viper-framework.readthedocs.io |
[ aws . apigateway ]
Gets a specified VPC link under the caller's account in a region.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
get-vpc-link
--vpc-link-id <value>
[--cli-input-json <value>]
[--generate-cli-skeleton <value>]
--vpc-link-id (string)
[Required] The identifier of the VpcLink. It is used in an Integration to reference this VpcLink.
tags -> (map)
The collection of tags. Each tag element is associated with a given resource.
key -> (string)
value -> (string) | https://docs.aws.amazon.com/cli/latest/reference/apigateway/get-vpc-link.html | 2019-12-05T17:13:41 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.aws.amazon.com |
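For illustration, the tags field of a response is a plain string map. The response below is hypothetical (only the tags shape comes from the documented output above):

```python
# Hypothetical get-vpc-link response; only the "tags" shape is from the docs above.
response = {
    "id": "abcdef",
    "name": "my-vpc-link",
    "tags": {"team": "networking", "env": "prod"},
}

def format_tags(response):
    """Render the tags map as key=value strings, sorted by key."""
    return ["%s=%s" % (k, v) for k, v in sorted(response.get("tags", {}).items())]

print(format_tags(response))  # → ['env=prod', 'team=networking']
```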
Event history
This tutorial shows you how to access and utilize the event history in PlayFab.
Access event history
- Open the PlayFab Game Manager, and select Analytics from the navigation sidebar.
- Locate and select the Event History tab.
Event History page overview
- The Event Data Retention area shows the time interval for the events. For example, when it says 7 Days, only events that happened a week ago or later will be queried.
- The Event Search Query panel allows the changing of graph behavior, and the filtering of event flow by different event properties.
- The Event History Chart panel displays a chart that shows the number and types of events happening in your title during the specified time interval.
- The Events Timeline panel is a list of events data sorted by time (starting with the most recent).
Search and inspect events
In this section we have the following goals:
- Sign the player in and produce a player_logged_in event.
- Use the Event Search Query Panel to find this event using search query.
- Inspect this event using the Events Timeline Panel.
- Observe how this event effects the Event History Chart.
Demonstration
We are going to use the LoginWithCustomID method to sign the player in and produce a player_logged_in event.
- Execute the API call shown below.
PlayFabClientAPI.LoginWithCustomID(
    new LoginWithCustomIDRequest()
    {
        CreateAccount = true,
        CustomId = "12345QWERY"
    },
    result => Debug.Log("Logged in"),
    error => Debug.LogError(error.GenerateErrorReport()));
- If no player is registered with the CustomId value of 12345QWERY, the player will be created, thanks to the second parameter that we passed (see below).
CreateAccount = true
We now have to locate the event.
- The easiest way to do this is by means of the Event Search Query panel. We know the CustomId value, so we can use it as a search query.
- Finally, you may analyze how this event effects the overall event flow using the Event History Chart.
The graph shows the player_logged_in event being a part of several events (that match the current query) produced on May 5th.
How to inspect player events
It is possible to access the event history for a specific player (see below).
- Use the Game Manager, and navigate to Players in the menu to the left.
- Select the Players tab.
- Locate the player you want to inspect, and select the Player ID label.
- Select Event History from the toolbar.
You will be presented with an event history page where only events related to the inspected player are shown.
Feedback | https://docs.microsoft.com/en-us/gaming/playfab/features/analytics/metrics/event-history | 2019-12-05T17:19:23 | CC-MAIN-2019-51 | 1575540481281.1 | [array(['media/tutorials/game-manager-access-event-history.png',
'Game Manager - Access Event History'], dtype=object)
array(['media/tutorials/game-manager-event-history-page-overview.png',
'Game Manager - Event History Page Overview'], dtype=object)
array(['media/tutorials/game-manager-event-history-chart-search-query.png',
'Game Manager - Event History Chart - search query'], dtype=object)
array(['media/tutorials/game-manager-event-history-timestamp.png',
'Game Manager - Event History - Timestamp'], dtype=object)
array(['media/tutorials/game-manager-event-history-event-graph.png',
'Game Manager - Event History Chart - event graph'], dtype=object)
array(['media/tutorials/game-manager-inspect-player-event.png',
'Game Manager - inspect a Player Event'], dtype=object) ] | docs.microsoft.com |
CSS Color Names
In this chapter we will speak about colors.
First, let's look at the three basic colors of HTML. Usually, colors are displayed by combining RED, GREEN, and BLUE.
CSS colors are defined using a hexadecimal (hex) notation for the combination of Red, Green, and Blue color values (RGB). The lowest value which can be given to one of the light sources is 0 (hex 00). The highest value is 255 (hex FF).
Hex values are written as 6 digit numbers, starting with a # sign.
Below you will see the HEX values of web colors. You can also write them in lowercase letters; it's the same.
For example, white is #FFFFFF or #ffffff.
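For example, the three colors above can be used directly in a stylesheet (the element choices here are just for illustration):

```css
/* Each pair of hex digits is one channel, in RR GG BB order. */
h1 { color: #ff0000; }  /* red:   R=255, G=0,   B=0   */
h2 { color: #00ff00; }  /* green: R=0,   G=255, B=0   */
h3 { color: #0000ff; }  /* blue:  R=0,   G=0,   B=255 */
```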
Configuring FRRouting
This section discusses FRRouting configuration.
Configure FRRouting
FRRouting does not start by default in Cumulus Linux. Before you run FRRouting, make sure you have enabled the relevant daemons that you intend to use (
bgpd,
ospfd,
ospf6d or
pimd) in the
/etc/frr/daemons file.
Cumulus Networks has not tested RIP, RIPv6, IS-IS and Babel.
The
zebra daemon is enabled by default. You can enable the other daemons according to how you plan to route your network.
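Enabling a daemon is a one-line edit. The following self-contained shell sketch works on a sample copy of the file; to apply it for real, run the same sed (with sudo) against /etc/frr/daemons:

```shell
# Work on a sample copy; the real file is /etc/frr/daemons.
cat > daemons.sample <<'EOF'
zebra=yes
bgpd=no
ospfd=no
EOF
# Flip bgpd from "no" to "yes".
sed -i 's/^bgpd=no/bgpd=yes/' daemons.sample
grep '^bgpd=' daemons.sample   # → bgpd=yes
```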
Before you start FRRouting, make sure the daemons you plan to use are enabled.
After you enable the appropriate daemons, enable and start the FRRouting service:
cumulus@switch:~$ sudo systemctl enable frr.service cumulus@switch:~$ sudo systemctl start frr.service
All the routing protocol daemons (
bgpd,
ospfd,
ospf6d,
ripd,
ripngd,
isisd and
pimd) are dependent on
zebra. When you start FRRouting,
systemd determines whether zebra is running; if zebra is not running,
systemd starts
zebra, then starts the dependent service, such as
bgpd.
In general, if you restart a service, its dependent services are also restarted. For example, running
systemctl restart frr.service restarts any of the routing protocol daemons that are enabled and running.
For more information on the
systemctl command and changing the state of daemons, read Services and Daemons in Cumulus Linux.
Integrated Configurations
By default in Cumulus Linux, FRRouting saves all daemon configurations in a single integrated configuration file,
frr.conf.
You can disable this mode by running the following command in the
vtysh FRRouting CLI:
cumulus@switch:~$ sudo vtysh switch# configure terminal switch(config)# no service integrated-vtysh-config
To reenable integrated configuration file mode, run:
switch(config)# service integrated-vtysh-config
If you disable integrated configuration mode, FRRouting saves each daemon-specific configuration file in a separate file. At a minimum for a daemon to start, that daemon must be enabled and its daemon-specific configuration file must be present, even if that file is empty.
To save the current configuration:
switch# write memory Building Configuration... Integrated configuration saved to /etc/frr/frr.conf [OK] switch# exit cumulus@switch:~$
You can use
write file instead of
write memory.
When integrated configuration mode is disabled, the output looks like this:
switch# write memory Building Configuration... Configuration saved to /etc/frr/zebra.conf Configuration saved to /etc/frr/bgpd.conf [OK]
Restore the Default Configuration
If you need to restore the FRRouting configuration to the default running configuration, delete the
frr.conf file and restart the
frr service.
Back up
frr.conf (or any configuration files you want to remove) before proceeding.
Confirm that
service integrated-vtysh-config is enabled:
cumulus@switch:~$ net show configuration | grep integrated
service integrated-vtysh-config
Remove
/etc/frr/frr.conf:
cumulus@switch:~$ sudo rm /etc/frr/frr.conf
If integrated configuration file mode is disabled, remove all the configuration files (such as
zebra.conf or
ospf6d.conf) instead of
frr.conf.
Restart FRRouting:
cumulus@switch:~$ sudo systemctl restart frr.service
Interface IP Addresses and VRFs
FRRouting inherits the IP addresses and any associated routing tables for the network interfaces from the
/etc/network/interfaces file. This is the recommended way to define the addresses; do not create interfaces using FRRouting. For more information, see Configuring IP Addresses and Virtual Routing and Forwarding - VRF.
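For example, a minimal stanza in /etc/network/interfaces might look like the following (a sketch using standard ifupdown2 syntax; the interface name and address are placeholders):

```
auto swp1
iface swp1
    address 10.0.0.1/31
```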
FRRouting vtysh Modal CLI
FRRouting provides a modal CLI. When the interface-specific commands are invoked, the prompt changes to:
switch(config)# interface swp1 switch(config-if)#
When the routing protocol specific commands are invoked, the prompt changes to:
Displaying state can be done from any mode of
vtysh.
Notice that the commands also take a partial command name (for example,
sh ip route) as long as the partial command name is not aliased:
cumulus@switch:~$ sudo vtysh -c 'sh ip r' % Ambiguous command.
To disable a command or feature in FRRouting, prepend the command with the no keyword. To view the current configuration, run:
switch# show running-config
Building configuration...

Current configuration:
!
username cumulus nopassword
!
service integrated-vtysh-config
!
vrf mgmt
!
interface lo
 link-detect
!
interface swp1
 ipv6 nd ra-interval 10
 link-detect
!
interface swp2
 ipv6 nd ra-interval 10
 link-detect
!
interface swp3
 ipv6 nd ra-interval 10
 link-detect
!
interface swp4
 ipv6 nd ra-interval 10
 link-detect
!
interface swp29
 ipv6 nd ra-interval 10
 link-detect
!
interface swp30
 ipv6 nd ra-interval 10
 link-detect
!
interface swp31
 link-detect
!
interface swp32
 link-detect
!
interface vagrant
 link-detect
!
interface eth0 vrf mgmt
 ipv6 nd suppress-ra
 link-detect
!
interface mgmt vrf mgmt
 link-detect
!
router bgp 65020
 bgp router-id 10.0.0.21
 bgp bestpath as-path multipath-relax
 bgp bestpath compare-routerid
 neighbor fabric peer-group
 neighbor fabric remote-as external
 neighbor fabric description Internal Fabric Network
 neighbor fabric capability extended-nexthop
 neighbor swp1 interface peer-group fabric
 neighbor swp2 interface peer-group fabric
 neighbor swp3 interface peer-group fabric
 neighbor swp4 interface peer-group fabric
 neighbor swp29 interface peer-group fabric
 neighbor swp30 interface peer-group fabric
 !
 address-family ipv4 unicast
  network 10.0.0.21/32
  neighbor fabric activate
  neighbor fabric prefix-list dc-spine in
  neighbor fabric prefix-list dc-spine out
 exit-address-family
!
ip prefix-list dc-spine seq 10 permit 0.0.0.0/0
ip prefix-list dc-spine seq 20 permit 10.0.0.0/24 le 32
ip prefix-list dc-spine seq 30 permit 172.16.1.0/24
ip prefix-list dc-spine seq 40 permit 172.16.2.0/24
ip prefix-list dc-spine seq 50 permit 172.16.3.0/24
ip prefix-list dc-spine seq 60 permit 172.16.4.0/24
ip prefix-list dc-spine seq 500 deny any
!
ip forwarding
ipv6 forwarding
!
line vty
!
end
If you try to configure a routing protocol that has not been started,
vtysh silently ignores those commands.
If you do not want to use a modal CLI to configure FRRouting, you can use a suite of Cumulus Linux-specific commands instead.
Reload the FRRouting Configuration
If you make a change to your routing configuration, you need to reload FRRouting so your changes take place. FRRouting reload enables you to apply only the modifications you make to your FRRouting configuration, synchronizing its running state with the configuration in
/etc/frr/frr.conf. This is useful for optimizing FRRouting automation in your environment or to apply changes made at runtime.
FRRouting reload only applies to an integrated service configuration, where your FRRouting configuration is stored in a single
frr.conf file instead of one configuration file per FRRouting daemon (like
zebra or
bgpd).
To reload your FRRouting configuration after you modify
/etc/frr/frr.conf, run:
cumulus@switch:~$ sudo systemctl reload frr.service
Examine the running configuration and verify that it matches the configuration in
/etc/frr/frr.conf:
cumulus@switch:~$ net show configuration
If the running configuration is not what you expect, submit a support request and supply the following information:
- The current running configuration (run
net show configurationand output the contents to a file)
- The contents of
/etc/frr/frr.conf
- The contents of
/var/log/frr/frr-reload.log
FRR Logging
By default, Cumulus Linux configures FRR with syslog severity level 6 (informational). Log output is sent to
/var/log/frr/frr.log. However, when you manually define a log target with the
log file /var/log/frr/debug.log command, FRR automatically defaults to severity 7 (debug) logging and the output is logged to
/var/log/frr/debug.log.
Caveats
Obfuscated Passwords
In FRRouting, Cumulus Linux stores obfuscated passwords for BGP and OSPF (ISIS, OSPF area, and BGP neighbor passwords). All passwords in configuration files and those displayed in
show commands are obfuscated. The obfuscation algorithm protects passwords from casual viewing. The system can retrieve the original password when needed.
Duplicate Hostnames
If you change the hostname, either with NCLU or with the
hostname command in
vtysh, the switch can have two hostnames in the FRR configuration. For example:
Spine01# configure terminal
Spine01(config)# hostname Spine01-1
Spine01-1(config)# do sh run
Building configuration...

Current configuration:
!
frr version 4.0+cl3u1
frr defaults datacenter
hostname Spine01
hostname Spine01-1
...
Accidentally configuring the same numbered BGP neighbor using both the
neighbor x.x.x.x and
neighbor swp# interface commands results in two neighbor entries being present for the same IP address in the configuration and operationally. To correct this issue, update the configuration and restart the FRR service. | https://docs.cumulusnetworks.com/cumulus-linux/Layer-3/Configuring-FRRouting/ | 2019-12-05T17:21:48 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.cumulusnetworks.com |
Assembly
Record of Committee Proceedings
Committee on Transportation
Assembly Bill 505
Relating to: special registration plates associated with Whitetails Unlimited and making an appropriation.
By Representatives Kitchens, Tittl, Novak, A. Ott, Milroy, Horlacher, Kleefisch, Skowronski, R. Brooks and Wachs; cosponsored by Senator Lasee.
November 13, 2015 Referred to Committee on Transportation
April 07, 2016 Failed to pass pursuant to Senate Joint Resolution 1
______________________________
Elisabeth Portz
Committee Clerk | https://docs.legis.wisconsin.gov/2015/related/records/assembly/transportation/1237016 | 2019-12-05T16:49:23 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.legis.wisconsin.gov |
Module glutin::dpi
DPI is important, so read the docs for this module if you don't want to be confused.
Originally,
winit dealt entirely in physical pixels (excluding unintentional inconsistencies), but now all
window-related functions both produce and consume logical pixels. Monitor-related functions still use physical
pixels, as do any context-related functions in
glutin.
If you've never heard of these terms before, then you're not alone, and this documentation will explain the concepts.
Modern screens have a defined physical resolution, most commonly 1920x1080. Independent of that is the amount of space the screen occupies, which is to say, the height and width in millimeters. The relationship between these two measurements is the pixel density. Mobile screens require a high pixel density, as they're held close to the eyes. Larger displays also require a higher pixel density, hence the growing presence of 1440p and 4K displays.
So, this presents a problem. Let's say we want to render a square 100px button. It will occupy 100x100 of the screen's pixels, which in many cases, seems perfectly fine. However, because this size doesn't account for the screen's dimensions or pixel density, the button's size can vary quite a bit. On a 4K display, it would be unusably small.
That's a description of what happens when the button is 100x100 physical pixels. Instead, let's try using 100x100 logical pixels. To map logical pixels to physical pixels, we simply multiply by the DPI (dots per inch) factor. On a "typical" desktop display, the DPI factor will be 1.0, so 100x100 logical pixels equates to 100x100 physical pixels. However, a 1440p display may have a DPI factor of 1.25, so the button is rendered as 125x125 physical pixels. Ideally, the button now has approximately the same perceived size across varying displays.
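The logical-to-physical mapping described above is just a multiplication by the DPI factor. As a quick illustration of the arithmetic (a language-neutral sketch, not winit's actual Rust API; how fractional results are rounded is up to the implementation):

```python
def logical_to_physical(logical_width, logical_height, dpi_factor):
    """Scale a logical size by the DPI factor to get physical pixels."""
    return round(logical_width * dpi_factor), round(logical_height * dpi_factor)

# The 100x100 logical button from above, on displays with different DPI factors:
print(logical_to_physical(100, 100, 1.0))   # typical desktop -> (100, 100)
print(logical_to_physical(100, 100, 1.25))  # a 1440p display -> (125, 125)
print(logical_to_physical(100, 100, 2.0))   # a retina display -> (200, 200)
```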
Failure to account for the DPI factor can create a badly degraded user experience. Most notably, it can make users feel like they have bad eyesight, which will potentially cause them to think about growing elderly, resulting in them entering an existential panic. Once users enter that state, they will no longer be focused on your application.
There are two ways to get the DPI factor:
- You can track the
HiDpiFactorChanged event of your windows. This event is sent any time the DPI factor changes, either because the window moved to another monitor, or because the user changed the configuration of their screen.
- You can also retrieve the DPI factor of a monitor by calling
MonitorHandle::hidpi_factor, or the current DPI factor applied to a window by calling
Window::hidpi_factor, which is roughly equivalent to
window.current_monitor().hidpi_factor().
Depending on the platform, the window's actual DPI factor may only be known after
the event loop has started and your window has been drawn once. To properly handle these cases,
the most robust way is to monitor the
HiDpiFactorChanged
event and dynamically adapt your drawing logic to follow the DPI factor.
Here's an overview of what sort of DPI factors you can expect, and where they come from:
- Windows: On Windows 8 and 10, per-monitor scaling is readily configured by users from the display settings. While users are free to select any option they want, they're only given a selection of "nice" DPI factors, i.e. 1.0, 1.25, 1.5... On Windows 7, the DPI factor is global, and changing it requires logging out.
- macOS: The buzzword is "retina displays", which have a DPI factor of 2.0. Otherwise, the DPI factor is 1.0. Intermediate DPI factors are never used, thus 1440p displays/etc. aren't properly supported. It's possible for any display to use that 2.0 DPI factor, given the use of the command line.
- X11: On X11, we calculate the DPI factor based on the millimeter dimensions provided by XRandR. This can result in a wide range of possible values, including some interesting ones like 1.0833333333333333. This can be overridden using the
WINIT_HIDPI_FACTOR environment variable, though that's not recommended.
- Wayland: On Wayland, DPI factors are set per-screen by the server, and are always integers (most often 1 or 2).
- iOS: DPI factors are both constant and device-specific on iOS.
- Android: This feature isn't yet implemented on Android, so the DPI factor will always be returned as 1.0.
- Web: DPI factors are handled by the browser and will always be 1.0 for your application.
The window's logical size is conserved across DPI changes, resulting in the physical size changing instead. This
may be surprising on X11, but is quite standard elsewhere. Physical size changes always produce a
Resized event, even on platforms where no resize actually occurs,
such as macOS and Wayland. As a result, it's not necessary to separately handle
HiDpiFactorChanged if you're only listening for size.
Your GPU has no awareness of the concept of logical pixels, and unless you like wasting pixel density, your framebuffer's size should be in physical pixels.
winit will send
Resized events whenever a window's logical size
changes, and
HiDpiFactorChanged events
whenever the DPI factor changes. Receiving either of these events means that the physical size of your window has
changed, and you should recompute it using the latest values you received for each. If the logical size and the
DPI factor change simultaneously,
winit will send both events together; thus, it's recommended to buffer
these events and process them at the end of the queue.
If you never received any
HiDpiFactorChanged events,
then your window's DPI factor is 1. | https://docs.rs/glutin/0.22.0-alpha5/glutin/dpi/index.html | 2019-12-05T17:07:52 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.rs |
Manage secrets using CyberArk Conjur
Your development organization may choose to integrate with an external secrets management tool to support the secure management of passwords, keys, certificates and other secrets. While you can choose to manage sensitive key/value pairs for environment-specific information using internal encrypted dictionaries, XL Deploy also supports integration with the CyberArk Conjur secrets management tool to manage and inject secrets into XL Deploy. The API-based integration with Conjur enables you to define, manage, and use Conjur as an external data source for secret storage. This API will support future integrations with other secrets management tools.
How it works
The integration with CyberArk Conjur is controlled by the connection to the Conjur server itself, which includes a Conjur policy and the specific list of keys (Variable IDs) that the user can access. For details, see Understanding Conjur policy.
The XL Deploy integration with Conjur is an XL Deploy plugin that you install that lets you configure external dictionaries that can be used with your environments. You can also define a Conjur-based lookup provider that can reference and resolve a key/value pair stored in a CyberArk Conjur policy. XL Deploy does not save or cache the key/value information stored in Conjur in the XL Deploy system.
Use external CyberArk Conjur-based dictionaries
Managing an external CyberArk Conjur dictionary is similar to how you currently manage internal dictionaries in XL Deploy. You can also resolve a value based on a lookup provider key that you specify. See Create an external lookup value provider for details.
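One plausible way to picture the resolution flow — ordinary values from an internal dictionary, with secrets deferred to the external store and never cached — is sketched below. The class and function names are invented for illustration and are not XL Deploy's or Conjur's real API:

```python
class ExternalLookupProvider:
    """Stand-in for a Conjur-backed lookup provider: each resolution goes
    out to the external store, and nothing is cached locally by design."""

    def __init__(self, fetch_secret):
        self._fetch_secret = fetch_secret  # callable: key -> secret value

    def resolve(self, key):
        return self._fetch_secret(key)  # no local copy is kept

def resolve_value(key, internal_dict, provider):
    """Plain key/value pairs come from the internal dictionary; anything
    else is deferred to the external secrets provider."""
    if key in internal_dict:
        return internal_dict[key]
    return provider.resolve(key)

# A fake in-memory "Conjur" standing in for the real external store:
fake_conjur = ExternalLookupProvider(lambda key: {"db/password": "s3cr3t"}[key])
print(resolve_value("db/host", {"db/host": "db1.example.com"}, fake_conjur))      # db1.example.com
print(resolve_value("db/password", {"db/host": "db1.example.com"}, fake_conjur))  # s3cr3t
```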
User access control
As with any security-related feature, controlling access to sensitive data needs to be managed as part of the integration. XL Deploy provides controls to limit access, ensuring that:
- Developers are authenticated and authorized to read secrets
- Role-based access to secrets is supported
- Policies are provided to control credentials and how they can be used
Install the plugin
To install the plugin:
- Download the XL Deploy CyberArk Conjur plugin from the distribution site.
- Place the plugin inside the
XL_DEPLOY_SERVER_HOME/plugins/ directory.
- Restart XL Deploy.
For additional details on installing or removing a plugin, see Install or remove XL Deploy plugins.
Create a CyberArk Conjur connection
Occasionally we hear from users who want to know more about the permissions we request during installation of our browser extension. Here’s a brief explanation of the two permissions we request and why we need them.
The “read and change” popup is standard across Chrome extensions and Amino does not capture or alter any personal or payment information from the websites you visit. The only website ‘change’ that is incurred while using Amino is that we inject your user style sheet into the page you’re styling. This is the primary function of our extension and could not function without this permission.
When you right click on an element and select “Copy This Selector” from the Amino context menu, we use a Chrome notification to inform you that the selector has been added to your clipboard. We felt that this was the cleanest and most intuitive way to notify you of this. These notifications are not used for promotional purposes.
Questions about privacy? Please see our Privacy Policy. | https://docs.aminoeditor.com/legal/permissions | 2019-12-05T18:33:01 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.aminoeditor.com |
Anycast Design Guide
Cumulus Networks Routing on the Host enables you to run OSPF or BGP directly on server hosts. This can enable a network architecture known as anycast, where many servers advertise the same IP address and traffic destined to that address is distributed across them. In Figure 2, two flows originate from a remote user destined to the anycast IP address. Each session has a different source port. Using the
cl-ecmpcalc command, you can see that the sessions were hashed. Every packet is handled individually through the routing table, saving memory and resources that would be required to track individual flows, similar to the functionality of a load balancing appliance.
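The per-flow behavior can be sketched as follows. This is only an illustration of the idea — real switches compute the hash in hardware with their own algorithms — and the addresses below are made up:

```python
import hashlib

def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, next_hops):
    """Choose an equal-cost next hop by hashing the flow's 5-tuple.

    Every packet of the same flow hashes identically, so a flow sticks to
    one server without any per-flow state being kept in the network.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return next_hops[int.from_bytes(digest[:4], "big") % len(next_hops)]

servers = ["server1", "server2", "server3", "server4"]
# Two sessions to the same anycast address, differing only in source port:
flow_a = ecmp_next_hop("203.0.113.10", "192.0.2.1", 40001, 443, "tcp", servers)
flow_b = ecmp_next_hop("203.0.113.10", "192.0.2.1", 40002, 443, "tcp", servers)
print(flow_a, flow_b)  # each flow deterministically maps to one server
```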
"deaf" Module
Description
This module adds user mode
d (deaf), which prevents users from receiving channel messages.
Configuration
To load this module use the following
<module> tag:
<module name="m_deaf.so">
<deaf>
The
<deaf> tag defines settings about how the deaf module should behave. This tag can only be defined once.
Example Usage
<deaf bypasschars="!." bypasscharsuline="!."> | https://docs.inspircd.org/2/modules/deaf/ | 2019-12-05T17:38:28 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.inspircd.org |
Released on:
Wednesday, October 8, 2014 - 00:01
Improvements
- Crash Reporting
- This release introduces crash reporting for mobile apps. Crash reporting supports capture and reporting of unhandled Java runtime exceptions.
- Crashes include interaction trails: a history of automatically instrumented actions that occurred during the app session leading up to the crashing event. No breadcrumbs needed.
- The SDK includes a
NewRelic.crashNow() method to trigger a test crash quickly and easily.
- You can also disable crash reporting via the runtime API.
- When building an app with Proguard enabled, the Proguard mapping file is sent to New Relic to automatically provide human readable crash reports in the UI.
- Session improvements
- The SDK now more consistently records MobileSession events in Insights when users use various app switchers on Android 4.x.
Fixes
- Improve reported traffic accuracy
- Corrects an issue where the SDK could continue to report an app as active when it had entered the background. This fix will reduce the traffic reported for an app to more accurately reflect actual usage.
- Improve thread naming accuracy
- Corrects an issue where the SDK reported some metrics with an incorrect display name. This fix will more accurately reflect thread breakdown data in interactions. | https://docs.newrelic.com/docs/release-notes/mobile-release-notes/android-release-notes/android-4870 | 2019-12-05T18:31:44 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.newrelic.com |
General code style guidelines
All code written in the languages described in Code categories should adhere to the following guidelines to facilitate collaboration and understanding.
Note: Uncertainties, unimplemented but known future action-items, and odd/specific constants should all be accompanied with a short comment to make others aware of the reasoning that went into the code.
Whitespaces
Do not use tabs for whitespace. Use 2 spaces per tab instead.
Naming Conventions
Self-documenting code reduces the need for extended code comments. It is encouraged to use names as long as necessary to describe what is occurring.
Functions and methods
Methods should be named as verbs (for example,
get or
set), while Objects/Classes should be nouns.
Objects and functions should be CamelCase. Methods on Objects should be dromedaryCase.
Variables
Constants should be CAPITALIZED_AND_UNDERSCORED for clarity, while variables can remain dromedaryCase.
Avoid non-descriptive variable names such as single letters (except for iteration in loops such as i or j) and variable names that have been arbitrarily shortened (Don't strip vowels; long variable names are OK). | https://docs.zowe.org/stable/contribute/guidelines-code/general.html | 2019-12-05T18:29:59 | CC-MAIN-2019-51 | 1575540481281.1 | [] | docs.zowe.org |
Geoanalysis Task Details
- Note: You can still use geoanalysis as before if you just add a buffer to the basic buffer box.
Buffer
The primary buffer box will be used for all buffers, as well as an internal buffer on the parcel boundary.
Hazard specific overrides
- Wetland/Flood/Slope Buffers: These input boxes will override the primary buffer input box if set.
- The primary buffer box will just be an internal buffer on the parcel boundary if all overrides are set.
Sieve small polygons
- Min Size Buildable Acres: Add an acreage value (click m2 to switch to acres). This will remove any area that isn't that big from your buildable area.
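The sieve step can be pictured as filtering polygons by area. The sketch below uses a plain shoelace area computation and assumes coordinates in meters; it is an illustration, not the product's actual implementation:

```python
SQ_M_PER_ACRE = 4046.8564224

def shoelace_area_m2(ring):
    """Planar polygon area (m^2) from a list of (x, y) vertices in meters."""
    area = 0.0
    n = len(ring)
    for i in range(n):
        x1, y1 = ring[i]
        x2, y2 = ring[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

def sieve(polygons, min_acres):
    """Drop buildable-area polygons smaller than the minimum acreage."""
    min_m2 = min_acres * SQ_M_PER_ACRE
    return [p for p in polygons if shoelace_area_m2(p) >= min_m2]

big = [(0, 0), (100, 0), (100, 100), (0, 100)]   # 10,000 m^2, about 2.47 acres
small = [(0, 0), (30, 0), (30, 30), (0, 30)]     # 900 m^2, about 0.22 acres
print(len(sieve([big, small], min_acres=1)))     # 1 (the small polygon is removed)
```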
Simplify drawing
| https://docs.andersonopt.com/ao-prospect/task-geoanalysis/ | 2019-12-05T17:55:04 | CC-MAIN-2019-51 | 1575540481281.1 | [array(['/assets/images/release-11-30-1.png', None], dtype=object)] | docs.andersonopt.com |
Object-Module-Mapping
In order to render an object, the rendering service has to decide which module to use to process the object.
- Decision according to ESObject -> ESOBJECT_MIMETYPE
ESModule::setModuleByResource() — Objects with the mime type "application/zip" can be of different origin. For example, such a file could be a Moodle course, or it could simply be a zip-compressed data packet that the user should be offered as a download. The method ESModule::setModuleByResource() differentiates between these cases and picks a rendering module according to the type of resource.
ESModule::setModuleByMimetype() — For most objects, the rendering module can be chosen based on their mime type. The method ESModule::setModuleByMimetype() retrieves the corresponding mapping from the database.
- Decision based on the ESObject->AlfrescoNode->properties: Some resources must be treated in a special way. An example is an edu-sharing YouTube resource. The object itself is an HTTP link, but the rendering service is supposed to render a video. Therefore, the type of the resource has to be determined from the properties of the Alfresco node. This happens by means of the ESObject::setModule() method.
Error 53
The most common symptom of a problem in NetBIOS name resolution is when the Ping utility returns an Error 53 message. The Error 53 message is generally returned when name resolution fails for a particular computer name. Error 53 can also occur when there is a problem establishing a NetBIOS session. To distinguish between these two cases, first determine whether the computer name can be resolved correctly.
Unity provides the following lightmappers for generating lightmaps and giving your Scene global illumination:
For advice on setting up a lightmapper, see Lightmapping: Getting started.
Debug only user code with Just My Code
Just My Code is a Visual Studio debugging feature that automatically steps over calls to system, framework, and other non-user code. In the Call Stack window, Just My Code collapses these calls into [External Code] frames.
Just My Code works differently in .NET Framework, C++, and JavaScript projects.
Enable or disable Just My Code
For most programming languages, Just My Code is enabled by default.
- To enable or disable Just My Code in Visual Studio, under Tools > Options (or Debug > Options) > Debugging > General, select or deselect Enable Just My Code.
Note
Enable Just My Code is a global setting that applies to all Visual Studio projects in all languages.
Just My Code debugging
During a debugging session, the Modules window shows which code modules the debugger is treating as My Code (user code), along with their symbol loading status. For more information, see Get more familiar with how the debugger attaches to your app.
In the Call Stack or Tasks window, Just My Code collapses non-user code into a grayed-out annotated code frame labeled
[External Code].
Tip
To open the Modules, Call Stack, Tasks, or most other debugging windows, you must be in a debugging session. While debugging, under Debug > Windows, select the windows you want to open.
To view the code in a collapsed [External Code] frame, right-click in the Call Stack or Task window, and select Show External Code from the context menu. The expanded external code lines replace the [External Code] frame.
Note
Show External Code is a current user profiler setting that applies to all projects in all languages that are opened by the user.
Double-clicking an expanded external code line in the Call Stack window highlights the calling code line in green in the source code. For DLLs or other modules not found or loaded, a symbol or source not found page may open.
.NET Framework Just My Code
In .NET Framework projects, Just My Code uses symbol (.pdb) files and program optimizations to classify user and non-user code. The .NET Framework debugger considers optimized binaries and non-loaded .pdb files to be non-user code.
Three compiler attributes also affect what the .NET debugger considers to be user code:
- DebuggerNonUserCodeAttribute tells the debugger that the code it's applied to isn't user code.
- DebuggerHiddenAttribute hides the code from the debugger, even if Just My Code is turned off.
- DebuggerStepThroughAttribute tells the debugger to step through the code it's applied to, rather than step into the code.
The .NET Framework debugger considers all other code to be user code.
During .NET Framework debugging:
- If the debugger breaks in non-user code (for example, a Debug > Break All command pauses in non-user code), the No Source window appears. You can then use a Debug > Step command to go to the next line of user code.
If an unhandled exception occurs in non-user code, the debugger breaks at the user code line where the exception was generated.
If first chance exceptions are enabled for the exception, the calling user-code line is highlighted in green in source code. The Call Stack window displays the annotated frame labeled [External Code].
C++ Just My Code
Starting in Visual Studio 2017 version 15.8, Just My Code for code stepping is also supported. This feature also requires use of the /JMC (Just my code debugging) compiler switch. The switch is enabled by default in C++ projects. For Call Stack window and call stack support in Just My Code, the /JMC switch is not required.
To be classified as user code, the PDB for the binary containing the user code must be loaded by the debugger (use the Modules window to check this).
For call stack behavior, such as in the Call Stack window, Just My Code in C++ considers only these functions to be non-user code:
- Functions with stripped source information in their symbols file.
- Functions where the symbol files indicate that there is no source file corresponding to the stack frame.
- Functions specified in *.natjmc files in the %VsInstallDirectory%\Common7\Packages\Debugger\Visualizers folder.
For code stepping behavior, Just My Code in C++ considers only these functions to be non-user code:
- Functions for which the corresponding PDB file has not been loaded in the debugger.
- Functions specified in *.natjmc files in the %VsInstallDirectory%\Common7\Packages\Debugger\Visualizers folder.
Note
For code stepping support in Just My Code, C++ code must be compiled using the MSVC compilers in Visual Studio 15.8 Preview 3 or later, and the /JMC compiler switch must be enabled (it is enabled by default). For additional details, see Customize C++ call stack and code stepping behavior) and this blog post. For code compiled using an older compiler, .natstepfilter files are the only way to customize code stepping, which is independent of Just My Code. See Customize C++ stepping behavior.
- If the debugger breaks in non-user code (for example, at a breakpoint set there), stepping continues in the non-user code.
If the debugger hits an exception, it stops on the exception, whether it is in user or non-user code. User-unhandled options in the Exception Settings dialog box are ignored.
Customize C++ call stack and code stepping behavior
For C++ projects, you can specify the modules, source files, and functions the Call Stack window treats as non-user code by specifying them in *.natjmc files. This customization also applies to code stepping if you are using the latest compiler (see C++ Just My Code).
- To specify non-user code for all users of the Visual Studio machine, add the .natjmc file to the %VsInstallDirectory%\Common7\Packages\Debugger\Visualizers folder.
- To specify non-user code for an individual user, add the .natjmc file to the %USERPROFILE%\My Documents\<Visual Studio version>\Visualizers folder.
A .natjmc file is an XML file.
Customize C++ stepping behavior independent of Just My Code settings
In C++ projects, you can specify functions to step over by listing them as non-user code in *.natstepfilter files. Functions listed in *.natstepfilter files are not dependent on Just My Code settings.
- To specify non-user code for all local Visual Studio users, add the .natstepfilter file to the %VsInstallDirectory%\Common7\Packages\Debugger\Visualizers folder.
- To specify non-user code for an individual user, add the .natstepfilter file to the %USERPROFILE%\My Documents\<Visual Studio version>\Visualizers folder.
A .natstepfilter file is an XML file with this syntax:
<?xml version="1.0" encoding="utf-8"?>
<StepFilter xmlns="">
  <Function>
    <Name>FunctionSpec</Name>
    <Action>StepAction</Action>
  </Function>
  <Function>
    <Name>FunctionSpec</Name>
    <Module>ModuleSpec</Module>
    <Action>StepAction</Action>
  </Function>
</StepFilter>
JavaScript Just My Code
JavaScript Just My Code controls stepping and call stack display by categorizing code in one of these classifications:
The JavaScript debugger classifies code as user or non-user in this order:
The default classifications.
- Script executed by passing a string to the host-provided
eval function is MyCode.
- Script executed by passing a string to the
Function constructor is LibraryCode.
- Script in a framework reference, such as WinJS or the Azure SDK, is LibraryCode.
- Script executed by passing a string to the
setTimeout,
setImmediate, or
setInterval functions is UnrelatedCode.
Classifications specified for all Visual Studio JavaScript projects in the %VSInstallDirectory%\JavaScript\JustMyCode\mycode.default.wwa.json file.
Classifications in the mycode.json file of the current project.
Each classification step overrides the previous steps.
All other code is classified as MyCode.
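The override order — defaults first, then the machine-wide settings file, then the project's mycode.json — can be sketched as successive lookups where later sources win. The data model here is invented purely for illustration, not the debugger's real internals:

```python
# Defaults drawn from the list above; keys are illustrative source kinds.
DEFAULTS = {
    "eval": "MyCode",
    "Function-constructor": "LibraryCode",
    "framework-reference": "LibraryCode",
    "setTimeout": "UnrelatedCode",
}

def classify(source_kind, global_rules=None, project_rules=None):
    """Resolve a script's classification; each later rule set overrides the
    previous one, and anything unmatched falls through to MyCode."""
    classification = DEFAULTS.get(source_kind, "MyCode")
    for rules in (global_rules or {}, project_rules or {}):
        classification = rules.get(source_kind, classification)
    return classification

print(classify("setTimeout"))                                          # UnrelatedCode
print(classify("setTimeout", project_rules={"setTimeout": "MyCode"}))  # MyCode (project override wins)
print(classify("unknown-kind"))                                        # MyCode (fallback)
```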
You can modify the default classifications, and classify specific files and URLs as user or non-user code, by adding a .json file named mycode.json to the root folder of a JavaScript project. See Customize JavaScript Just My Code.
During JavaScript debugging:
- If a function is non-user code, Debug > Step Into (or F11) behaves the same as Debug > Step Over (or F10).
- If a step begins in non-user (LibraryCode or UnrelatedCode) code, stepping temporarily behaves as if Just My Code isn't enabled. When you step back to user code, Just My Code stepping is re-enabled.
- When a user code step results in leaving the current execution context, the debugger stops at the next executed user code line. For example, if a callback executes in LibraryCode code, the debugger continues until the next line of user code executes.
- Step Out (or Shift+F11) stops on the next line of user code.
If there's no more user code, debugging continues until it ends, hits another breakpoint, or throws an error.
Breakpoints set in code are always hit, but the code is classified.
- If the
debugger keyword occurs in LibraryCode, the debugger always breaks.
- If the
debugger keyword occurs in UnrelatedCode, the debugger doesn't stop.
If an unhandled exception occurs in MyCode or LibraryCode code, the debugger always breaks.
If an unhandled exception occurs in UnrelatedCode, and MyCode or LibraryCode is on the call stack, the debugger breaks.
If first-chance exceptions are enabled for the exception, and the exception occurs in LibraryCode or UnrelatedCode:
- If the exception is handled, the debugger doesn't break.
- If the exception is not handled, the debugger breaks.
Customize JavaScript Just My Code
To categorize user and non-user code for a single JavaScript project, you can add a .json file named mycode.json to the root folder of the project.
Specifications in this file override the default classifications and the mycode.default.wwa.json file. The mycode.json file does not need to list all key value pairs. The MyCode, Libraries, and Unrelated values can be empty arrays.
Values in mycode.json patterns can have one or more * characters, each of which matches zero or more characters; * is the same as the regular expression .*.
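The wildcard rule can be illustrated by translating a pattern into an anchored regular expression, exactly as the * → .* equivalence above suggests (a sketch, not the debugger's actual matcher):

```python
import re

def glob_to_regex(pattern):
    """Translate a mycode.json-style pattern, where * matches zero or more
    characters, into an anchored regular expression."""
    parts = (re.escape(p) for p in pattern.split("*"))
    return re.compile("^" + ".*".join(parts) + "$")

matcher = glob_to_regex("*/node_modules/*.js")
print(bool(matcher.match("app/node_modules/lodash/lodash.js")))  # True
print(bool(matcher.match("app/src/index.js")))                   # False
```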
Calico v3.1 requires a key/value store accessible by all Calico components. If you don't already have an etcdv3 cluster to connect to, we provide instructions in the installation documentation.
Network requirements
Calico requires the network to allow the following types of traffic.
* If your compute hosts connect directly and don’t use IPIP, you don’t need to allow IPIP traffic. Refer to Configuring IP-in-IP for more information.
Tip: On GCE, you can allow this traffic using firewall rules. In AWS, use EC2 security group rules.
Opening a Scene in Harmony Server
Once you are connected to the database in Harmony, you can open a scene in the database.
Connect to the database—see Connecting to the Database in Harmony Server.
The Database Selector appears when you log-in.
In the Environments column, select the scene's environment (project, movie, season).
- In the Jobs column, select the scene's job (episode, sequence).
- In the Scenes column, select the scene.
Get the permissions needed for this session by selecting one or several of the following options:
- Get rights to modify the scene: Allows you to modify the selected version of the scene as well as to manage and overwrite other versions. Other users will not be able to open a different version of the scene until you close the scene.
- Get rights to modify the scene version: Allows you to modify the currently selected scene version only. Unless the Get rights to modify the scene option is also checked, you will not be able to change other versions of the scene. This allows other users to modify different versions of the scene while you are working on the selected version.
- Get rights to modify the scene assets: Automatically gets the rights to modify all of the scene's assets, locking other users from making changes to them until you close the scene. This means that you will have the rights to modify all the scene's versions, drawings, palettes, its palette list, but not its library folder. If this option is unchecked, drawings and palettes will be locked unless you unlock them manually. This can allow another user to work on the scene's drawings and palettes in Harmony Paint while you are working on the scene's timing or staging in Harmony.
Choose the version you want to open from the Version menu.
NOTE: The Saved By and Saved Date fields display the user who last saved the selected scene and the date of the last save.
- Click on Open. | https://docs.toonboom.com/help/harmony-15/advanced/project-creation/open-scene-server.html | 2019-07-16T00:33:44 | CC-MAIN-2019-30 | 1563195524290.60 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.toonboom.com |
Top Articles
What is Shuttle?
What is Shuttle? Shuttle is a data-distribution platform. In Shuttle, pieces of data (we call these "PODs") travel from Sources to Destinations, giving you control of your data.
What’s the Deal with SMS Character Limits?
As text messaging becomes an increasingly popular tool for businesses, there are more and more questions about how best to use it. In fact, one of the questions people often have when it comes to usi… | https://docs.belunar.com/ | 2019-07-16T00:54:09 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.belunar.com |
Everything. Update After adjusting the settings for a preset, the current properties are applied to the presets, as well as any other changes you made in the Manage Tool Presets window. | https://docs.toonboom.com/help/harmony-16/advanced/reference/dialog-box/manage-tool-presets-dialog-box.html | 2019-07-16T00:04:32 | CC-MAIN-2019-30 | 1563195524290.60 | [] | docs.toonboom.com |
Did you find this page useful? Do you have a suggestion? Give us feedback or send us a pull request on GitHub.
First time using the AWS CLI? See the User Guide for help getting started.
Associates the specified resource share with the specified principals and resources.
See also: AWS API Documentation
See 'aws help' for descriptions of global parameters.
associate-resource-share --resource-share-arn <value> [--resource-arns <value>] [--principals <value>] [--client-token <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]
--resource-share-arn (string)
The Amazon Resource Name (ARN) of the resource share.
--resource-arns (list)
The Amazon Resource Names (ARN) of the resources.
Syntax:
"string" "string" ...
--principals (list)
The principals.
Syntax:
"string" "string" ...
--client-token (string)
A unique, case-sensitive identifier resource with a resource share
The following associate-resource-share example associates the specified subnet with the specified resource share.
aws ram associate-resource-share \ --resource-arns arn:aws:ec2:us-west-2:123456789012:subnet/subnet-0250c25a1f4e15235 \ -4e15235", "associationType": "RESOURCE", "status": "ASSOCIATING", "external": false ] }
resourceShareAssociations -> (list)
Information about the associations.
(structure)
Describes an association with a resource share.
resourceShareArn -> (string)The Amazon Resource Name (ARN) of the resource share.
resourceShareName -> (string)The name of the resource share.
associatedEntity -> (string)The associated entity. For resource associations, this is the ARN of the resource. For principal associations, this is the ID of an AWS account or the ARN of an OU or organization from AWS Organizations.
associationType -> (string)The association type.
status -> (string)The status of the association.
statusMessage -> (string)A message about the status of the association.
creationTime -> (timestamp)The time when the association was created.
lastUpdatedTime -> (timestamp)The time when the association was last updated.
external -> (boolean)Indicates whether the principal belongs to the same AWS organization as the AWS account that owns the resource share.
clientToken -> (string)
A unique, case-sensitive identifier that you provide to ensure the idempotency of the request. | https://docs.aws.amazon.com/cli/latest/reference/ram/associate-resource-share.html | 2020-02-17T00:44:07 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.aws.amazon.com |
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
New-R53VPCAssociationAuthorization-VPC_VPCId <String>-HostedZoneId <String>-VPC_VPCRegion <VPCRegion>-Select <String>-PassThru <SwitchParameter>-Force <SwitchParameter>
AssociateVPCWithHostedZonerequest to associate the VPC with a specified hosted zone that was created by a different account. To submit a
CreateVPCAssociationAuthorizationrequest, you must use the account that created the hosted zone. After you authorize the association, use the account that created the VPC to submit an
AssociateVPCWithHostedZonerequest. If you want to associate multiple VPCs that you created by using one account with a hosted zone that you created by using a different account, you must submit one authorization request for each VPC.
AWS Tools for PowerShell: 2.x.y.z | https://docs.aws.amazon.com/powershell/latest/reference/items/New-R53VPCAssociationAuthorization.html | 2020-02-17T01:48:21 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.aws.amazon.com |
Situation Merge Behavior
Moogsoft AIOps uses a configuration called the
sig_similarity_limit
to automatically merge similar Situations. When two Situations reach this similarity limit, AIOps merges them.
The default similarity limit for clustering algorithms is '0.7' so Situations sharing 70% of the same Alerts are merged.
Merge Groups
You can use
merge_groups
in
moog_farmd.conf
to control how AIOps merges Situations created by different clustering algorithms.
Any Sigaliser/moolet not defined in a new merge group belongs to the default group. By default, AIOps merges Situations when they meet the following criteria :
alert_threshold : 2, sig_similarity_limit : 0.7
To override the default behavior, you can create custom merge groups.
Create a Merge Group
You can create merge groups by following these steps:
- Edit
moog_farmd.conf.
Define new merge groups in the
merge_groupssection. For example:
# { # name: "Merge Group 1", # moolets: ["Cookbook", "Tempus"], # alert_threshold : 3, # sig_similarity_limit : 0.75 # }
This merge group would only merge Situations created by the Cookbook and Tempus Sigalisers which shared 75% of the same Alerts.
Each new merge group must be given a name and can be defined using the following values:
moolets
One or more Sigalisers/moolets which will be included in the merge group. Only Situations created by this Sigaliser or Sigalisers will be considered for merging with each other.
Type
: String
Default : n/a
alert_threshold
The minimum number of Alerts that must be present in a cluster before it can become a Situation in the merge group.
Type : Integer
Default : 2
sig_similarity_limit
The measure of the similarity between two Situations before they are merged together. This value is the ratio of shared Alerts between two Situations to total unique Alerts in both Situations. For example, if two Situations share 50% of the same Alerts, the value would be 0.5.
Type
: Integer
Default : 0.7
If you create a custom merge group for one or more Sigalisers, only Situations produced by the Sigalisers in the merge group will be considered for merging among themselves. Situations from Sigalisers outside of the defined merge group cannot be merged with any Situations in that group.
Field Behavior
When AIOps merges two or more Situations, it updates the fields of the situations as follows: | https://docs.moogsoft.com/AIOps.7.1.0/Situation-Merge-Behavior_26780535.html | 2020-02-17T00:06:04 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.moogsoft.com |
:
At the moment, a user can only be designated as a Lender when they are created. If you have any existing lenders using Lead Manager already and don't want to create new accounts for them, contact us for help migrating them.
To add a Lender user, you would:
Leads can be assigned to an Agent and a Lender independently of one another. Only one agent and one lender can be assigned to a lead.
Lenders also have their own round robin settings.
Lenders and Agents can add activities to leads, and can choose to have the other assigned Agent/Lender notified when they do.
Lenders are the same as Agents, except that they don't have the following features enabled: | https://docs.realgeeks.com/lender_user | 2020-02-17T01:12:23 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.realgeeks.com |
4.3. Client Management¶
This chapter provides guidance to install and configure SIMP clients, via kickstart, with the resources supplied by the SIMP ISO.
This guide also assumes that your SIMP server is a yum package repository.
4.3.1. System Requirements¶
Client systems should meet the following minimum requirements:
- Hardware/Virtual Machine (VM): Capable of running RHEL 6 or 7 x86_64
- RAM: 2048 MB
- HDD: 22 GB
4.3.2. Configuring the Puppet Master¶
Perform the following actions as
root on the Puppet master system prior
to attempting to install a client.
4.3.2
rndc.keyerror appears master
to propagate these updates.
4.3.2.2. Configure DHCP¶
Note
The
dhcpd.conf file was updated in SIMP 6.2 to include logic in the
pxeclients class that determines the appropriate boot loader file on the
TFTP server, based on whether the client is booting in UEFI or
BIOS mode. If you have configured DHCP using an earlier version
of SIMP and need to add UEFI support, make sure you update your
dhcpd.conf
in the rsync directory, appropriately.
MAC addresses in the following section need to be lower case letters.
- If PXE booting is being done with this DHCP server, make sure each
filenameparameter corresponds to the correct boot loader file on the TFTP server. If you are using SIMP’s
simp::server::kickstartclass to manage the TFTP server, the default
filenamevalues listed in the
pxeclientsclass of the sample
dhpcd.confwill be correct.
Save and close the file.
Run
puppet agent -t on the Puppet master to apply the changes.
4 DNS and DHCP server setup, also required for PXE booting, are discussed in.4.1. Setting up Kickstart¶
This section describes how to configure the kickstart server.
Add the Kickstart Server Profile
In the Puppet server-specific Hiera file (by default located at
/etc/puppetlabs/code/environments/simp/data/hosts/puppet.<your.domain>.yaml), add the
simp::server::kickstartclass.
--- classes: - simp::server::kickstart
This profile class adds management of DHCP, DNS, the Kickstart service, as well as the example provisioning script.
After adding the above class, run puppet:
puppet agent -t.
Locate the following files in the
/var/www/ksdirectory
pupclient_x86_64.cfg: Example client kickstart configuration script.
diskdetect.sh: Example script to determine disks available on a system and then apply disk configuration. This script is used by
pupclient_x86_64.cfg.
Open the
pupclient_x86_64.cfgfile and follow the instructions provided within it to replace the variables listed and to customize for BIOS/UEFI boot and/or FIPS/non-FIPS mode. If you have servers that require different boot mode or FIPS options, you will need to make customized copies of this file to provide those distinct configurations. You will also have to configure TFTP to point to the appropriate files.
- Instructions are provided both at the top of the file and throughout the body of the file.
- You need to know the IP Addresses of the YUM, Kickstart, and TFTP servers. (They default to the SIMP server in
simp config).
- Use the commands described in the comments at the top of the file to generate the root and grub passwords hashes. Be sure to replace
passwordwith your root password.
- Follow the instructions throughout the file to customize for BIOS/UEFI boot.
- Follow the instructions throughout the file to customize for FIPS/non-FIPS mode.
Open the
diskdetect.shscript and customize the disk device names and/or partitions as appropriate for your site. The sample
diskdetect.shscript will work, as is, for most systems, as long as your disk device names are in the list. In addition, the sample script provides STIG-compliant partitioning.
Two major changes were made to
pupclient_x86_64.cfg in SIMP 6.2:
- UEFI PXE support was added.
- To address timeout issues that caused Puppet bootstrap failures, the use of the
runpuppetscript to bootstrap Puppet on the client was replaced with the use of two scripts, both provided by the
simp::server::kickstartclass:
- A
systemdunit file for CentOS 7 (
simp_client_bootstrap.service) or a
systemvinit script for CentOS 6 (
simp_client_bootstrap).
- A common bootstrap script (
bootstrap_simp_client) used by both.
Note
The URLs and locations in the file are set up.4.2. Setting up TFTP¶
This section describes the process of setting up static files and manifests for TFTP.
Note
The tftp root directory was changed in SIMP 6.2 to conform to DISA STIG
standards. In previous versions it was
/tftpboot, and in 6.2 and later
it is
/var/lib/tftpboot. If you are upgrading to 6.2 from a prior
release and wish the files to remain in the
/tftpboot directory, set
tftpboot::tftpboot_root_dir to
/tftpboot in Hiera.
4 files are not where they should be, then create the directories as
needed and copy the files from
/var/www/yum/<OSTYPE>/<MAJORRELEASE>/<ARCH>/images/pxeboot
or from the images directory on the SIMP DVD. The link name is what is used in
the resources in the tftpboot.pp manifest examples.
Note
The images in the tftp directory need to match the distribution. For example, if you upgrade your repo from CentOS 7.3 to 7.4 and will be using this repo to kickstart machines, you must also upgrade the images in the tftp directory. If they do not match you can get an error such as “unknown file system type ‘xfs’”
Next you need to set up the boot files for either BIOS boot mode, UEFI mode, or both.
Note
UEFI support was automated in SIMP 6.2. If you are using an older version of SIMP please refer to that documentation for setting up UEFI manually.
For more information see the RedHat 7 Installation Source or RedHat 6 Installation Source Installation Guides.
4.4.2.2. Dynamic Linux Model Files¶
Create a site manifest for the TFTP server on the Puppet master to set up the various files to model different systems.
Create the file
/etc/puppetlabs/code/environments/simp/modules/site/manifests/tftpboot.pp. This file will contain Linux models for different types of systems and a mapping of MAC addresses to each model.
Use the source code example below. Linux model examples are given for CentOS 6 and 7 using both UEFI and BIOS boot mode.
Replace
KSSERVERwith the IP address of kickstart server (or the code to look up the IP Address using Hiera).
Replace
OSTYPE,
MAJORRELEASEand
ARCHwith the correct values for the systems you will be PXE booting.
MODEL NAMEis usually of the form
OSTYPE-MAJORRELEASE-ARCHfor consistency.
You will need to know what kickstart file you are using. UEFI and BIOS mode require separate kickstart files. Other things that might require a different kickstart file to be configured are disk drive configurations and FIPS configuration. Create a different Linux model file for each different kickstart file needed.
Note
If using the default cfg files, know that they do not have the ‘_el[6,7]’ tags at the end of their name.
class site::tftpboot { include '::tftpboot' #-------- # BIOS MODE MODEL EXAMPLES # for CentOS/RedHat 7 Legacy/BIOS boot tftpboot::linux_model { 'el7_x86_64': kernel => 'OSTYPE-MAJORRELEASE-ARCH/vmlinuz', initrd => 'OSTYPE-MAJORRELEASE-ARCH/initrd.img', ks => "", extra => "inst.noverifyssl ksdevice=bootif\nipappend 2" } # For CentOS/RedHat 6 Legacy/BIOS boot # Note the difference in the `extra` arguments here. tftpboot::linux_model { 'el6_x86_64': kernel => 'OSTYPE-MAJORRELEASE-ARCH/vmlinuz', initrd => 'OSTYPE-MAJORRELEASE-ARCH/initrd.img', ks => "", extra => "noverifyssl ksdevice=bootif\nipappend 2" } #------ # UEFI MODE MODEL EXAMPLES # NOTE UEFI boot uses the linux_model_efi module and has different # `extra` arguments. You also would use a different kickstart file # because the bootloader command within the kickstart file is # different. Read the instructions in the default pupclient_x86_64.cfg # file and make sure you have the correct bootloader line. # # For CentOS/RedHat 7 UEFI boot tftpboot::linux_model_efi { 'el7_x86_64_efi': kernel => 'OSTYPE-MAJORRELEASE-ARCH/vmlinuz', initrd => 'OSTYPE-MAJORRELEASE-ARCH/initrd.img', ks => "", extra => "inst.noverifyssl" } # For CentOS/RedHat 6 UEFI boot # Note the extra attribute legacy_grub. tftpboot::linux_model_efi { 'el6_x86_64_efi': kernel => 'OSTYPE-MAJORRELEASE-ARCH/vmlinuz', initrd => 'OSTYPE-MAJORRELEASE-ARCH/initrd.img', ks => "", extra => "noverifyssl", legacy_grub => true } #------ # DEFAULT HOST BOOT CONFIGURATION EXAMPLES # If desired, create defaults boot configuration for BIOS and UEFI. # Note that the name of the default UEFI configuration file needs # to be 'grub.cfg'. tftpboot::assign_host { 'default': model => 'el7_x86_64' } tftpboot::assign_host_efi { 'grub.cfg': model => 'el7_x86_64_efi' } #------ # HOST BOOT CONFIGURATION ASSIGNMENT EXAMPLES # For each system define what module you want to use by pointing # its MAC address to the appropriate model. 
Note that the MAC # address is preceded by ``01-``. tftpboot::assign_host { '01-aa-ab-ac-1d-05-11': model => 'el7_x86_64' } tftpboot::assign_host_efi { '01-aa-bb-cc-dd-00-11': model => 'el7_x86_64_efi' } }
Add the
tftpbootsite manifest on your puppet server node via Hiera. Create the file (or edit if it exists):
/etc/puppetlabs/code/environments/simp/data master.
Note
To provide PXE boot configuration for more OSs, create, in the
tftpboot.ppfile, a
tftpboot::linux_modelor
tftpboot::linux_model_efiblock for each OS type. Then, assign individual hosts to each model by adding
tftpboot::assign_hostor
tftpboot::assign_host_efiresources.
Finally, make sure DHCP is set up correctly. In SIMP 6.2 the example
dhcpd.confwas updated to determine the appropriate boot loader file to use, depending upon the boot mode of the PXE client. These changes are needed if you are booting UEFI systems.
For more information see the RedHat 6 PXE or RedHat 7 PXE Installation Guides.
4.5. Apply Certificates¶
All clients in a SIMP system should have Public Key Infrastructure (PKI) keypairs generated for the server. These are the referred to as the infrastructure or server keys. These certificates are used to encrypt communication and identify clients and are used by common applications such as LDAP and Apache.
Note
These keypairs are not the keys that the Puppet server uses for its operation. Do not get the two confused.
See Certificate Management for more information.
SIMP uses the
pupmod-simp-pki module to help distribute infrastructure
keypairs. The global variable,
simp_options::pki determines what parts of
the module are included. It can be overridden in hiera data at several levels
if different hosts or applications need to handle certificates differently.
simp_options::pki can have one of three settings:
simp- Keypairs are distributed from a central location on the Puppet master to the
/etc/pki/simp/x509directory on the client. Any applications using them will then make a copy in
/etc/pki/simp_apps/<app name>/x509with the correct permissions for an application to use.
true- Applications on the clients will copy the keypairs from a local directory on the client to
/etc/pki/simp_apps/<app name>/x509. The default local directory to copy from is
/etc/pki/simp/x509but this can be overridden by setting the
simp_options::pki::sourcevariable.
false- The user will have to manage keypairs themselves. You will need to look at each module that uses PKI on a client to determine what variables need to be set.
Note
A setting of
falsedoes not disable the use of PKI in a module.
The following sections describe how to populate the central key distribution
directory that
pupmod-simp-pki uses, when
simp_options::pki is set to
simp.
4.5.1. Installing Official Certificates¶
This section describes how to install infrastructure certificates from an
official certificate authority on the Puppet master for distribution to client
servers. You need to have simp_options::pki set to
simp on the client for
this to work.
The key distribution directory on the Puppet master is the
pki_files/files/keydist
sub-directory located under the SIMP-specific, alternate module path,
/var/simp/environments/<environment>/site_files. Within the
keydist
directory, the SIMP system expects there to be:
- A directory named
cacertsthat contains the CA public certificates.
- Client-specific directories, each of which contains the public and private certificates for an individual client. The name of each client directory must be the
certnameof that client, which by default is the client’s FQDN.
Here is an example key distribution directory for a
simp environment:
/var/simp/environments/simp/site_files/pki_files/files/keydist/cacerts/ /var/simp/environments/simp/site_files/pki_files/files/keydist/cacerts/cacert_a7a23f33.pem /var/simp/environments/simp/site_files/pki_files/files/keydist/cacerts/cca9a35.0 /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain/ /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain/mycomputer.my.domain.pem /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain/mycomputer.my.domain.pub /var/simp/environments/simp/site_files/pki_files/files/keydist/yourcomputer.your.domain/ /var/simp/environments/simp/site_files/pki_files/files/keydist/yourcomputer.your.domain/yourcomputer.your.domain.pem /var/simp/environments/simp/site_files/pki_files/files/keydist/yourcomputer.your.domain/yourcomputer.your.domain.pub
To install official certificates on the Puppet master, do the following:
Copy the certificates received from a proper CA to the SIMP server.
Add the certificates for the node to the key distribution directory in
site_files.
- Make the directory under the key distribution directory for the client’s certificates using the client’s
certname.
- Copy the official public and private certificates to that directory.
For example to install certificates for a system named
mycomputer.my.domaininto the
simpenvironment:
mkdir -p /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain mv /dir/where/the/certs/were/myprivatecert.pem \ /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain/mycomputer.my.domain.pem mv /dir/where/the/certs/were/mypubliccert.pub \ /var/simp/environments/simp/site_files/pki_files/files/keydist/mycomputer.my.domain/mycomputer.my.domain.pub
Create and populate the CA certificates directory.
- Make the CA directory,
cacerts.
- Copy the root CA public certificates into
cacertsin Privacy Enhanced Mail (PEM) format, one per file.
cd /var/simp/environments/simp/site_files/pki_files/files/keydist mkdir cacerts cd cacerts for file in *.pem; do ln -s $file `openssl x509 -in $file -hash -noout`.0; done
Make sure the permissions are correct.
chown -R root.puppet /var/simp/environments/simp/site_files/pki_files/files/keydist chmod -R u=rwX,g=rX,o-rwx /var/simp/environments/simp/site_files/pki_files/files/keydist
Note
The SIMP-specific alternate modules path is configured in each environment’s
environment.conf file. For example, for the
simp environment,
/etc/puppetlabs/code/environments/simp/environment.conf, would contain:
modulepath = modules:/var/simp/environments/simp/site_files:$basemodulepath
4.5.2. Generating Infrastructure Certificates from the Fake CA¶
The Fake (self signing) Certificate Authority (Fake CA) is provided by SIMP as a way to obtain server certificates if official certificates could not be obtained at the time of client installation or the servers are operating in testing environments. omit any spaces.
For example,
.name,alt.name1,alt.name2.
Type
wc cacertkey
Note
Ensure that the
cacertkeyfile is not empty. If it is, enter text into the file; then save and close the file.
Type
./gencerts_nopass.sh
Warning
If the
clean.sh command is run after the certificates have been
generated, you will not be able to generate new host certificates under the
old CA. To troubleshoot certificate problems, see the
Troubleshooting Certificate Issues section.
If issues arise while generating keys, type
cd /var/simp/environments/simp/FakeCA
to navigate to the
/var/simp/environments/simp/FakeCA directory, then type
./clean.sh to start over.
After running the
clean.sh script, type
./gencerts_nopass.sh to
run the script again using the previous procedure table.
The certificates generated by the FakeCA in SIMP are set to expire annually. To change this, edit the following files with the number of days for the desired lifespan of the certificates:
/var/simp/environments/simp/FakeCA/CA
/var/simp/environments/simp/FakeCA/ca.cnf
/var/simp/environments/simp/FakeCA/default\_altnames.cnf
/var/simp/environments/simp/FakeCA/default.cnf
/var/simp/environments/simp/FakeCA/user.cnf
In addition, any certificates that have already been created and signed will
have a config file containing all of its details in
/var/simp/environments/simp/FakeCA/output/conf/.
Important
Editing any entries in the above mentioned config files will not affect existing certificates. Existing certificates must be regenerated if you need to make changes.
The following is an example of how to change the expiration time from one year (the default) to five years for any newly created certificate.
for file in $(grep -rl 365 /var/simp/environments/simp/FakeCA/) do sed -i 's/365/1825/' $file done
4.6. Setting up the Client¶
The following lists the steps to PXE boot the system and set up the client.
- Set up your client’s boot settings to boot off ofis enabled. This means the client will check in every 30 seconds for a signed certificate. Log on to the Puppet master and run
puppetserver ca sign --certname <puppet.client.fqdn>.
Upon successful deployment of a new client, it is highly recommended that LDAP administrative accounts be created.
4.6.1. Troubleshooting Puppet Issues¶
If the client has been kickstarted, but is not communicating with the Puppet master,
puppetserver ca clean --certname **<Client Host Name>***on the Puppet master and try again.
If you are getting permission errors, make sure the selinux context is correct on all files as well as the owner and group permissions.
4.6.2. the directory containing the CA certficates. For the FakeCA, it is
/var/simp/environments/simp/FakeCA. The directory should contain the file
default.cnf.
Run
OPENSSL_CONF=default.cnf openssl ca -revoke /var/simp/environments/simp\ /site_files/pki_files/files/keydist/*<Host to Revoke>*/*<Host to Revoke>*.pub | https://simp.readthedocs.io/en/6.3.3/user_guide/Client_Management.html | 2020-02-17T00:32:23 | CC-MAIN-2020-10 | 1581875141460.64 | [] | simp.readthedocs.io |
How to build your own model
To build your model, you can access to the Model Designer (beCPG -> Model Designer).
1- Selecting Model file and adding Imports :
First, go to Models and select your Model xml file. Here, we are selecting the file existing by default extCustomModel.xml. If you want to add a new file, go to repository -> Dictionnaire de données -> Models and upload your empty xml file there. Then you can load it with the Model Designer as we just did with extCustomModel.xml.
After selecting your file, go to Model. Do the imports you need in Imports. For example, add the Content Import like this : Imports -> New Item :
2- Creating a new Type :
You can now add a new type. To do so, go do Types -> New Item -> Name of your Type -> Specify the Association as Types -> Specify the Element Type as Type. In this example, I am creating a new Type Accounting.
2.1- Adding properties to your type :
To the Type Accounting that I have created, I am adding properties. To do so, you just have to select your type Accounting -> Click on New Item -> Name your property -> Put Association as Properties -> Put Element type as Property -> Clik Ok. After that, you have to specify the property type, its Title and also you can specify if it has a default value and/or is mandatory. Here, I added a property called "Accounts Payable". Its property Type is double and It is not mandatory.
You can add as many properties as you need. In my example, I added three properties: Item, Accounts Payable and Accounts Receivable.
3- Creating a new aspect :
To create an aspect, go to Aspect and click New Item. An aspect is a group of properties or/and associations that you can add to any type. For example, if you have a type Task and a Type project. Every type has its own properties but both of them have Start date and End date. For that, I can add an aspect called "Time" having two properties, Start and End date. Then, I have just to add the aspect to the Task and Project Type. To do so, you have to select your type and add the aspect in "Mandatory Aspects".
In my previous example, I am creating an aspect Finance. After that I am adding to this aspect some properties, exactly the same way I added properties to the type "Accounting". The properties I added are budget and assets.
Then, I added the Finance aspect to my type Accounting as a mandatory aspect :
After this, do not forget to publish your model. If you don't, nothing will be taken into consideration. Select your model file and click publish.
After building your model, you have to create your forms. Please refer to the page [[\_to\_create\_your\_forms?parent=Data\_Models\_and\_Forms\]\] . | http://docs.becpg.fr/en/development/data-model-and-forms-3.html | 2020-02-17T00:07:20 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['images/dev-create-models-1.1.png', None], dtype=object)
array(['images/dev-create-models-2.png', None], dtype=object)
array(['images/dev-create-models-3.png', None], dtype=object)
array(['images/dev-create-models-4.png', None], dtype=object)
array(['images/dev-create-models-5.png', None], dtype=object)
array(['images/dev-create-models-6.png', None], dtype=object)
array(['images/dev-create-models-10.png', None], dtype=object)
array(['images/dev-create-models-31.png', None], dtype=object)
array(['images/dev-create-models-32.png', None], dtype=object)
array(['images/dev-create-models-33.png', None], dtype=object)
array(['images/dev-create-models-34.png', None], dtype=object)] | docs.becpg.fr |
:
Expiration policy|Type|Description|Initialization code sample
-|-|-|-
AbsoluteExpirationPolicy|Time-base|The cache item will expire on the absolute expiration DateTime|
ExpirationPolicy.Absolute(new DateTime(21, 12, 2012))
DurationExpirationPolicy|Time-base|The cache item will expire using the duration TimeSpan to calculate the absolute expiration from DateTime.Now|
ExpirationPolicy.Duration(TimeSpan.FromMinutes(5))
SlidingExpirationPolicy|Time-base|The cache item will expire using the duration TimeSpan to calculate the absolute expiration from DateTime.Now, but everytime the item is requested, it is expanded again with the specified TimeSpan|
ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))
CustomExpirationPolicy|Custom|The cache item will expire using the expire function and execute the reset action if specified. The example shows how to create a sliding expiration policy with a custom expiration policy.|
var startDateTime = DateTime.Now;var duration = TimeSpan.FromMinutes(5);ExpirationPolicy.Custom(() => DateTime.Now > startDateTime.Add(duration), () => startDateTime = DateTime.Now);
CompositeExpirationPolicy|Custom|Combines several expiration policy into a single one. It can be configured to expire when any policy expires or when all policies expire.|
new CompositeExpirationPolicy().Add(ExpirationPolicy.Sliding(TimeSpan.FromMinutes(5))).Add(ExpirationPolicy.Custom(()=>...)). } }
The base constructor has a parameter to indicate if the policy can be reset. Therefore, if you call the base constructor with false then the OnReset method will never called.
Have a question about Catel? Use StackOverflow with the Catel tag! | https://docs.catelproject.com/5.6/catel-core/caching/ | 2020-02-17T00:09:20 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.catelproject.com |
Contents
Best Practices for Views
Purpose: To provide a set of recommendations for implementing a typical view within Workspace Desktop Edition.
TAB Key--Every control in a window can receive focus. Use the TAB key to move from one control to the next, or use SHIFT+TAB to move to the previous control. The TAB order is determined by the order in which the controls are defined in the Extensible Application Markup Language (XAML) page.
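For example, the default tab order follows the order of the controls in the XAML, and it can be overridden with the TabIndex property (the control names below are illustrative):

[XAML]

<StackPanel KeyboardNavigation.TabNavigation="Cycle">
  <TextBox Name="AcctNumberBox" TabIndex="0" />
  <TextBox Name="CustomerNameBox" TabIndex="1" />
  <Button Name="SubmitButton" TabIndex="2" Content="Submit" />
</StackPanel>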
Access Keys--A labeled control can obtain focus by pressing the ALT key and then typing the control's associated letter (label). To add this functionality, include an underscore character (_) in the content of a control. See the following sample XAML file:
[XAML]
<Label Content="_AcctNumber" />
Focus can also be given to a specific GUI control by typing a single character. Use the WPF control AccessText (the counterpart of the TextBlock control) to modify your application for this functionality. For example, you can use the code in the following XAML sample to eliminate having to press the ALT key:
[XAML]
<AccessText Text="_AcctNumber" />
Shortcut Keys--Trigger a command by typing a key combination on the keyboard. For example, press CTRL+C to copy selected text.

Alarm Notification--Workspace Desktop Edition can be configured to emit a sound when an unsolicited event occurs.
Branding
To replace trademark logos, icon images, and text, you must create two files: a .module-config file and a RebrandingTheme.xml file. The RebrandingTheme.xml file is similar to a language dictionary and enables you to customize the appearance of your application. The .module-config file links to the RebrandingTheme.xml file. For example, you could make a Rebranding.module-config file with the following content:
[.module-config]
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="themes" type="Genesyslab.Desktop.Infrastructure.Theming.ThemesSection, Genesyslab.Desktop.Infrastructure" />
  </configSections>
  <themes>
    <theme name="Default">
      <xmlDictionaries>
        <xmlDictionary name="rebranding" path=".\RebrandingTheme.xml"></xmlDictionary>
      </xmlDictionaries>
    </theme>
  </themes>
</configuration>
The second file, which here is named RebrandingTheme.xml, is the file where the new images for logos, Splash Screen, Copyrights, about window text, and so on, are defined:
[XML]
<?xml version="1.0" encoding="utf-8" ?>
<Dictionary>
  <Value Id="Application.SplashScreen" Source="pack://application:,,,/Genesyslab.Desktop.WPFCommon;component/Images/Splash.png"/>
  <Value Id="Windows.Common.Copyright" Text="2009-2014 My New Copyright."/>
  <Value Id="Windows.AboutWindow.TextBlockWarning" Text="Warning: "/>
  <Value Id="Windows.Common.Text.InteractionWorkspace" Text="NewCO"/>
</Dictionary>
For information about URIs in Windows Presentation Foundation (WPF), see the Microsoft WPF documentation.
Localization
To dynamically change the language in your view, modify the XAML by using the following sample:
[XAML]
<UserControl xmlns:
  <Expander>
    <Expander.Header>
      <TextBlock loc:
    </Expander.Header>
    <Button/>
  </Expander>
</UserControl>
Refer to DispositionCodeView.TextBlockDisposition in the language XML file.
For English, modify the Genesyslab.Desktop.Modules.Windows.en-US.xml file as shown in the following example:
[XML]
<Dictionary EnglishName="English" CultureName="English" Culture="en-US">
  <Value Id="DispositionCodeView.TextBlockDisposition" Text="The Disposition"/>
</Dictionary>
For French, modify the Genesyslab.Desktop.Modules.Windows.fr-FR.xml file as shown in the following example:
[XML]
<Dictionary EnglishName="French" CultureName="France" Culture="fr-FR">
  <Value Id="DispositionCodeView.TextBlockDisposition" Text="La Disposition"/>
</Dictionary>
The language can also be changed within the code itself, as shown in the following example:
[C#]
string text = LanguageDictionary.Current.Translate("DispositionCodeView.TextBlockDisposition", "Text");
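Conceptually, the Translate call pairs the Id of a Value entry in the language XML with an attribute name and returns the matching string for the current culture. A minimal illustration of that lookup (plain Python, not the Genesys API) using the dictionary entries shown above:

```python
# Mirrors the <Value Id="..." Text="..."/> entries from the language
# XML files above, keyed by culture.
dictionaries = {
    "en-US": {("DispositionCodeView.TextBlockDisposition", "Text"): "The Disposition"},
    "fr-FR": {("DispositionCodeView.TextBlockDisposition", "Text"): "La Disposition"},
}

def translate(culture, value_id, attribute):
    # Analogous to LanguageDictionary.Current.Translate(id, attribute):
    # look up the entry with that Id and return the named attribute.
    return dictionaries[culture][(value_id, attribute)]

assert translate("en-US", "DispositionCodeView.TextBlockDisposition", "Text") == "The Disposition"
assert translate("fr-FR", "DispositionCodeView.TextBlockDisposition", "Text") == "La Disposition"
```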
Parameterization
Workspace Desktop Edition is configured as a role-based application. For example, if an agent is assigned the task of TeamCommunicator, the Click-Once group file that is related to this task is downloaded when the application starts up and the associated module is loaded in RAM. The GUI that is specific to this task is then displayed only to the agents that are assigned the TeamCommunicator task.
The task section in the following example enables you to download and execute a custom module extension. If the task name (InteractionWorkspace.TeamCommunicator.canUse) is configured in Configuration Manager, the required group of files (TeamCommunicator) is downloaded, and the module (TeamCommunicatorModule) is executed.
This parameterization functionality is configured in the InteractionWorkspace.exe.config file, as shown in the following example:
[XML]
<configuration>
  ...
  <tasks>
    ...
    <task name="InteractionWorkspace.Features.TeamCommunicator"
          clickOnceGroupsToDownload="TeamCommunicator"
          modulesToLoad="TeamCommunicatorModule" />
    ...
  </tasks>
  <modules>
    ...
    <module assemblyFile="Genesyslab.Desktop.Modules.TeamCommunicator.dll"
            moduleType="Genesyslab.Desktop.Modules.TeamCommunicator.TeamCommunicatorModule"
            moduleName="TeamCommunicatorModule"
            startupLoaded="false"/>
    ...
  </modules>
  ...
</configuration>
Parameterization functionality can also be accomplished by loading a custom module conditioned with a task. In the Configuration Manager, a role must be configured with the name of the task. In this example, the task is named InteractionWorkspace.ExtensionSample.canUse and assigned to the agent. This custom parameterization functionality is configured in the ExtensionSample.module-config file, as shown in the following example:
[XML]
Internationalization
WPF and .NET work with Unicode strings, so internationalization does not normally require extra coding. However, there are some potential issues to consider when creating your custom code, such as:
- Strings coming from the server might not be in true Unicode.
- The language might not be read/written from left to right (for example, Arabic languages).
- The correct font must be installed on the agent's system.
Screen Reader Compatibility
The Microsoft UI Automation API is used for WPF applications that require accessibility functionality. The following two tools are available to assist you in developing applications that are compliant with accessibility software, such as Job Access With Speech (JAWS):
- UISpy.exe (Microsoft Windows SDK)--Displays the GUI controls tree along with the UIAutomation properties of the controls (such as AccessKey, Name, and others)
- Narrator (Microsoft Windows)--Reads the content of a window
Use the following code sample to add a name to a GUI control in the XAML file:
[XAML]
<TextBox Name="textBoxUserName" AutomationProperties.
The AutomationProperties.Name of the TextBox control is automatically set with the content value of a Label control. If a GUI control already has a Label control, the XAML file looks similar to the following example:
[XAML]
<Label Target="{Binding ElementName=textBoxUserName}" Content="_UserName" /> <TextBox Name="textBoxUserName" />
Note: The AutomationProperties.Name must be localized.
Themes
Genesys recommends that you place the control styles and color resources that are used in the application into an XAML file containing a WPF ResourceDictionary. This enables you to modify and extend an existing theme. To make the themes extensible, use ThemeManager to register all the available themes in the application. When a theme is changed, ThemeManager copies this ResourceDictionary to the global application ResourceDictionary. All previously copied styles and brushes are overwritten with the new ones. Note: The XAML file that you create to contain the control styles and color resources is not a Microsoft Composite Application Library (CAL) module.
To add a new theme, you must first create a new theme in a .module-config file, for example:
[.module-config]
<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <configSections>
    <section name="themes" type="Genesyslab.Desktop.Infrastructure.Theming.ThemesSection, Genesyslab.Desktop.Infrastructure" />
  </configSections>
  <themes>
    <theme name="CustomTheme" displayNameKey="Theme.Custom.DisplayName" mainResourceDictionary="/Genesyslab.Desktop.Modules.CustomThemeSample;component/Resources/themes/CustomTheme.xaml">
      <xmlDictionaries>
        <xmlDictionary name="NewTheme" path=".\Resources\ResourcesDefinitionCustom.xml"></xmlDictionary>
      </xmlDictionaries>
    </theme>
  </themes>
</configuration>
The CustomTheme.xaml file must declare the main resource dictionary of the new style and Custom Color dictionary, for example:
[XAML]
<ResourceDictionary xmlns="" xmlns:
  <ResourceDictionary.MergedDictionaries>
    <!-- New IW Style -->
    <ResourceDictionary Source="/Genesyslab.Desktop.WPFCommon;component/Resources/NewStyles/NewStylesResourceLibrary.xaml"/>
    <ResourceDictionary Source="/Genesyslab.Desktop.Modules.CustomThemeSample;component/Resources/ColorBrushes/CustomDefaultColorTheme.xaml"/>
  </ResourceDictionary.MergedDictionaries>
</ResourceDictionary>
Use the gui.themes option to add the new theme name.
Loosely-coupled Application Library and Standard Controls
Workspace Desktop Edition is a modular Windows Presentation Foundation (WPF) client application and uses the standard WPF controls. This section provides information about these controls. The Loosely-coupled Application Library is part of the Composite Application Guidance which aims to produce a flexible WPF client application that is loosely coupled. The following graphical tree shows a typical composite application built with loosely-coupled applications:
Shell
  Region1
    View11
    View12
  Region2
    View21
      Region21
        View211
        View212
The typical GUI is composed of a shell, region(s), and view(s). The shell is the main window of the application where the primary user interface (UI) content is contained. The shell is usually a single main window that contains multiple views. The shell can contain named regions where modules can add views. A region is a rectangular graphical area that is embedded in a shell or a view and can contain one or more views. Views are the composite portions of the user interface that are contained in the window(s) of the shell. Views are the elementary pieces of UI, such as a user control that defines a rectangular portion of the client area in the main window.
Views
A view contains controls that display data. The logic that is used to retrieve the data, handle user events, and submit the changes to the data is often included in the view. When this functionality is included in the View, the class becomes complex, and is difficult to maintain and test. You can resolve these issues by using Presentation Patterns and Data Binding.
Presentation Patterns
Use patterns to separate the responsibilities of the display and the behavior of the application into different classes, named the View and the View Model. Genesys suggests the following presentation patterns:
- Model-View-ViewModel (MVVM)
- Model-View-PresentationModel (Presentation Model)
The MVVM pattern is used in Genesys samples.
- The Model is similar to having several data sources (InteractionService from Enterprise Services, Statistics from the Platform SDK, or any other data).
- The View is a stateless UserControl; a graphical interface with no behavior.
- The ViewModel is an adaptation layer between the Model and the View. It offers the Model data to the View. The behavior of the View is defined in this layer. For instance, the View launches the commands, but the commands are implemented in the ViewModel.
Each view consists of several classes. The VoiceView is described in the following table:
Data Binding
When you use presentation patterns in application development you have the option of using the data-binding capabilities that are provided by the WPF. Data-binding is used to bind elements to application data. The bound elements automatically reflect changes when the data changes its value. For example, if the DataContext property of the VoiceView class is set to an instance of the VoiceViewModel class, then the Text property of a TextBlock control can have a DataBinding toward the PhoneNumber property of the VoiceViewModel class. By default it is a two-way binding. If the value of either the VoiceViewModel.PhoneNumber or the TextBlock display changes then the other changes as well. The following example also shows how the command VoiceViewModel.AnswerCallCommand can be initiated from the VoiceView:
<TextBlock Text="{Binding PhoneNumber}"/>
<Button Command="{Binding AnswerCallCommand}">Answer Call</Button>
Note: Modularity requires that each interface is registered in the module initialization. See Customize Views and Regions for details on how to register an interface.
Tips and Tricks
When you need to control several Views, you can use a Controller class to coordinate the activities of multiple Views (and other controllers). The ViewModel is created by the View, and the Views are created and managed by the Controllers. The following logical tree is a depiction of the relationship between the instantiated classes:
Controller1
  Controller11
    View111
      ViewModel111
    View112
      ViewModel112
  View12
    ViewModel12
Controller2
  View21
    ViewModel21
  View22
    ViewModel22
Use the information provided in this section along with the information in the Customizing Workspace Desktop Edition topic to create your own view.
XmlWriterSettings Class
Definition
public ref class XmlWriterSettings sealed
public sealed class XmlWriterSettings
type XmlWriterSettings = class
Public NotInheritable Class XmlWriterSettings
- Inheritance: Object → XmlWriterSettings
Examples

The example produces the following output:
<order orderID="367A54" date="2001-05-03"> <price>19.95</price> </order>
Remarks
The Create method is the preferred mechanism for obtaining XmlWriter instances. The Create method uses the XmlWriterSettings class to specify which features to implement in the XmlWriter object that is created.
Note
If you're using the XmlWriter object with the Transform method, you should use the OutputSettings property to obtain an XmlWriterSettings object with the correct settings. This ensures that the created XmlWriter object has the correct output settings.
The XmlWriterSettings class provides properties that control data conformance and output format.
For data conformance checks and auto-corrections, use these properties:
To specify output format, use these properties: | https://docs.microsoft.com/en-gb/dotnet/api/system.xml.xmlwritersettings?view=netframework-4.8 | 2020-02-17T00:58:20 | CC-MAIN-2020-10 | 1581875141460.64 | [] | docs.microsoft.com |
Class Holder

Manages the relationship to another object.
Template Parameters
Interface Function Overview
void assignValue(object, value);
    Assigns value to item.
void clear(holder);
    Clear/destruct the Holder's value.
void create(holder[, object]);
    Makes an object the owner of its content.
void detach(holder);
    Makes an object independent from other objects.
bool empty(holder);
    Test a Holder for being empty.
TGetValue getValue(holder);
    Return the get-value of the holder.
void moveValue(holder, value);
    Move a value into a holder.
void setValue(holder, object);
    Makes the holder dependent.
TReference value(holder);
    Return a reference to the value of the holder.
Interface Metafunction Overview
GetValue<THolder>::Type;
    Return get-value type of Holder.
Reference<THolder>::Type;
    Return the reference type of a Holder.
Spec<THolder>::Type;
    Return the specialization tag for a Holder.
Value<THolder>::Type;
    Return value type of Holder.
Detailed Description
Remarks
The main purpose of this class is to facilitate the handling of member objects. If we want class A to be dependent on or the owner of another object of class B, then we add a data member of type Holder<B> to A. Holder offers some useful access functions and stores the kind of relationship between A and B.
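The three states a Holder can be in ('empty', 'owner', and 'dependent') can be illustrated with a toy model. The following Python sketch only mimics the semantics described on this page; it is not SeqAn's API, which is a C++ template:

```python
class ToyHolder:
    """Toy model of the three Holder states: 'empty', 'owner'
    (holds its own copy) and 'dependent' (references an object
    owned elsewhere)."""

    def __init__(self):
        self.state = "empty"
        self._value = None

    def set_value(self, obj):
        # like setValue(): become dependent on an external object
        self._value = obj
        self.state = "dependent"

    def create(self, obj=None):
        # like create(): end up in state 'owner'
        if obj is not None:
            self._value = list(obj)          # store a copy of obj
        elif self.state == "dependent":
            self._value = list(self._value)  # copy the former object
        elif self.state == "empty":
            self._value = []                 # default-construct a value
        # if already 'owner' and no obj is given: nothing happens
        self.state = "owner"

    def detach(self):
        # like detach(): stop depending on anything outside the holder
        if self.state == "dependent":
            self.create()

    def empty(self):
        return self.state == "empty"

shared = [1, 2, 3]
h = ToyHolder()
h.set_value(shared)   # dependent: h references the external object
h.detach()            # now owns a private copy
shared.append(4)
assert h._value == [1, 2, 3]   # unaffected by the change to `shared`
```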
See Also
Interface Functions Detail
void assignValue(object, value);
Parameters
This function is similar to assign. The difference is that assignValue just changes a value stored in object or the value object points to, while assign changes the whole object.
If object is a container (that is pos is not specified), the whole content of object is replaced by value.
If value is not used again after calling this function, then consider to use moveValue that could be faster in some cases instead.
Data Races
See Also
void clear(holder);
Parameters
Data Races
void create(holder[, object]);
Parameters
After this operation, holder will be in state 'owner'. If object is specified, holder will hold a copy of object at the end of this function. If object is not specified, the action depends on the former state of holder:
- If the state of holder was 'empty', a new object is default constructed and stored into holder.
- If the state of holder was 'dependent', a copy of the former object is made and stored into holder.
- If the state of holder was already 'owner', nothing happens.
It is guaranteed that, after calling this function, source and target can be used independently.
Data Races
void detach(holder);
Parameters
Remarks
After this function, holder does not depend on any other entity outside of holder, such as a source or a host, and dependent(holder) returns false.
Data Races
bool empty(holder);
Parameters
Returns
Remarks
empty(x) is guaranteed to be at least as fast as length(x) == 0, but can be significantly faster in some cases.
Data Races
See Also
TGetValue getValue(holder);
Parameters
Returns
Data Races
void moveValue(holder, value);
Parameters
Data Races
void setValue(holder, object);
Parameters
After this operation, holder will be in state 'dependent'.
@SELECT
Syntax
@SELECT sql_query
Binds the variable to a database query, where sql_query is the actual database query. The return value is a recordset.
Example: We can use the @SELECT command the same way we would use a regular SQL SELECT statement:
Note: When using the CMS database tables you can use the [PREFIX] tag before the table name. This will append the database and table prefix to the table name. You can also use manually apply the table prefix to the table name if it's known. For example, modx_site_content, where modx_ is the table prefix used in this example..
array(['assets/images/docs/bind_7.gif', None], dtype=object)] | www.evolution-docs.com |
Customising news output
By default, a News & Blog page will simply list items, using the article_list.html template, while an article will use the article_detail page.

At its most basic, all the article_list.html template does is extend a base template and add the article list. Similarly, the article_detail will extend a base template.
It’s easy to override these templates to add your own components to your News & Blog pages. If you add static placeholders to the templates, then you will be able to add arbitrary plugins to it; these will then appear on all the News & Blog pages that use those templates.
Customising a news section page
The simplest news section page (the django CMS page that has a News & Blog apphook attached to it) is a list of articles.
The optional Aldryn Bootstrap Boilerplate templates, which will be used if you are using the Aldryn Boilerplate Bootstrap 3 components - see Aldryn Boilerplate support - offer something more sophisticated.
In that case, the article list template will extend a more complex template, that implements some static placeholders.
It makes possible the layout represented here, which was taken from the Divio website. It’s worth describing how it’s implemented, to show some of the possibilities.
The page has a static_placeholder newsblog_feature. Into this are placed:
- a heading, Featured articles
- a Featured articles plugin, set to display the latest three articles on which the “featured” flag has been set
- a heading, Recent articles
Below the newsblog_feature static placeholder, the article_list.html template simply lists the articles (4).
On the right are the items from the newsblog_sidebar. These are:
- a Categories plugin, that lists the different categories of article that have been published
- an Authors plugin, that lists different authors of published articles
See Using plugins for more details of the different plugins available.
Customising News & Blog article templates
Articles can be similarly customised. For example you might add a Related articles plugin to the static placeholder of your articles, so that each article will display a list of related articles.
Section-specific content
If you have multiple news sections, you can also have Section-specific shared content, by using apphook-configuration-aware placeholder template tags as described in Section-specific shared content. | https://aldryn-newsblog.readthedocs.io/en/latest/how-to/customising_news_output.html | 2020-02-17T00:37:57 | CC-MAIN-2020-10 | 1581875141460.64 | [array(['../_images/news-page-example.jpg',
'a custom news page example layout'], dtype=object)] | aldryn-newsblog.readthedocs.io |
Chorus user Permissions are here to get the right team members more involved, protect data privacy, and empower your sales team!
Here are some ways Permissions can make your life easier:
- Allow managers to invite users, if managers at your organization own new hire onboarding.
- Limit deletion or modification of conversation data to only users with the right level of access.
- Enable internal sharing only, and control data sharing by defining who can send recordings to users outside of your organization.
- Allow managers to download recordings, so they can listen to recordings offline at their leisure.
- Let Managers develop their own coaching initiatives for their teams. Get managers involved in developing a coaching culture for their teams.
- Empower managers or reps to create trackers for their team’s talk tracks. Tracker creation doesn’t have to be managed solely by Chorus Admins anymore; by setting role-based permissions, team members can contribute to tracker creation. This will enable managers to get more involved in coaching, and reps to create trackers for the key words and phrases they are working on.
- Unlock peer-to-peer coaching for Reps, or alternatively, limit access to scorecard completion to Managers or Enablement users.
Role Permissions
Each Chorus user is assigned to a specific “role” within Chorus. These roles parallel those in a typical Sales Org:
- Admin
- Enablement / Leadership
- Manager (AE / Other)
- Manager (SDR)
- Rep (AE / Other) -- Customer Success and Account Managers will have the best experience here
- Rep (SDR)
It’s important to ensure your users are assigned to the correct Chorus role for their position as their user experience is tailored to the needs of each position. For example, managers’ home page is their team’s recordings, while Reps’ homepage is the My Recordings page. (To change someone’s role in Chorus, go to Settings > User Management > Users.)
Each role can have its own set of permissions beyond homepages. Permissions settings will affect every person who is assigned to that specific role. For example, adjusting permissions for “Rep (AE/other)” would affect every individual person who is a Rep (AE/other).
The Roles and Permissions page can be found by clicking on your initials > Settings > User Management > Roles and Permissions.
By default, permissions are automatically set such that each role can:
The default settings for each role can be adjusted as needed. To change permissions for a role and everyone assigned to that role, hover the cursor over the role type you want to change and click “edit”:
From there, you’ll be taken to the permissions page for that role; here’s what it looks like for Manager (AE/Other):
Let’s go into a breakdown of what each permission block means.
1. General
At the top of the first row you’ll see two tabs: “General” and “Members”. The General tab is the page with all the permission settings on it, and the Members tab lists everyone who will be affected by changes made to permissions.
Under that is a brief description of the role next to Default License type. The license type will be auto-selected to be either Recorder or Listener, depending on the role. Reps are automatically set to be recorders, whereas everyone else (Admin, Enablement, and Managers) is set to be listeners. As a refresher, Listeners can view calls, comment, and make playlists, but cannot record their own calls in Chorus.
To change the license type for new users in a specific role, select the one you want and click “save”.
2. Administrative settings include System Settings and User Management. By necessity, only the Admin role has baked-in access to all pieces in Administrative settings, and this cannot be changed. Should you want other role types to have access to any of these, select the ones you want for that role and click “save”.
Integrations - checking this will allow a role type to set up Chorus integrations such as CRM and Dialers.
Organization Settings - checking this will allow a role type to change organization-wide settings such as compliance and meeting rules.
Invite and Manage Users - checking this will allow non-admins to invite users with existing team assignments.
Teams and Data Access - checking this will allow a role type to create and edit teams, as well as dictate which records each team has access to in Chorus.
Roles and Permissions - checking this will allow adjustment of roles and permissions for others.
3. Recordings
This section is broken down into 4 quadrants:
Edit/Delete Recordings: decide which recordings a particular role can edit or delete: all recordings in Chorus, their own recordings, or none. Click to learn how to delete recordings
Share Recordings: decide who this role type can share your organization’s recordings with: anyone in the world, only their colleagues, or no one. Limiting this is helpful to prevent sensitive information from being shared. Click to learn more about how to share recordings
Make Recordings Private: decide which recordings a particular role can hide from the rest of your organization: anyone’s recordings, their own recordings, or none. Being able to make recordings private might be useful for Leadership and Managers for sensitive calls, but might not make sense for all reps. Disabling this for some folks can help block unwanted hiding of calls. Click to learn how to make recordings private
Download Recording Content: decide which recordings a role type can download: any recordings, only their own calls, or none. Enabling this can be helpful for managers who want to review calls while on a plane, traveling, etc. Click to learn how to download content
4. Coaching
Coaching is broken down into Initiatives and Scorecards.
Initiatives and scorecards go hand-in-hand. An initiative is a specific subject managers are interested in coaching on, and scorecards measure how well individuals are doing in that subject during an individual call. This means initiatives facilitate tracking performance and progress over time. (Initiatives are found by clicking on the Deals page with the megaphone icon).
Decide who can create initiatives and who can complete scorecards in these two columns. Allowing managers to create initiatives is great for getting managers involved in developing a coaching culture for their teams, while enabling Reps to complete scorecards for others on their team will unlock peer-to-peer coaching.
5. Tools
Monitoring talk tracks is super easy with Trackers. Letting Managers create trackers themselves is helpful to get them more involved in coaching and team management, and letting Reps create their own trackers is great for self-monitoring the key words and phrases they are working on.
6. Save your changes!
Hitting save at the bottom of the page will update the permissions for everyone in that role. These permissions can be changed as often as you like, and default permission settings can be restored if desired.
'roles_and_permissions.gif'], dtype=object)
array(['/hc/article_attachments/360056486333/roles_overview.png',
'roles_overview.png'], dtype=object)
array(['/hc/article_attachments/360053425793/Image_2019-12-06_at_8.17.48_AM.png',
'Image_2019-12-06_at_8.17.48_AM.png'], dtype=object)
array(['/hc/article_attachments/360053425773/Image_2019-12-06_at_8.32.34_AM.png',
'Image_2019-12-06_at_8.32.34_AM.png'], dtype=object)
array(['/hc/article_attachments/360052518074/Image_2019-12-06_at_9.06.17_AM.png',
'Image_2019-12-06_at_9.06.17_AM.png'], dtype=object)
array(['/hc/article_attachments/360053426073/Image_2019-12-06_at_9.37.48_AM.png',
'Image_2019-12-06_at_9.37.48_AM.png'], dtype=object)
array(['/hc/article_attachments/360052518214/Image_2019-12-06_at_9.39.51_AM.png',
'Image_2019-12-06_at_9.39.51_AM.png'], dtype=object) ] | docs.chorus.ai |
Updating the Linux microPlatform Core
Your factory platform manifest has been separated to make consuming core platform updates easier. At Foundries.io we release Linux microPlatform updates early and often in an effort to get the latest security fixes out to users.
If you would like to try out the latest, we provide a helper script in your lmp-manifest project called update-factory-manifest.
This script will automatically attempt to update your manifest to the latest version of the Linux microPlatform. If there are merge conflicts, it will be up to you to fix and commit them.
To run the script, run the following command from within your lmp-manifest project:
git clone <myfactory>/lmp-manifest.git
cd lmp-manifest/scripts/
./update-factory-manifest
When the new manifest files have been successfully pushed, a new platform build will be triggered, and once published the update can be deployed.
If something goes wrong, don't fret! This is why we use version control:

git revert HEAD
git push
Live Forms v8.1 is no longer supported. Please visit Live Forms Latest for our current Cloud Release. Earlier documentation is available too.
One of Live Forms' most powerful and useful features is its ability to automatically ensure that any submitted form generates a set of XML documents that are valid with respect to their corresponding XML schemas. The Live Forms application does this by:
On this page:
Live Forms controls may be marked as required or optional by setting the control's required property in edit mode. Controls that are generated from an uploaded schema will automatically be designated as required or optional depending on the schema (for example, whether the control is required based on the minOccurs value). Validation for controls you generate from schema elements depends on the element’s XSD type and other schema-specific information.
On form load, required controls that do not contain valid values have a visual indicator. For example, an invalid control has a yellowish background when a required field is missing a value, and additionally, awarning icon when a control (either required or not) contains an invalid value.
Here is a form that is missing values in required fields.
Here is the form when the user has entered valid values into the required fields.
And another that has a value in the field but the value is invalid for the given field type. Notice the error message, yellow background, and the warning icon. When the user enters a valid value in the field, the error indicators disappear. Live Forms will not allow the form to be submitted while required fields are empty. The user must provide the information for these fields, but it is not immediately obvious what data is missing. The user is forced to scroll up the form to find the errors.
Live Forms provides a method to show validation errors at the time of submission. Here's how it works:
Be aware that error messages will stay visible even when you click the Submit button, but you will not see them in the stored submission. The form cannot be submitted until valid values are entered in the three newly required fields.
If the user changes his/her mind and removes the value from the Street field, Live Forms will recalculate the validity of the form and infer that the Address section is no longer invalid since it is optional. The generated XML instance document will also not contain an address element. Once again, Live Forms is automatically ensuring that it is not possible to submit a form that is in an invalid state and that would generate an invalid document.
You can find information about the section Required property here.
Message controls can be included inside an optional section provided the Save Value property is unchecked.
Form fields added from Live Forms' control palette have built-in validation rules. The table below details the default validation for each control type in Live Forms:
A pattern that restricts a text control to only allow strings formatted as a US zip code: ##### or #####-####:
\d{5}|\d{5}-\d{4}
The form will flag an error unless the value entered is either five digits or five digits followed by the '-' character followed by 4 digits.
This pattern validates US zip codes (##### or #####-####) and Canadian postal codes (L#L #L#).
(\d{5}(-\d{4}))|(\d{5})|([ABCEGHJKLMNPRSTVXY]{1}\d{1}[A-Z]{1} *\d{1}[A-Z]{1}\d{1})
This pattern ensures that the user enters a valid Social Security Number:
\d{3}-\d{2}-\d{4}
You should not try to apply a pattern to a money control. If you need to apply special behavior to a money control (for example, using three decimal places), we recommend using a text field, which affords a wider range of patterns you can apply. You could also use a number field and apply rules to it — such as rounding up to three decimal places.
Patterns for number ranges are not as straightforward as you might imagine. If you want a quantity control to allow only numbers in the range 1-42, it is not sufficient to use the pattern [1-42]. Here is the pattern that will work:
([1-9]|[1-3][0-9]|4[0-2])
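These patterns can be sanity-checked outside the form designer. The sketch below uses Python's re module purely for illustration (it is not frevvo code); fullmatch mirrors the whole-value matching that form validation performs:

```python
import re

# Patterns from this page, applied to the entire value as form validation does.
ZIP = r"\d{5}|\d{5}-\d{4}"                # US zip: ##### or #####-####
SSN = r"\d{3}-\d{2}-\d{4}"                # US Social Security Number
QTY_1_42 = r"([1-9]|[1-3][0-9]|4[0-2])"   # integers in the range 1-42

def is_valid(pattern, value):
    """True when the whole value matches the pattern (as in form validation)."""
    return re.fullmatch(pattern, value) is not None

print(is_valid(ZIP, "12345-6789"))  # True
print(is_valid(QTY_1_42, "42"))     # True
print(is_valid(QTY_1_42, "43"))     # False
```

The same approach works for the zip/postal-code combination pattern above; only the pattern string changes.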
Live Forms' default number control supports digits followed by an optional decimal point and multiple decimal places. Suppose instead you want to allow numbers containing commas and optionally starting with a '$' and only up to 2 decimal places. For example: $1,000.50.
If you use Live Forms In-house (downloaded and installed on your computer), you can change Live Forms' built-in patterns for the Email control by editing the types.xsd file included in the <frevvo installdir>/tomcat/webapps/frevvo.war file. This file includes the email type definition shown below, and you can edit its pattern value. See the example in the phone pattern section below for instructions to modify the types.xsd file.
<xsd:simpleType <xsd:restriction <xsd:pattern </xsd:restriction> </xsd:simpleType>
frevvo OEM partners may choose this method when customizing Live Forms. The built-in values are shown in the image below:
Follow these steps to change the current built-in pattern to ##-####-#### or ####-###-###.
If you are using Live Forms In-house (downloaded and installed on your computer), you can change Live Forms' built-in patterns for the Phone control by:
Rezip all the files in the c:\tmp\frevvo-war directory, even the ones you did not edit.
Zip will often give your zipfile a .zip extension. Make sure you change this to a .war extension.
<xsd:simpleType <xsd:restriction <xsd:pattern <xsd:pattern </xsd:restriction> </xsd:simpleType>
This guide will help you get started with WHMCS Client Area -WCAP in 3 steps.
- Install & Configure WHMCS Module (WHMPress helper)
- Install & Configure WP Plugin (WCAP)
- Use WCAP
In order to get both WHMPress Helper and WCAP, download the WHMCS_Client_Area_API-v.2.x.xx package.
- WHMCS System URL: as set under WHMCS > General Settings > WHMCS System URL (details)
- WHMCS Admin User/ Password: A valid WHMCS admin user & password
- api_access & auto_auth Keys: API access key and Auto auth keys are created in WHMCS configuration.php file. To create these keys, add following lines to WHMCS configuration file.
Auto Auth Key is required for WHMCS versions prior to 8.1 but Api Access Key is required in all cases.
//--- keys for API access ---
$api_access_key = 'secret_key_passphrase_goes_here';
$autoauthkey = 'auth_key_passphrase_goes_here';
Step 1:
If you have configured WCAP successfully, simply paste the following shortcode into a WordPress page. The page is usually named client-area.
[whmcs_client_area]
Step 2:
Open WCAP > Settings > and fill in following fields.
- Client Area URL: URL of client area page created in step-1.
- After Login Redirect URL: This can be any page where you want to send users after they log in; if you leave it empty, the default is the client area dashboard.
- After Logout Redirect URL: Use this setting to redirect users after logout; if you leave it empty, the default is the client area login page.
- Click Save Settings
Head to the client-area page to see how it works.
Clone & Commit via Web
Clone, edit, commit, push and pull can be performed using Git directly from the command line, by using a Git client, or via the web interface. The first option is shown in the sections Clone & Commit via HTTP and Clone & Commit via SSH. The last option is detailed below.
Copy the repo URL from the Codeberg website to your Git client using the
Copy icon.
If you want to download a copy of a specific state of the repository, without its version history, click on the
Download Repository icon to download either as ZIP or TAR.GZ.
Edit
Click on the file you want to edit from the list of files in the repo. Let's try it here with the
README.md file.
The pencil tool (
Edit File) will open a new window.
There you can edit the file as you wish.
The
Preview tab shows you what the file will look like, and the
Preview Changes tab will highlight the changes to the file (red for deletions and green for additions).
Commit
A commit is a record of the changes to the repository. This is like a snapshot of your edits.
The commit section is at the bottom of the edit window:
A commit requires a commit message. A default message is added, but do not hesitate to edit it. Make sure your commit message is informative, for you, your collaborators and anyone who might be interested in your work. Some advice on how to write a good commit message can be found on countless websites and blogs!
If you intend to start a pull request with this commit, you should choose the option
Create a new branch for this commit and start a pull request. It will make it easier to work on the different commits without mixing them if they are in different forks. Check the documentation on Pull requests and Git flow for more details.
Submit your changes by clicking on
Commit Changes.
Push and pull
Synchronizing the modifications (commits) from the local repository to the remote one on Codeberg is called pushing; fetching modifications from the remote repository into the local one is called pulling.
Pushing and pulling make sense only if you work locally. This is why there is no "push" or "pull" button on the Codeberg web interface; committing there already pushes to the remote repository on Codeberg, and there is therefore nothing to pull (except pull requests of course).
Template Files - What are they and how are they used?
The template files used in Zen Cart provide the structure and layout of the various pages of your cart. They make use of the definitions from your language files.
These files are located in
includes/templates/template_default.
Template files are of three types: common, page-specific, and sidebox.
The files can be identified by
a
tpl prefix (e.g.
tpl_shopping_cart_default.php)
These files contain all the information necessary to construct the pages used by your shop.
Common Template Files are located in
includes/templates/template_default/common.
These files are “common” to every page used throughout Zen Cart.
These files consist of the Main Page, the Header, the Footer and the Box files used for the sideboxes.
Page Specific Template Files are located in
includes/templates/template_default/templates.
These files represent the pages of your cart and include the Index page, the Log in page and the Product Display pages.
You can change the layout of each page in your shop by editing these page-specific template files.
Sidebox Template Files are located in
includes/templates/template_default/sideboxes.
These files contain the instructions for placing content into the sideboxes you are using.
Fluid Equations
The starting point is the fluid equations, comprising the conservation laws for mass

\[ \frac{\partial \rho}{\partial t} + \nabla \cdot ( \rho \vv ) = 0 \]

and momentum

\[ \rho \left( \frac{\partial}{\partial t} + \vv \cdot \nabla \right) \vv = - \nabla P - \rho \nabla \Phi; \]

the heat equation

\[ \rho T \left( \frac{\partial}{\partial t} + \vv \cdot \nabla \right) S = \rho \epsnuc - \nabla \cdot ( \vFrad + \vFcon ); \]

and Poisson's equation

\[ \nabla^{2} \Phi = 4 \pi G \rho. \]
Here, \(\rho\), \(P\), \(T\), \(S\) and \(\vv\) are the fluid density, pressure, temperature, specific entropy and velocity; \(\Phi\) is the gravitational potential; \(\epsnuc\) is the specific nuclear energy generation rate; and \(\vFrad\) and \(\vFcon\) are the radiative and convective energy fluxes. An explicit expression for the radiative flux is provided by the radiative diffusion equation,

\[ \vFrad = - \frac{4 a c T^{3}}{3 \kappa \rho} \nabla T, \]
where \(\kappa\) is the opacity and \(a\) the radiation constant.
The fluid equations are augmented by the thermodynamic relationships between the four state variables (\(P\), \(T\), \(\rho\) and \(S\)). Only two of these are required to uniquely specify the state (we assume that the composition remains fixed over an oscillation cycle). In GYRE, \(P\) and \(S\) are adopted as these primary variables1, and the other two are presumed to be derivable from them:

\[ \rho = \rho(P, S), \qquad T = T(P, S). \]
The nuclear energy generation rate and opacity are likewise presumed to be functions of the pressure and entropy:

\[ \epsnuc = \epsnuc(P, S), \qquad \kappa = \kappa(P, S). \]
Footnotes
Application usage and anomalies
As an administrator, you must understand how the application is being utilized. The application's key metrics can help you identify its usage. Since traffic to the application is unpredictable, unusual performance deviations might occur for a specific duration. In such scenarios, you might want to view and analyze these sudden anomalies and determine whether any immediate troubleshooting is required.
Citrix ADM detects such anomalies and provides the necessary details. Navigate to Applications > Dashboard, click an application, and select the Key Metrics tab. Citrix ADM monitors the traffic pattern and analyzes whether the key metrics are in the expected range. If there is any deviation from the expected range, Citrix ADM reports those deviations as anomalies.
You can view anomalies for the following key metrics:
Response Time
Throughput
Data Volume
Requests per second
Click the Metrics with Anomalies tab to view details.
In each metric, you can also click the See more option to view details. The following example is for the application data volume:
You can view:
The graph indicating the maximum value, total value, expected range, and anomalies
The Recommended Actions to troubleshoot the issue
The time and anomaly details under.
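Citrix does not document ADM's detection algorithm here; as a generic illustration of how an "expected range" check works, the sketch below flags points outside mean plus or minus k standard deviations (the throughput values are invented):

```python
from statistics import mean, stdev

def find_anomalies(samples, k=2.0):
    """Flag points outside mean +/- k standard deviations.
    Illustration only - not the algorithm Citrix ADM actually uses."""
    m, s = mean(samples), stdev(samples)
    low, high = m - k * s, m + k * s
    return [(i, x) for i, x in enumerate(samples) if x < low or x > high]

throughput = [100, 102, 99, 101, 100, 98, 400, 101, 99, 100]  # one spike
print(find_anomalies(throughput))  # [(6, 400)]
```

A real monitoring product would also account for seasonality and trend, which a single global mean cannot capture.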
'Metrics with anomalies'], dtype=object)
array(['/en-us/citrix-application-delivery-management-service/media/see-more-data-volume.png',
'See more'], dtype=object) ] | docs.citrix.com |
Error when you access file shares on a SOFS-configured server: Not enough server storage is available to process this command
This article provides a solution to an issue that occurs when you access file shares on a SMB server that has the Scale-Out File Server role configured.
Original product version: Windows Server 2012 R2
Original KB number: 3101545
Symptoms
Consider the following scenario:
- You configure the Scale-Out File Server (SOFS) role on a server that's running Window Server 2012 R2.
- You have server applications and clients that access file shares frequently.
- The applications and clients open many short-lived sessions in which they connect, authenticate, change files, and close the session immediately.
In this scenario, after some time, access to the file shares is unsuccessful, and a STATUS_INSUFF_SERVER_RESOURCES error is recorded in a network capture.
Additionally, when users try to connect to SOFS shares, they receive the following error message:
Not enough server storage is available to process this command.
You also see a high handle count in Lsass.exe on both the coordinator and non-coordinator nodes of the cluster.
Note
If you fail over the disk resource to another node, the issue temporarily does not occur.
Cause
This issue occurs because the applications create new sessions every time that they change a file instead of reusing sessions to generate many metadata changes.
The CSV File System uses the SMB protocol to keep metadata information consistent between the cluster nodes. A high volume of metadata changes generate many SMB sessions between the non-coordinator and coordinator nodes of the cluster and exhaust the SMB table on the coordinator node.
Resolution
To fix this issue for these kinds of application workloads, we recommend that you use the File Server for General Use role instead of SOFS.
Note
The SOFS role should not be used if the workload generates an exceptionally high number of metadata operations, such as opening and creating new files or renaming existing files.
More information
In a network capture between non-coordinator and coordinator nodes, you see that after an SMB Session Setup request, the coordinator node responds with a STATUS_INSUFF_SERVER_RESOURCES error.
Data preferences¶
Ever wondered whether one should approach problem X with data structure Y or Z? This article covers a variety of topics related to these dilemmas.
Nota
This article makes references to "[something]-time" operations. This terminology comes from algorithm analysis' Big O Notation.
Long-story short, it describes the worst-case scenario of runtime length. In laymen's terms:
"As the size of a problem domain increases, the runtime length of the algorithm..."
- Constant-time,
O(1): "...does not increase."
- Logarithmic-time,
O(log n): "...increases at a slow rate."
- Linear-time,
O(n): "...increases at the same rate."
- Etc.
Imagine if one had to process 3 million data points within a single frame. It would be impossible to craft the feature with a linear-time algorithm since the sheer size of the data would increase the runtime far beyond the time allotted. In comparison, using a constant-time algorithm could handle the operation without issue.
By and large, developers want to avoid engaging in linear-time operations as much as possible. But, if one keeps the scale of a linear-time operation small, and if one does not need to perform the operation often, then it may be acceptable. Balancing these requirements and choosing the right algorithm / data structure for the job is part of what makes programmers' skills valuable.
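To make the notation concrete, here is a small Python sketch (not Godot internals, just the general idea): the comparisons a linear search performs grow with the input, while a hash-table lookup does a fixed amount of work regardless of size:

```python
def linear_search(items, target):
    """Scan until found; the worst case touches every record (linear time)."""
    comparisons = 0
    for i, value in enumerate(items):
        comparisons += 1
        if value == target:
            return i, comparisons
    return -1, comparisons

data = list(range(1000))
index, steps = linear_search(data, 999)  # worst case: last element
print(index, steps)                      # 999 1000

# A dict lookup hashes the key once and jumps straight to the record.
lookup = {value: i for i, value in enumerate(data)}
print(lookup[999])                       # 999
```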
Array vs. Dictionary vs. Object
Godot stores all variables in the scripting API in the Variant class. Variants can store Variant-compatible data structures such as Array and Dictionary as well as Object s.
Godot implements Array as a
Vector<Variant>. The engine stores the Array
contents in a contiguous section of memory, i.e. they are in a row adjacent
to each other.
Nota
For those unfamiliar with C++, a Vector is the name of the
array object in traditional C++ libraries. It is a "templated"
type, meaning that its records can only contain a particular type (denoted
by angled brackets). So, for example, a
PoolStringArray would be something like
a
Vector<String>.
Contiguous memory stores imply the following operation performance:
Iterate: Fastest. Great for loops.
- Op: All it does is increment a counter to get to the next record.
Insert, Erase, Move: Position-dependent. Generally slow.
Op: Adding/removing/moving content involves moving the adjacent records over (to make room / fill space).
Fast add/remove from the end.
Slow add/remove from an arbitrary position.
Slowest add/remove from the front.
If doing many inserts/removals from the front, then...
- invert the array.
- do a loop which executes the Array changes at the end.
- re-invert the array.
This makes only 2 copies of the array (still constant time, but slow) versus copying roughly 1/2 of the array, on average, N times (linear time).
Get, Set: Fastest by position. Ex. can request 0th, 2nd, 10th record, etc. but cannot specify which record you want.
- Op: 1 addition operation from array start position up to desired index.
Find: Slowest. Identifies the index/position of a value.
Op: Must iterate through array and compare values until one finds a match.
- Performance is also dependent on whether one needs an exhaustive search.
If kept ordered, custom search operations can bring it to logarithmic time (relatively fast). Laymen users won't be comfortable with this though. Done by re-sorting the Array after every edit and writing an ordered-aware search algorithm.
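The invert-append-invert trick described above can be sketched in Python; plain lists stand in for Godot's Vector-backed Array (illustration only):

```python
def prepend_many_naive(array, new_items):
    """Insert each item at the front; every insert shifts the whole array."""
    for item in new_items:
        array.insert(0, item)
    return array

def prepend_many_reversed(array, new_items):
    """Invert, append at the cheap end, invert back: only two full reversals."""
    array.reverse()
    for item in new_items:
        array.append(item)
    array.reverse()
    return array

print(prepend_many_naive([4, 5], [3, 2, 1]))     # [1, 2, 3, 4, 5]
print(prepend_many_reversed([4, 5], [3, 2, 1]))  # [1, 2, 3, 4, 5]
```

Both produce the same result; the second avoids shifting the whole array once per insert.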
Godot implements Dictionary as an
OrderedHashMap<Variant, Variant>. The engine
stores a small array (initialized to 2^3 or 8 records) of key-value pairs. When
one attempts to access a value, they provide it a key. It then hashes the
key, i.e. converts it into a number. The "hash" is used to calculate the index
into the array. As an array, the OHM then has a quick lookup within the "table"
of keys mapped to values. When the HashMap becomes too full, it increases to
the next power of 2 (so, 16 records, then 32, etc.) and rebuilds the structure.
Hashes are to reduce the chance of a key collision. If one occurs, the table must recalculate another index for the value that takes the previous position into account. In all, this results in constant-time access to all records at the expense of memory and some minor operational efficiency.
Hashing every key an arbitrary number of times.
- Hash operations are constant-time, so even if an algorithm must do more than one, as long as the number of hash calculations doesn't become too dependent on the density of the table, things will stay fast. Which leads to...
Maintaining an ever-growing size for the table.
- HashMaps maintain gaps of unused memory interspersed in the table on purpose to reduce hash collisions and maintain the speed of accesses. This is why it constantly increases in size geometrically by powers of 2.
As one might be able to tell, Dictionaries specialize in tasks that Arrays do not. An overview of their operational details is as follows:
Iterate: Fast.
- Op: Iterate over the map's internal vector of hashes. Return each key. Afterwards, users then use the key to jump to and return the desired value.
Insert, Erase, Move: Fastest.
Op: Hash the given key. Do 1 addition operation to look up the appropriate value (array start + offset). Move is two of these (one insert, one erase). The map must do some maintenance to preserve its capabilities:
- update ordered List of records.
- determine if table density mandates a need to expand table capacity.
The Dictionary remembers in what order users inserted its keys. This enables it to execute reliable iterations.
Get, Set: Fastest. Same as a lookup by key.
- Op: Same as insert/erase/move.
Find: Slowest. Identifies the key of a value.
- Op: Must iterate through records and compare the value until a match is found.
- Note that Godot does not provide this feature out-of-the-box (because they aren't meant for this task).
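Python's built-in dict behaves much like the OrderedHashMap described above, with constant-time keyed access plus remembered insertion order, so it can stand in for a quick illustration:

```python
inventory = {}
inventory["sword"] = 1    # hash the key, place the pair in the table
inventory["shield"] = 2
inventory["potion"] = 7

print(inventory["shield"])  # 2 - one hash plus one index lookup
print(list(inventory))      # ['sword', 'shield', 'potion'] - insertion order

# "Find" (value -> key) has no fast path; it is a linear scan.
key = next(k for k, v in inventory.items() if v == 7)
print(key)                  # potion
```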
Godot implements Objects as stupid, but dynamic containers of data content. Objects query data sources when posed questions. For example, to answer the question, "do you have a property called, 'position'?", it might ask its script or the ClassDB. One can find more information about what objects are and how they work in the Applying object-oriented principles in Godot article.
The important detail here is the complexity of the Object's task. Every time it performs one of these multi-source queries, it runs through several iteration loops and HashMap lookups. What's more, the queries are linear-time operations dependent on the Object's inheritance hierarchy size. If the class the Object queries (its current class) doesn't find anything, the request defers to the next base class, all the way up until the original Object class. While these are each fast operations in isolation, the fact that it must make so many checks is what makes them slower than both of the alternatives for looking up data.
Nota
When developers mention how slow the scripting API is, it is this chain of queries they refer to. Compared to compiled C++ code where the application knows exactly where to go to find anything, it is inevitable that scripting API operations will take much longer. They must locate the source of any relevant data before they can attempt to access it.
The reason GDScript is slow is because every operation it performs passes through this system.
C# can process some content at higher speeds via more optimized bytecode. But, if the C# script calls into an engine class' content or if the script tries to access something external to it, it will go through this pipeline.
NativeScript C++ goes even further and keeps everything internal by default. Calls into external structures will go through the scripting API. In NativeScript C++, registering methods to expose them to the scripting API is a manual task. It is at this point that external, non-C++ classes will use the API to locate them.
So, assuming one extends from Reference to create a data structure, like an Array or Dictionary, why choose an Object over the other two options?
- Control: With objects comes the ability to create more sophisticated structures. One can layer abstractions over the data to ensure the external API doesn't change in response to internal data structure changes. What's more, Objects can have signals, allowing for reactive behavior.
- Clarity: Objects are a reliable data source when it comes to the data that scripts and engine classes define for them. Properties may not hold the values one expects, but one doesn't need to worry about whether the property exists in the first place.
- Convenience: If one already has a similar data structure in mind, then extending from an existing class makes the task of building the data structure much easier. In comparison, Arrays and Dictionaries don't fulfill all use cases one might have.
Objects also give users the opportunity to create even more specialized data structures. With it, one can design their own List, Binary Search Tree, Heap, Splay Tree, Graph, Disjoint Set, and any host of other options.
"Why not use Node for tree structures?" one might ask. Well, the Node class contains things that won't be relevant to one's custom data structure. As such, it can be helpful to construct one's own node type when building tree structures.
extends Object
class_name TreeNode

var _parent : TreeNode = null
var _children : = [] setget

func _notification(p_what):
    match p_what:
        NOTIFICATION_PREDELETE:
            # Destructor.
            for a_child in _children:
                a_child.free()
// Can decide whether to expose getters/setters for properties later
public class TreeNode : Object
{
    private TreeNode _parent = null;

    private object[] _children = new object[0];

    public override void Notification(int what)
    {
        if (what == NotificationPredelete)
        {
            foreach (object child in _children)
            {
                TreeNode node = child as TreeNode;
                if (node != null)
                    node.Free();
            }
        }
    }
}
From here, one can then create their own structures with specific features, limited only by their imagination.
Enumerations: int vs. string
Most languages offer an enumeration type option. GDScript is no different, but
unlike most other languages, it allows one to use either integers or strings for
the enum values (the latter only when using the
export keyword in GDScript).
The question then arises, "which should one use?"
The short answer is, "whichever you are more comfortable with." This is a feature specific to GDScript and not Godot scripting in general; the language prioritizes usability over performance.
On a technical level, integer comparisons (constant-time) will happen faster than string comparisons (linear-time). If one wants to keep up other languages' conventions though, then one should use integers.
The primary issue with using integers comes up when one wants to print
an enum value. As integers, attempting to print MY_ENUM will print
5 or what-have-you, rather than something like
"MyEnum". To
print an integer enum, one would have to write a Dictionary that maps the
corresponding string value for each enum.
If the primary purpose of using an enum is for printing values and one wishes to group them together as related concepts, then it makes sense to use them as strings. That way, a separate data structure to execute on the printing is unnecessary.
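The printing trade-off can be sketched in Python; the enum names here are invented for illustration:

```python
# Integer-valued "enum": fast comparisons, unhelpful printing.
STATE_IDLE, STATE_WALK, STATE_JUMP = 0, 1, 2
STATE_NAMES = {
    STATE_IDLE: "STATE_IDLE",
    STATE_WALK: "STATE_WALK",
    STATE_JUMP: "STATE_JUMP",
}

state = STATE_JUMP
print(state)                # 2 - not informative on its own
print(STATE_NAMES[state])   # STATE_JUMP - needs the extra dictionary

# String-valued "enum": slower comparisons, self-describing printing.
state = "STATE_JUMP"
print(state)                # STATE_JUMP
```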
AnimatedTexture vs. AnimatedSprite vs. AnimationPlayer vs. AnimationTree
Under what circumstances should one use each of Godot's animation classes? The answer may not be immediately clear to new Godot users.
AnimatedTexture is a texture that the engine draws as an animated loop rather than a static image. Users can manipulate...
- the rate at which it moves across each section of the texture (fps).
- the number of regions contained within the texture (frames).
Godot's VisualServer then draws the regions in sequence at the prescribed rate. The good news is that this involves no extra logic on the part of the engine. The bad news is that users have very little control.
Also note that AnimatedTexture is a Resource unlike the other Node objects discussed here. One might create a Sprite node that uses AnimatedTexture as its texture. Or (something the others can't do) one could add AnimatedTextures as tiles in a TileSet and integrate it with a TileMap for many auto-animating backgrounds that all render in a single batched draw call.
The AnimatedSprite node, in combination with the SpriteFrames resource, allows one to create a variety of animation sequences through spritesheets, flip between animations, and control their speed, regional offset, and orientation. This makes them well-suited to controlling 2D frame-based animations.
If one needs to trigger other effects in relation to animation changes (for example, create particle effects, call functions, or manipulate other peripheral elements besides the frame-based animation), then one will need to use an AnimationPlayer node in conjunction with the AnimatedSprite.
AnimationPlayers are also the tool one will need to use if they wish to design more complex 2D animation systems, such as...
- Cut-Out animations: editing sprites' transforms at runtime.
- 2D Mesh animations: defining a region for the sprite's texture and rigging a skeleton to it. Then one animates the bones which stretch and bend the texture in proportion to the bones' relationships to each other.
- A mix of the above.
While one needs an AnimationPlayer to design each of the individual animation sequences for a game, it can also be useful to combine animations for blending, i.e. enabling smooth transitions between these animations. There may also be a hierarchical structure between animations that one plans out for their object. These are the cases where the AnimationTree shines. One can find an in-depth guide on using the AnimationTree here.
See Also

- Interface Recipe: Display a User's Tasks in a Grid With Task Links: Renders a task report with a!queryProcessAnalytics()
- Interface Recipe: Display Processes by Process Model with Status Icons: Shows process information with a!queryProcessAnalytics()
- Process Reports: Create and configure process reports to pass to a!queryProcessAnalytics().
- Query: The Query data type defines any paging and extra filters to apply when querying data.
This manual gives you a walk-through on how to use the ScreenMD command line tool:
Tanimoto Dissimilarity metrics
Tanimoto for integer valued descriptors
Scaled Tanimoto metric for fingerprints
Scaled Tanimoto metric for integer descriptors
Asymmetric Tanimoto metric for integer descriptors
Weighted Euclidean metric
Asymmetric Euclidean metric
Dissimilarity of Molecular Descriptor Sets
2D pharmacophore fingerprints
ScreenMD performs fast virtual screening of large compound libraries using molecular descriptor sets. Virtual screening aims to find compounds that exhibit required chemical, structural, pharmacological or other properties. Such properties are represented as molecular descriptor sets and these descriptor sets are compared against each other by calculating a dissimilarity score between them. Thus the goal of the screening procedure is often expressed as an allowed maximal dissimilarity score: structures with a dissimilarity score below such predefined threshold are accepted by the screening process, while others are rejected.
Both file and database inputs are supported, and in either case molecular structures or molecular descriptor sets (generated by GenerateMD) are accepted on input.
The output of the screening application is either a table of dissimilarity coefficients or an SDfile. The table contains the dissimilarity coefficients of the hit set, while SDfile output contains the hit molecules along with all original tags and new ones storing the dissimilarity coefficients.
ScreenMD takes two input sources, target structures and query structures . These structures, or strictly speaking their corresponding descriptor sets are compared in a pair-wise manner. It is assumed that there are significantly more target structures/descriptors than queries, for instance a few million targets can easily be handled, while normally the number of queries should not exceed few times 10.
Target structures/descriptors usually belong to a compound library with pharmacological or biological interest, while queries define the required properties the compound library is sought for. Queries are often referred to as known actives and the aim of the screening exercise is to find other structures in the target set that exhibit the same chemical, pharmacological or biological activity (e.g. they bind to the same receptor protein).
It is reasonable to suppose that compounds with the same activity share common patterns in their corresponding descriptors. These common patterns can be represented by a hypothesis, which can be regarded as a model active structure (or a model of the active site of a receptor). The target library can then be scanned for structures matching this hypothesis instead of the individual query structures. The use of a hypothesis in the screening procedure not only increases the number of hits but also makes the scanning of the target library more efficient, since instead of several query descriptor sets (one corresponding to each known active compound) only one descriptor set has to be used.
Descriptors offer a simple yet feasible way to create such models. A usual approach, adopted in ScreenMD too, is to calculate the intersection (common part) of the query descriptors. This can be defined as the minimum of the corresponding descriptor cells. ScreenMD provides further alternatives for creating pharmacophore hypotheses; for instance, the average of the query fingerprints can also be used. Another possibility is to use the median hypothesis. In this case the median of the non-zero values in the corresponding descriptor cells is taken, provided the percentage of zero cells is lower than a given threshold. If the percentage of zeros is higher, the corresponding hypothesis cell is set to zero.
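As a concrete sketch of the three hypothesis types described above (the function names and the zero-threshold convention are illustrative assumptions, not ScreenMD's actual API):

```python
# Sketch of the three hypothesis types, applied to integer-valued
# descriptors given as equal-length lists of cells.
# Assumption: the zero threshold is a percentage (0-100), as with the -Z flag.
from statistics import median

def minimum_hypothesis(descriptors):
    # Intersection: cell-wise minimum of the query descriptors.
    return [min(cells) for cells in zip(*descriptors)]

def average_hypothesis(descriptors):
    # Cell-wise arithmetic mean of the query descriptors.
    return [sum(cells) / len(cells) for cells in zip(*descriptors)]

def median_hypothesis(descriptors, zero_threshold=50.0):
    # Median of the non-zero values per cell, unless too many cells are zero.
    hypothesis = []
    for cells in zip(*descriptors):
        zeros_pct = 100.0 * sum(1 for c in cells if c == 0) / len(cells)
        if zeros_pct > zero_threshold:
            hypothesis.append(0)
        else:
            hypothesis.append(median(c for c in cells if c != 0))
    return hypothesis

queries = [[2, 0, 5], [3, 0, 1], [4, 6, 3]]
print(minimum_hypothesis(queries))  # cell-wise minimum of the three queries
```

For the three queries above, the minimum hypothesis keeps only what all actives share, while the median variant zeroes out the second cell because two of the three queries have a zero there.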
The comparison of two descriptors involves the calculation of one or more dissimilarity coefficients using dissimilarity metrics. At present the following metrics are supported: Tanimoto and Euclidean.
Values of these metrics are non-negative numbers. A zero dissimilarity value indicates that the two descriptors are identical, and the larger the value of the dissimilarity coefficient, the greater the difference between the two structures.
In its original form, the Tanimoto metric applies to binary fingerprints, and it is a similarity metric:
where a and b are two binary fingerprints, & denotes the bit-wise AND operator, | denotes the bit-wise OR operator, and B(x) is the number of 1 bits in the binary fingerprint x.
The larger the number of common bits in a and b is, the larger the value of T. Therefore larger values represent higher similarity between a and b: 1 means total similarity (the two descriptors are the same), while 0 represents complete dissimilarity. From this it is straightforward to obtain a dissimilarity measure:
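The two formulas omitted above (lost in extraction) are presumably the standard binary Tanimoto similarity and its complement, reconstructed here from the surrounding definitions:

```latex
T(a,b) = \frac{B(a \,\&\, b)}{B(a \mid b)},
\qquad
d_T(a,b) = 1 - T(a,b) = 1 - \frac{B(a \,\&\, b)}{B(a \mid b)}
```

Here B(a & b) counts the bits set in both fingerprints and B(a | b) the bits set in either, matching the definition of B(x) given above.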
However, extending binary Tanimoto dissimilarity to molecular descriptors other than binary fingerprints is less obvious.
The idea is to represent an integer value as a unary number, that is, to replace it by as many 1 bits as its value. This can be extended to a binary fingerprint by adding leading zeros to each series of 1s so that all series have the same length. This way a binary fingerprint is generated and the original Tanimoto metric can be applied to it. For example, the series 13, 4, 7, 9 can be represented as unary numbers as follows:
1111111111111, 1111, 1111111, 111111111
The binary form is:
1111111111111, 0000000001111, 0000001111111, 0000111111111
which can simply be written as a binary fingerprint:
1111111111111000000000111100000011111110000111111111
for which applying Tanimoto is simple. With the above consideration in mind, Tanimoto can be rewritten for integer-valued descriptors in the form below:
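The omitted formula is presumably T(a,b) = Σᵢ min(aᵢ,bᵢ) / Σᵢ max(aᵢ,bᵢ), since a bit-wise AND of right-aligned unary encodings counts the minimum of each pair of cells and the OR counts the maximum. A small sketch (with hypothetical helper names) verifying that the two formulations agree:

```python
# Encode each integer cell as a zero-padded unary bitstring and check that
# binary Tanimoto on the concatenation equals sum(min)/sum(max) on the
# original integers. Helper names are illustrative, not ScreenMD API.

def unary_fingerprint(values, width):
    # Each value v becomes (width - v) zeros followed by v ones.
    return "".join("0" * (width - v) + "1" * v for v in values)

def binary_tanimoto(fp_a, fp_b):
    both = sum(x == "1" and y == "1" for x, y in zip(fp_a, fp_b))
    either = sum(x == "1" or y == "1" for x, y in zip(fp_a, fp_b))
    return both / either

def integer_tanimoto(a, b):
    return sum(map(min, a, b)) / sum(map(max, a, b))

a, b = [13, 4, 7, 9], [10, 6, 7, 2]
width = max(a + b)  # shared cell width so positions in a and b align
fa, fb = unary_fingerprint(a, width), unary_fingerprint(b, width)
```

With the 13, 4, 7, 9 example from the text, `unary_fingerprint` reproduces the binary fingerprint shown above exactly.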
According to published results, the selectivity of the binary Tanimoto dissimilarity metric can be improved by scaling. Scaling, however, is feasible only when several compounds exhibiting the same pharmacological activity are known. A consensus fingerprint is created from the descriptors of these known actives:
where a1, ..., an are the descriptors of the n known actives. The consensus fingerprint is applied to accentuate both similarities and dissimilarities between a target compound and a query structure (which, obviously, should not be involved in the construction of the hypothesis). Non-zero bits of the hypothesis scale the corresponding bits of the target and the query fingerprints:
The scale factor (s) is an arbitrary integer between 1 and 10 (values larger than 10 rarely improve the hits).
The extended Tanimoto formula for integer valued descriptors can be combined with the scaled Tanimoto metric in a natural way:
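One plausible reconstruction of the omitted combined formula, treating the scale factor s as a per-cell weight applied wherever the consensus fingerprint h has a non-zero bit (this exact form is an assumption, not confirmed by the source):

```latex
T_s(a,b) = \frac{\sum_i w_i \,\min(a_i, b_i)}{\sum_i w_i \,\max(a_i, b_i)},
\qquad
w_i = \begin{cases} s & \text{if } h_i > 0 \\ 1 & \text{otherwise} \end{cases}
```

Setting s = 1 recovers the unscaled integer-valued Tanimoto formula.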
Pharmacophore hypotheses offer a sophisticated approach to improving the selectivity and efficiency of screening. Yet they suffer from an apparent deficiency when used in ordinary Euclidean, Tanimoto, or other dissimilarity calculations based on a symmetrical metric. Asymmetrical metrics are sometimes called directed metrics, too.
Imagine two descriptors, both at the same distance from a hypothesis, but on different "sides" of it: one is 'smaller' while the other is 'larger' than the hypothesis, component-wise. In this case the dissimilarity values are exactly the same; however, the 'smaller' descriptor can be considered one that does not satisfy the requirements set by the hypothesis, while the other satisfies all these constraints. Clearly, two such descriptors should not be considered equally adequate: the smaller should be rejected while the larger should be accepted in a similarity search involving a hypothesis. To tackle this problem, asymmetrical metrics bias toward the hypothesis in the case of 'larger' descriptors with a predefined ratio α:
For simple technical reasons, an equivalent form is used in ScreenMD, since it is more natural to keep the asymmetry ratio between zero and one (rather than two). The asymmetrical nature of this metric means that the roles of a and h cannot be interchanged. The higher the value of the asymmetry ratio (α), the more the 'smaller' descriptor is penalized.
It is good practice to use asymmetric metrics when a compound library is screened against a hypothetical descriptor.
Asymmetrical and scaled metrics can be combined into a scaled asymmetric metric that exploits the benefits of both scaling with and directing toward a hypothesis descriptor.
All of the above extensions to integer descriptors work for real-valued descriptors, too.
The most widely used geometrical distance function, the Euclidean distance can be used to measure the distance (dissimilarity) between two non-spatial objects, in our case between two molecular descriptors. The formulation is very straightforward:
Note that this distance is a dissimilarity function in the sense that a zero value represents total similarity. However, the Euclidean distance of two molecular descriptors is not upper-bounded: the larger the distance, the higher the dissimilarity between the two descriptors. One might think that this characteristic of the Euclidean metric allows a more accurate measurement of dissimilarity, but in practice this is seldom needed. Instead, the direct comparability of dissimilarity values is important. This is hard to achieve with the Euclidean distance, since dissimilarity values obtained for a large compound library are scattered over a wide range, and one does not necessarily have a priori ideas about a suitable threshold for acceptance/rejection. (In contrast, it is fairly simple to give such a threshold value in the case of the Tanimoto metric discussed above; for instance, 0.2 is a common choice and can be interpreted as "at most 20% dissimilarity is still accepted", or "at least 80% similarity is required".)
With these considerations in mind, a natural requirement is to allow the Euclidean distance to be used as a dissimilarity metric, that is, one that computes dissimilarity ratios between 0 and 1 rather than distance values between 0 and infinity. Such a metric is called the normalized Euclidean dissimilarity metric and can be defined as
Since values of d_EN fall into the interval [0, 1], it is a dissimilarity metric.
The Euclidean distance could be normalized in various ways, but the above form has one important advantage over many others: it makes the absolute distances between the two descriptors' components relative, that is, proportional to the value of those components. To illustrate this idea, imagine four descriptors consisting of three components (e.g. molecular weight, pKa and polar surface area). Let the values of the first component (the molecular weight) be 44, 135, 880, 979, respectively. The absolute distance between the first two is 91, while between the last two it is 99. Although 99 is greater than 91, the latter two should be considered more similar to each other than the first two. According to the above formula, the relative difference between the first two is 91 / (44 + 135) = 0.5, while between the third and the fourth it is 99 / (880 + 979) = 0.05, which indicates a high degree of similarity between these two descriptors.
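The arithmetic above can be checked directly. The component values 44, 135 and 880, 979 are used below; note that 979 for the last component is an assumption chosen so that the stated difference of 99 holds, since the four listed values and the quoted differences in the extracted text do not quite agree:

```python
# Relative (normalized) component difference |a - b| / (a + b), as used in
# the molecular-weight illustration above.

def relative_difference(a, b):
    return abs(a - b) / (a + b)

first = relative_difference(44, 135)    # large relative difference (~0.5)
second = relative_difference(880, 979)  # small relative difference (~0.05)
```

Despite the absolute differences being comparable (91 vs. 99), the relative differences differ by an order of magnitude, which is exactly the behavior the normalization is designed to produce.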
In general, Euclidean is not any better than Tanimoto dissimilarity, just different. Their behavior on a given target set, query set, and molecular descriptor cannot be predicted, only tested. Therefore some software tools have been developed to assist the user in selecting the metric that best suits particular needs. However, just as Tanimoto can be improved by scaling, Euclidean can be improved significantly by weighting. In contrast to scaling, which is applied to the entire descriptor in a uniform way, weighting distinguishes between components of the descriptor: an independent weight factor is associated with each individual component. This allows the importance of each individual feature in the dissimilarity calculation to be increased or decreased:
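A minimal sketch of component-weighted Euclidean distance in the sense described (the exact normalization ScreenMD applies is not shown in the surviving text and may differ):

```python
# Weighted Euclidean distance: sqrt( sum_i w_i * (a_i - b_i)^2 ).
# Each descriptor cell carries its own weight w_i.
from math import sqrt

def weighted_euclidean(a, b, weights):
    return sqrt(sum(w * (x - y) ** 2 for w, x, y in zip(weights, a, b)))

d_uniform = weighted_euclidean([1, 2, 3], [1, 4, 3], [1, 1, 1])   # plain Euclidean
d_weighted = weighted_euclidean([1, 2, 3], [1, 4, 3], [1, 4, 1])  # 2nd cell emphasized
```

With uniform weights the formula reduces to ordinary Euclidean distance; raising the weight of the second cell doubles the distance in this example, illustrating how a single feature's importance can be amplified.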
Rich descriptors, for which the length n of the descriptor is large (e.g. a few hundred), require a large number of weights. In order to achieve the best selectivity over a certain set of compounds, these values need to be adjusted. Due to the large number of weights, and to their high dependency on each other, it is not feasible to adjust these weights manually. To ease the selection and tuning of weights (in a molecular-descriptor-specific manner), software tools are available in the molecular descriptors package. Applying such optimization techniques to set up a suitable weighting schema for a given pharmacological target can significantly improve the quality of hits found in a screening process.
The Euclidean dissimilarity metric is symmetrical which, as explained above, can be a drawback when comparing descriptors to a hypothesis. To tackle this problem an asymmetrical version of the ordinary Euclidean metric can be defined. The idea is the same as in the case of asymmetric Tanimoto, though the formulation is different (since the base metric is different):
Note that the value of α should not be larger than 0.5, since only the case when a_i < h_i has to be penalized.
The distance (dissimilarity) of descriptor sets can be calculated as the weighted Euclidean distance of the corresponding components:
In the formula above, the component-wise dissimilarity function is an arbitrary dissimilarity metric, and the index i indicates that the dissimilarity functions are independent (that is, one can use Tanimoto for the first component while using weighted Euclidean for the second, etc.).
Virtual screening is available through the ScreenMD command. It can be used in two ways:
Passing parameters in the command line:
screenmd [<target input file>] <query input file> [<options>]
Passing parameters in an XML configuration file:
screenmd <configuration file name>
These two modes are not strictly exclusive, they can be mixed:
screenmd <configuration file name> [<target input file>] <query input file> [<options>]
Note that, when specified, the configuration file must be the first argument after the screenmd command in the command line. Similarly, file names are positional: if input is taken from files, the filenames must follow either the command name or the name of the configuration file. Also note that the order of the filenames is fixed: the target file is specified first, followed by the name of the query file.
Prepare the usage of the ScreenMD script or batch file as described in Preparing the Usage of JChem Batch Files and Shell Scripts.
The ScreenMD class can be invoked directly:
java -cp "c:\jchem\lib\jchem.jar;%CLASSPATH%" chemaxon.descriptors.ScreenMD [<target input file>] <query input file> [<options>]
java -cp "/usr/local/jchem/lib/jchem.jar:$CLASSPATH" chemaxon.descriptors.ScreenMD [<target input file>] <query input file> [<options>]
Options and parameters can either be defined in the command line or be specified in an XML configuration file. The command line mode is more suitable for smaller experiments. In contrast to this, configuring ScreenMD from XML is convenient even for much larger virtual screening exercises, in which, for instance, numerous descriptors are combined with various parametrized metrics. Although an example configuration file is available, users are not encouraged to write such configuration files manually. Instead, the use of an interactive XML configuration editor is highly recommended.
General options:
 -h, --help                    this help message
 -x, --expert-help             advanced options for expert users
 -v, --verbose                 verbose
 -s, --saveconf                saves database settings
Input/Output options:
 -a, --table-name <name>       name of the structure table
 -q, --query <where>           where clause of select statements to read targets
 -o, --output [TABLE|SDF] <filepath>
                               output file type and name with full path;
                               flag can be given more than once
 -g, --generate-id [<first>]   generate unique structure identifiers;
                               an optional value for the first ID can be given
 -e, --precision <prec>        number of decimal places after the decimal point
Database options:
 -d, --driver <JDBC>           JDBC driver
 -u, --dburl <url>             URL of database
 -l, --login <login>           login name
 -p, --password <pwd>          password
Descriptor options:
 -k, --descriptor <type> <descriptor options>
                               create and use descriptors of the given type
 -k, --descriptor <name>       use descriptors created and stored previously
 -c, --config <configfile>     path and name of the XML configuration file
 -t, --use-tag [<name>]        use existing descriptor data
 -M, --metric <name>           use the metric <name> as specified in the
                               config file; more than one metric can be specified
Similarity options:
 -L, --threshold               dissimilarity threshold
 -Q, --compare-queries         compare against query descriptor sets
 -H, --compare-hypothesis [<name> [C]]
                               generate hypothesis <name> and compare against it;
                               valid names are Minimum, Average, Median
                               (default: Minimum); 'C' indicates consensus
                               fingerprint; this flag may occur more than once
                               with different hypothesis types
Advanced options for expert users:
SDfile options:
 -I, --id-tag <name>           name of the tag storing unique molecule identifiers
 -N, --mol-name <name>         name of the tag storing the compound name
Database options:
 -O, --proptable <table>       name of the property table
2D pharmacophore fingerprint options:
 -P, --PMAP-tag [<name>]       use existing PMAP data
Similarity options:
 -C, --component-wise          apply threshold for individual descriptors
 -r, --descriptors-and         thresholds for all descriptors (default is any)
 -m, --metrics-and             thresholds for all metrics (default is any)
 -Z, --zero-threshold          percentage threshold for zero limit in median hypothesis
Merging short forms of command-line options is not supported; that is, instead of -rm, the form -r -m should be used.
Warning: To use ScreenMD a valid license key is needed. When no valid license key is found in the home directory, ScreenMD runs in demo mode, where the number of molecular descriptors to be processed is limited to 2000 (thus if several types of molecular descriptors are generated, the number of structures may be limited to a few hundred).
For more information on setting connection parameters:
JDBC driver's class name (--driver)
JDBC URL of the database (--dburl)
Login name (--login)
Password (--password)
please visit the JChem Administration Guide.
The target library to be screened can be retrieved either from a database or from a file; in both cases either structures or molecular descriptor sets can be processed. In contrast, queries are always read from a molecular structure file. Targets can be either molecular structures or molecular descriptors (generated earlier with GenerateMD). Most molecular file formats are accepted. The type and the source of the target set are determined by the command-line flags according to the rules below:
if -a is specified, targets come from a database:
if a descriptor type name is given after the -k flag, then structures are retrieved and descriptors are generated on-the-fly (in this case the -c flag is mandatory)
otherwise, the name of a molecular descriptor generated and stored in the database earlier is given after -k (in this case the -c flag is not allowed)
otherwise, targets come from a file:
if a descriptor type name is given after the -k flag, then structures are read from a molecular structure file and descriptors are generated on-the-fly (in this case the -c flag is mandatory)
otherwise, the name of a molecular descriptor file is given after -k (in this case the -c flag is not allowed)
If the target input file is an SDfile, it may already contain descriptors of molecules. This information can either be used or ignored in screening. The default behavior of ScreenMD is to ignore such information. This can be overridden with the --use-tag flag, in which case descriptors are not generated from the original molecular structures, but taken from the input file.
The default SDfile tags for storing molecular descriptors and related data are:
CF, chemical fingerprint,
PMAP, pharmacophore point type map,
PF, 2D pharmacophore fingerprint.
Tag names other than the defaults can be specified with the --use-tag and --PMAP-tag options.
SDfiles containing descriptors can be generated with GenerateMD. Existing descriptors are worth reusing, as doing so can dramatically reduce running times (since descriptor generation is more time-consuming than the comparison of descriptors). Though SDfiles are capable of storing such data, the best practice is to store descriptors in database tables, as this is more efficient, easier to share among users, and makes it easier to maintain their consistency.
When screening a structure table in a database, all structures (or their descriptors, depending on the actual command line) are processed. To restrict the scope of screening, the WHERE clause of an SQL SELECT statement can be specified; in this case only the logical expression should be written.
Screening a structure file or structure table is typically much slower than using descriptors directly. Yet screening structures can be important when descriptor parameters are being tuned for optimal settings. For instance, the minimal and maximal distances, pharmacophore point type definitions, the fuzzy smoothing factor, and many other parameters have a strong influence on the modeling power of the descriptor. It is good practice to find a few promising settings in several coarse-grained screening experiments using on-the-fly descriptor generation, and to store only the best descriptors in the database.
ScreenMD writes its results into a text file ( --output option). If no output is specified, results are written to standard output. By default, results of the comparisons are printed in a table. Each row corresponds to one target structure (in their original order as read from the input source). The first column contains either the optional identifier of the target molecule as read from the input ( --id-tag ) or a positive integer value generated by the program. The number of further columns depends on
the number of query structures ( nq ),
the use of a hypothesis,
the number of dissimilarity metrics ( nm ).
If a hypothesis is constructed, the next nm columns contain the dissimilarity coefficients obtained from the comparison of the target structure to the hypothesis using the selected metrics. If the target structure was compared against individual queries too, a further nq·nm columns follow, grouped by metric (that is, each group contains nm dissimilarity coefficients): d(q1,m1), d(q1,m2), ..., d(q1,mnm), d(q2,m1), ..., d(qnq,mnm), where d(qi,mj) is the dissimilarity coefficient obtained from the comparison with query qi using metric mj.
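The column layout described above can be sketched as follows (a hypothetical helper, assuming the hypothesis columns precede the per-query groups, as the text implies):

```python
# Generate output column headers in the order described: an id column,
# optional hypothesis columns (one per metric), then per-query groups of
# one column per metric.

def output_columns(queries, metrics, hypothesis=None):
    cols = ["id"]
    if hypothesis is not None:
        cols += [f"{hypothesis}_{m}" for m in metrics]
    for q in queries:
        cols += [f"{q}_{m}" for m in metrics]
    return cols

cols = output_columns(["q1", "q2"], ["Tanimoto", "Euclidean"], hypothesis="Minimum")
```

With nq queries and nm metrics this yields 1 + nm + nq·nm columns when a hypothesis is present, matching the count given in the text.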
An alternative way to produce the output of a screening procedure is to write the hit set (the molecular structures accepted) along with the dissimilarity values into an SDfile. This output format can be specified by -o sdf foo.sdf. Note that the output modes are not exclusive: both table and SDfile output can be produced simultaneously (-o sdf hits.sdf -o table hits.table).
Screening uses a wide variety of dissimilarity metrics, which are specified in the configuration XML file. By default, all available metrics are used. To select one or more of them, the -M flag can be used.
Since several metrics and query structures can be used simultaneously, the results (dissimilarity coefficients) of individual comparisons (i.e. one target against all queries and/or hypotheses) can be combined in filtering. The default behavior is the less restrictive one: if any of the calculated coefficients is under the corresponding threshold value, the structure is accepted. However, if the flag -m (--metrics-and) is specified, the dissimilarity coefficients obtained by each and every metric must be under the threshold for at least one query structure or hypothesis. Similarly, if -r (--descriptors-and) is set, the target is accepted only if all components of the descriptor set are accepted. These two flags are independent and can be combined.
The default behavior of ScreenMD is to compare all individual structures in the query set against all structures in the target set. However, when comparing against a hypothesis (the -H flag is specified), individual queries do not take part in the comparison process; only the hypothesis is compared against each target descriptor. This behavior can be overridden by the -Q flag: when -H and -Q are specified together, both the hypothesis and the individual queries are compared to the targets.
The hypothesis flag (-H) may take one or two optional parameters. The first is the name of the hypothesis, which by default is Minimum; other available options are Average and Median. The second optional argument is the character C, which refers to consensus. When this is specified for a certain hypothesis, that hypothesis is used as the consensus descriptor for scaled metrics.
The advanced flag -Z sets the zero threshold for the median hypothesis. This threshold is a percentage value. For each cell of the molecular descriptor, the hypothesis cell is set to zero if the percentage of zeros in the corresponding cells of the hypothesis component descriptors (the descriptors from which the hypothesis is calculated) is higher than the given threshold. If the percentage of zeros is lower, the median of the non-zero values is taken.
By default, one dissimilarity value is obtained for each pair of compounds compared, regardless of the number of components in the descriptor sets (according to the formula defined above). However, it is also possible to get dissimilarity values for all components of the descriptor set by specifying the -C, --component-wise flag. Note that in the case of one descriptor (one component in the descriptor set), component-wise dissimilarity is what gets calculated in any case.
Besides the XML configuration file that can optionally be used to specify parameter settings, ScreenMD takes mandatory configuration files, too. These files correspond to the molecular descriptors used for screening, so there should be one file per descriptor.
Different descriptor types require different parametrization. The actual parameter settings are defined in external text (XML) files.
The pharmacophore configuration file has three main sections. One of these, <ScreeningConfiguration>, is directly related to ScreenMD. This section defines the metrics in <ParametrizedMetric> elements. Normally, the user of ScreenMD does not need to edit these definitions, since they are either provided as 'factory settings' or they are generated and written into the configuration file by other utilities. A brief explanation of all required and optional values (XML attributes) is given below.
Name is always required; it specifies the user-defined name of the metric. This can be an arbitrary name, which is printed in the outputs (hit sets), and it is also the name used to refer to a specific metric after the -M flag.
ActiveFamily distinguishes between different versions of the same base metric (e.g. Euclidean) applied to different therapeutic areas. Such a distinction is needed because different areas need different settings for the metric's parameters in order to produce optimal hits. The name of the active family helps the user find the right metric for particular needs.
Metric is the name of the base metric (dissimilarity metric). At present two base metrics are available: Tanimoto and Euclidean.
Normalized indicates whether or not the metric is normalized.
Threshold sets the limit for dissimilarity ratios to be accepted.
AsymmetryFactor is used in asymmetrical metrics.
ScaleFactor is used in scaled metrics. In the present implementation, only the Tanimoto metric can be scaled.
Weights specify individual weight values for each fingerprint cell used by the metric. However, if the user wants to tweak the weights of the Euclidean metric, each weight value can be written as an individual element. In this case a weight value does not directly correspond to a fingerprint cell, but to a pharmacophore point type (e.g. donor, acceptor) or to a topological distance:
where f1(i) and f2(i) denote the two pharmacophore point types associated with the i-th cell, and d(i) is the corresponding topological distance.
There are far fewer weights in this case than in cell-wise weighting of the fingerprint: the number of pharmacophore point types plus the number of topological distances considered.
Chemical fingerprints take three parameters that can be defined in the <Parameters> section: Length, the number of bits in the fingerprint (the default value is 512; values smaller than 128 result in poor descriptors, and the number of bits should be a multiple of 32); BondCount, which determines the longest path considered; and BitCount, which specifies the number of bits to be set to 1 in the fingerprint for each feature identified.
The other two sections, <StandardizerConfiguration> and <ScreeningConfiguration> are the same as in the case of any other molecular descriptor.
Note that molecular-descriptor-specific configuration/parameter definition files are given explicitly only when both the target and the query set are taken from molecular structure files. In all other cases the configuration settings are stored either in a descriptor file or in the database when the descriptors are generated with GenerateMD.
The following example reads the target and query molecular structures from files (targets.sdf and queries.smiles, respectively) and writes results to the standard output:
screenmd targets.sdf queries.smiles -g -k PF -c pharma-frag.xml
Target structures are defined in the SDfile named targets.sdf, and query molecules are read from queries.smiles. Pharmacophore fingerprint parameters are taken from the configuration file (specified after the -c flag). Since no metrics are selected, all available ones are used. The target structures are compared against each individual query structure; no pharmacophore hypothesis is constructed. The output file contains a table of the dissimilarity ratios, like the one shown below:
id q1_PF_Euclidean q2_PF_Euclidean q3_PF_Euclidean q4_PF_Euclidean
1 80.95 80.90 76.75 80.84
2 92.41 92.39 88.45 92.34
3 90.27 90.25 86.51 90.20
4 34.45 34.49 31.79 34.48
5 54.19 54.22 50.36 54.21
6 57.79 57.81 54.22 57.81
7 37.85 37.89 35.51 37.82
8 41.56 41.60 39.24 41.49
End of table
The above is the default format, where the precision of the calculations and of the displayed values is 2 decimal places. The first number in each row is the index of the target molecule, generated by the program (-g flag).
This guide is no longer being updated. For current information and instructions, see the new Amazon S3 User Guide.
Setting S3 Object Ownership to bucket owner preferred in the console
S3 Object Ownership enables you to take ownership of new objects that other AWS accounts upload to your bucket with the bucket-owner-full-control canned access control list (ACL). This section describes how to set Object Ownership using the AWS Management Console.
Setting Object Ownership to bucket owner preferred on an S3 bucket
Sign in to the AWS Management Console and open the Amazon S3 console.
In the Buckets list, choose the name of the bucket that you want to enable S3 Object Ownership for.
Choose the Permissions tab.
Choose Edit under Object Ownership.
Choose Bucket owner preferred, and then choose Save.
How do I ensure that I take ownership of new objects?
With the preceding steps, Object Ownership enables you to take ownership of any new objects that are written by other accounts with the bucket-owner-full-control canned ACL. For information about enforcing Object Ownership, see How do I ensure that I take ownership of new objects? in the Amazon Simple Storage Service Developer Guide.
WP-7328: NetApp Conversational AI Using NVIDIA Jarvis
Contributors
Download PDF of this page
Rick Huang, Sung-Han Lin, NetApp
Davide Onofrio, NVIDIA
The NVIDIA DGX family of systems is made up of the world’s first integrated artificial intelligence (AI)-based systems that are purpose-built for enterprise AI. NetApp AFF storage systems deliver extreme performance and industry-leading hybrid cloud data-management capabilities. NetApp and NVIDIA have partnered to create the NetApp ONTAP AI reference architecture, a turnkey solution for AI and machine learning (ML) workloads that provides enterprise-class performance, reliability, and support.
This white paper gives directional guidance to customers building conversational AI systems in support of different use cases in various industry verticals. It includes information about the deployment of the system using NVIDIA Jarvis. The tests were performed using an NVIDIA DGX Station and a NetApp AFF A220 storage system.
The target audience for the solution includes the following groups:
Enterprise architects who design solutions for the development of AI models and software for conversational AI use cases such as a virtual retail assistant
Data scientists looking for efficient ways to achieve language modeling development goals
Data engineers in charge of maintaining and processing text data such as customer questions and dialogue transcripts
Executive and IT decision makers and business leaders interested in transforming the conversational AI experience and achieving the fastest time to market from AI initiatives | https://docs.netapp.com/us-en/netapp-solutions/ai/cainvidia_executive_summary.html | 2021-01-15T23:55:23 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.netapp.com |
We support
- Authorize.net
- PayPal
- Stripe
- Blackbaud Merchant Service(BBMS)
- Acceptiva, BluePay, PayU, CCAvenue, EaseBuzz (Works only in India)
payment gateways to be set up as your payment accounts on the Almabase platform.
You can use these payment accounts while setting up events, donation campaigns, or to collect recurring membership fees via the platform.
Almabase is just an intermediary, which enables collecting and processing money using these payment gateways. We do not take any commission on the transactions.
The standard payment gateway charges levied by PayPal & Authorize.net etc. will apply when you use them. We recommend researching the transaction charges of different providers to see what suits you. You can take a look at these to begin with:
PayPal Transaction Charges
Authorize.net Transaction Charges
Almabase does not store or save any credit card information, and all the transactions processed through the payment gateway are encrypted.
Write to us at [email protected] in case of any queries. | https://docs.almabase.com/en/articles/1235283-which-payment-account-providers-do-you-support | 2021-01-15T23:42:30 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.almabase.com |
Create a live event in Microsoft Stream
The following short video explains how to create a live event in Microsoft Stream.
Important
China: Currently, users located in China will not be able to set up or attend Microsoft Stream, Microsoft Teams, or Yammer live events or, view videos on-demand without their IT administrator's help.
Before you start, check with your admin to see whether they need to set up a VPN to connect your corporate network so that these apps can work seamlessly in your organization. (2) hours for certain changes to propagate across Microsoft Stream, Microsoft Teams, and Microsoft Yammer. Allowing 24 hours or more can provide time for testing and making adjustments if needed.
Schedule the live event
In Microsoft Stream online, go to Create > Live event.
Fill in the details pane with a name, description, and event time. You can also upload a thumbnail as a poster image for users to see.
Note
As you fill in information, an automatic slate will generate to let your users know information about the event before it starts.
Select the permissions pane and set who you want access to the video and which groups for it to be displayed in for increased discoverability.
Optionally, you can set additional options in the Options pane. Most options will take effect when the event is complete after the transition from live to on demand.
Select Save. By selecting Publish now, those you have given access will be able to see the event page in the attendee view, but will be shown the automatically generated slate before you go live.
Note
You must publish in order to share the URL. If you didn't publish, the system will automatically publish your event when you are ready to go live and manually start your event. When the event is published, users can find the event throughout the Stream portal in browse, search and on group pages.
Stream your live event
When you save your live event, you will get the RTMP server ingest URL located in the encoder setup tab. Select an encoder from the drop down list, or choose to configure manually. Check out the list of encoders for easy setup instructions.
To set up your encoder, select Start setup on the producer controls. It may take some time to start the setup process.
When the setup is ready, copy the server ingest URL into your encoder to start sending the live encoder feed to Microsoft Stream. Learn more about setting up your encoder
Note
It is important to set up your encoder with the correct configuration, and specify both audio and video for playback. Check out the configuration requirements to make sure you set up the encoder correctly.
When you start pushing from the encoder to the server ingest point, you should see the producer preview update.
Note
Audience members won't see this until you start the live event - they will see the automatically generated slate.
After you are satisfied with your setup and can see the preview, select Start event. If you didn't previously publish your event, Stream will do so automatically when you start the event.
After the event starts, audience members can see the event.
Note
You can also choose to disconnect at this point, which will take you back to step #2 if your intent was to test before the event.
When you are finished with your event, select End event on the producer controls. This ends the event and makes the content immediately available for video-on-demand.
Important
Make sure to end the event in Stream before stopping your encoder. If you do this in reverse order, audience members will see an error. | https://docs.microsoft.com/en-us/stream/live-create-event | 2021-01-16T00:41:39 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.microsoft.com |
.213 Note
Note:
This section does not apply to land use conversions, such as the change of forestland to agricultural use. Land use conversions are not considered to be forest management.
NR 1.213(3)(c)
(c)
The cooperating forester shall use accepted methods that recognize the landowner's personal land management objectives.
NR 1.213(3)(d)
(d)
The cooperating forester shall attend a minimum of 10 hours of department–approved training annually.
NR 1.213(3)(e)
(e)
The cooperating forester agrees to submit to the department reports of timber sale stumpage volumes and values for sales he or she administers.
NR 1.213(3)(f)
(f)
Any other provisions deemed reasonable by the department to further the practice of sound forestry in the state.
NR 1.213 History
History:
Cr.
Register, July, 1989, No. 403
, eff. 8-1-89; am. (1) and (3) (intro.) to (e),
Register, February, 1996, No. 482
, eff. 3-1-96;
CR 01-030
: am. (3) (b) and (d),
Register November 2001 No. 551
, eff. 12-1-01.
NR 1.22
NR 1.22
Establishment of coniferous plantations.
The department shall encourage the establishment and intensive management of coniferous plantations planted with suitable species and spacing. The landowner shall be encouraged to maintain access ways which will aid in the management, diversified use, prevention, detection and suppression of destructive forces which might endanger such plantations
NR 1.22 History
History:
Cr.
Register, April, 1975, No. 232
, eff. 5-1-75.
NR 1.23
NR 1.23
Fire control cooperation.
The department shall assist local governments in fire emergencies whenever possible, utilizing personnel and equipment from the department.
NR 1.23 History
History:
Cr.
Register, April, 1975, No. 232
, eff. 5-1-75.
NR 1.24
NR 1.24
Management of state and county forests.
NR 1.24(1)
(1)
The natural resources board's objective for the management of state forests and other department properties where timber cutting is carried out and county forests is to grow forest crops by using silvicultural methods that will perpetuate the forest and maintain diversified plant and animal communities, protect soil, watersheds, streams, lakes, shorelines and wetlands, in a true multiple-use concept. In the management of the forests, it shall be the goal of the board to insure stability in incomes and jobs for wood producers in the communities in which the state and county forest lands are located, and to increase employment opportunities for wood producers in future years. Whenever possible, large sale contracts shall be for 4 years which will assist wood producers in dealing with uneven demand and prices for their products.
NR 1.24(2)
(2)
To achieve this objective, sale areas or cutting blocks and timber harvest operations will be planned through an intra-departmental inter-disciplinary review process when 10-year plans are developed in cooperation with the affected county to optimize management practices; to recognize the long-term values of preserving the integrity of the soil; to assure the maintenance of water quality; and to achieve multiple objectives of forest land management. Although multiple use shall be the guiding principle on state and county forests, the board recognizes that optimization of each use will not be possible on every acre. Desirable practices include:
NR 1.24(2)(a)
(a)
Fully utilizing available topographic maps, aerial photographs and soil surveys and combining these with local knowledge or field reconnaissance to ascertain on-the-ground conditions.
NR 1.24(2)(b)
(b)
Wherever practical, use perennial streams as harvest-cutting boundaries with provision for a streamside management zone to protect stream bank integrity and water quality, and with skidding planned away from these streams and the adjacent streamside management zones.
NR 1.24(2)(c)
(c)
An appropriate silvicultural system and cutting design should be planned to optimize economic skidding distances, to minimize road densities and unnecessary road construction and for efficient establishment and management of subsequent forest crops.
NR 1.24(2)(d)
(d)
Cutting boundaries should utilize topographic terrain, ridges, roads and forest type changes where ownership patterns permit and should provide a harvest area size consistent with economical skidding, available logging equipment, silvicultural requirements and other management objectives.
NR 1.24(2)(e)
(e)
Plan cutting layouts to avoid leaving narrow unmanageable strips of timber susceptible to storm damage and windthrow.
NR 1.24(3)
(3)
Department properties and county forests shall be zoned and managed primarily for aesthetic values in selected areas as identified in the master plan to recognize the importance of scenic values to the economy of the state. When clearcutting can be used to develop specialized habitat conditions within the forest, i.e., savanna type openings for sharp tail grouse management or is the appropriate silvicultural system, due consideration shall be given to the attainment of biological diversity of the future forest, the development of edge for wildlife, a variety of age classes in future growth and aesthetic quality of the area. Clearcutting is a silvicultural system usually applicable to intolerant species and is defined for purposes of this policy as a timber removal practice that results in a residual stand of less than 30 feet of basal area per acre upon completion of a timber sale. Furthermore, as the existing acreage of overmature even-aged stands change, the long-range goal of the board shall be to increase the intensities of professional management on the state and county forests.
NR 1.24(4)
(4)
Special management practices shall apply to eagle and osprey nesting sites, deer yards, to lake and stream shoreline zones, to sensitive soil types, to springs and important watersheds, to selected aesthetically managed roadsides and to land use zones identified in the master plan as managed more restrictive.
NR 1.24(5)
(5)
Block type plantings of a single species that create a monotype culture within an area shall be discouraged. Plantations shall be established to achieve a more aesthetically pleasing appearance and to provide for added diversity of type. Planting will be accomplished by varying the direction of the rows or contouring to create a more natural appearance, planting on the contour, using shallow furrows or eliminating furrows where practical. In planting adjacent to a major roadway, the first rows should be parallel to the roadway to meet aesthetic concern and provide game cover. Existing and new plantations will be thinned at the earliest opportunity and periodically thereafter to develop an understory for wildlife habitat and a more natural environment.
NR 1.24 History
History:
Cr.
Register, December, 1977, No. 264
, eff. 1-1-78.
NR 1.25
NR 1.25
Generally accepted forestry management practices.
NR 1.25(1)
(1)
Purpose.
Section
823.075 (1) (d)
, Stats., requires the department to define generally accepted forestry management practices.
NR 1.25(2)
(2)
Definitions.
In this section:
NR 1.25(2)(a)
(a)
“Department" means the Wisconsin department of natural resources.
NR 1.25(2)(b)
(b)
“Generally accepted forestry management practices" means forestry management practices that promote sound management of a forest. “Generally accepted forestry management practices" include those practices contained in the most recent version of the department publication known as Wisconsin Forest Management Guidelines and identified as PUB FR-226.
NR 1.25(2)(c)
(c)
“Sound management of a forest" means sustainably managing a forest with the application of ecological, physical, quantitative, managerial, economic, and social principles to the regeneration, management, utilization, protection and conservation of forest ecosystems to meet specified wildlife habitat, watershed, aesthetics, cultural and biological goals and objectives while maintaining the productivity of the forest.
NR 1.25(3)
(3)
Department duties.
NR 1.25(3)(a)
(a)
The department-developed Wisconsin Forest Management Guidelines, PUB FR-226, shall contain forestry management practices that are recommended and approved by the department to promote sound management of a forest.
NR 1.25 Note
Note:
Copies of Wisconsin Forest Management Guidelines, PUB FR-226, are available for inspection at the offices of the Department of Natural Resources and the Legislative Reference Bureau. Copies may be obtained from the Wisconsin Department of Natural Resources, Division of Forestry, 101 S. Webster Street, P.O. Box 7921, Madison, WI, 53707-7921. Property owners may seek advice on implementation of generally accepted forestry management practices from department foresters, county foresters and cooperating foresters.
NR 1.25(3)(b)
(b)
The department shall periodically update Wisconsin Forest Management Guidelines so that a person may readily determine what forestry management practices are recommended and approved by the department. The department shall update Wisconsin Forest Management Guidelines a minimum of every 5 years.
NR 1.25(3)(c)
(c)
The department shall use a process that incorporates public participation and public comments when updating Wisconsin Forest Management Guidelines.
NR 1.25 History
History:
CR 06-097
: cr.
Register April 2007 No. 616
, eff. 5-1-07.
NR 1.26
NR 1.26
Contracting with cooperating foresters for timber sale establishment.
NR 1.26(1)
(1)
Purpose.
The department may contract with private cooperating foresters to assist the state in the harvesting and sale of timber from state forest lands to meet the annual allowable timber harvest established under s.
28.025
, Stats.
NR 1.26(2)
(2)
Definition.
“Cooperating forester" has the meaning given in s.
NR 1.21 (2) (b)
.
NR 1.26(3)
(3)
Contracted tasks.
Tasks included in cooperating forester contracts for state land timber harvests may include updating of forest reconnaissance, marking of trees and harvest boundaries, estimating volume, preparing maps, recommending timber sale contract terms or operational specifications, providing data on cutting notices and reports, scaling cut products, and inspecting active harvests. The department shall determine which of these services are appropriate to contract for on individual timber sales.
NR 1.26(4)
(4)
Department tasks.
The department shall select areas to harvest, determine silvicultural harvest systems to be applied, and define any additional timber sale procedures or precautions necessary to achieve objectives in approved master plans or other department policies. The department shall review and approve cutting notices and reports, prepare contracts, advertise for timber sale bids, award sales, receive stumpage payments and performance bonds, and administer timber sale contracts. The department shall monitor the performance of cooperating foresters contracting on state forest timber harvests for quality of service and conformance to department standards.
NR 1.26(5)
(5)
Bids for services and payments to cooperating foresters.
Cooperating foresters shall be compensated at the department's choice of a rate per hour, acre or project established by bids. When a need for timber sale assistance is identified, the department shall issue a request for bids to cooperating foresters serving the area. Bids shall include labor, travel, equipment and any supplies such as marking paint not identified as being provided by the department that a contractor would need to do the work. Timber sale assistance contract awards shall be determined on price alone unless additional evaluation criteria such as specialized training or experience are included in the request for bids.
NR 1.26(6)
(6)
Method to allot timber sale revenue.
As provided in s.
28.05
, Stats., payments to cooperating foresters for timber harvesting and selling assistance on state-owned land shall be paid from an allocation of timber sale proceeds. The department of natural resources shall make periodic requests to the department of administration for allocations of funding to the cooperating foresters appropriation, s.
20.370 (2) (cy)
, Stats. The size of the requested allocation shall be based on outstanding purchase requisitions for the contracted timber harvest assistance. The appropriation shall be split-funded with the proportionate splits coming from the administrative function accounts where the timber sale revenues are deposited.
NR 1.26 History
History:
CR 07-011
: cr.
Register October 2007 No. 622
, eff. 11-1-07; correction in (6) made under s.
13.92 (4) (b) 7.
, Stats.,
Register November 2018 No. 755
.
NR 1.27
NR 1.27
Contracting with cooperating foresters and private contractors for regeneration services.
NR 1.27(1)
(1)
Purpose.
The department may contract with private cooperating foresters and private contractors to assist the state in the regeneration of state forest lands to meet the annual allowable timber harvest established under s.
28.025
, Stats.
NR 1.27(2)
(2)
Definition.
“Cooperating forester" has the meaning given in s.
NR 1.21 (2) (b)
.
NR 1.27(3)
(3)
Contracted tasks.
Tasks included in contracts with cooperating foresters and private contractors for state lands regeneration services may include, site preparation, invasive species control, and tree planting on harvested lands. The department shall determine which of these services are appropriate to contract for on individual timber sales.
NR 1.27(4)
(4)
Department tasks.
The department shall select areas to regenerate, determine regeneration systems to be applied, and define any additional procedures or precautions necessary to achieve objectives in approved master plans or other department policies. The department shall monitor the performance of cooperating foresters and private contractors contracting on state forest lands for quality of service and conformance to department standards.
NR 1.27(5)
(5)
Bids for services and payments to cooperating foresters and private contractors.
Cooperating foresters and private contractors shall be compensated at the department's choice of a rate per hour, acre or project established by bids for individual projects. When a need for regeneration project assistance is identified, the department shall issue a project-specific request for bids to cooperating foresters and private contractors that are experienced in the desired type of work. The department may establish pre-qualification lists of cooperating foresters and private contractors serving an area. Bids may include labor, travel, equipment and any supplies not identified as being provided by the department that a private contractor would need to do the work. As provided in s.
28.05 (3) (am)
, Stats., payments to cooperating foresters and private contractors for regeneration assistance on state-owned lands shall be paid from an appropriation of timber sale proceeds.
NR 1.27 History
History:
CR 13-023
: cr.
Register December 2013 No. 696
, eff. 1-1-14.
NR 1.29
NR 1.29
Ice Age and North Country trails.
NR 1.29(1)
(1)
Footpaths.
The Ice Age Trail and North Country Trail shall be managed primarily as footpaths for pedestrian use: walking, hiking, backpacking, snowshoeing, and ungroomed cross-country skiing.
NR 1.29(2)
(2)
Purpose.
NR 1.29(2)(a)
(a)
The purpose of the Ice Age Trail is to provide premier hiking and backpacking experiences and to preserve and interpret Wisconsin's glacial landscape and other natural and cultural resources in areas through which the trail passes.
NR 1.29(2)(b)
(b)
The purpose of the North Country Trail is to provide premier hiking and backpacking experiences as it meanders through a variety of northern landscapes, linking scenic, natural, historic, and cultural areas in seven states from New York to North Dakota.
NR 1.29(3)
(3)
Definitions.
In this section:
NR 1.29(3)(a)
(a)
“Dispersed camping area" has the meaning given in s.
NR 45.03 (9c)
.
NR 1.29(3)(b)
(b)
“Ice Age Trail" has the meaning given in s.
23.17 (2)
, Stats. When the Ice Age Trail is within a property other than a State Ice Age Trail Area, the Ice Age Trail for management purposes shall be the treadway, which is the trail tread and the land 25 feet adjacent to both sides of the trail tread.
NR 1.29(3)(c)
(c)
“Master plan" has the meaning given in s.
NR 44.03 (8)
.
NR 1.29(3)(d)
(d)
“State Ice Age Trail Areas" mean lands purchased by the department for the Ice Age Trail under the authority of s.
23.09 (2) (d) 10.
, Stats., except when purchased as part of another department project.
NR.
NR 1.29(7)(d)2.
2.
`Vegetation Management.'
Native community types existing at the time of acquisition shall be retained or enhanced.
NR 1.29(7)(d)2.a.
a.
Vegetative management shall focus on enhancing the scenic and natural values along the Ice Age Trail. Cropped lands may be planted with a permanent grass cover. Tree plantations may be thinned to create a more natural appearing condition.
NR 1.29(7)(d)2.b.
b.
Invasive species may be removed or controlled.
NR 1.29(7)(d)2.c.
c.
Any proposed forest management requires consultation with the managing bureau of the property to ensure that scenic values along the Ice Age Trail are being preserved or enhanced.
NR 1.29 History
History:
CR 04-092
: cr.
Register April 2005 No. 592
, eff. 5-1-05;
CR 07-026
: am. (1)
Register December 2007 No. 624
, eff. 1-1-08.
CR 10-118
: r. and recr.,
Register May 2011 No. 665
, eff. 6-1-11; renum. (6) (a) and (b) to be (6) and (7) under s. 13.92 (4) (b) 1., Stats., correction of (6) and (7) (titles), as renumbered, under 13.92 (4) (b) 2., Stats., corrections in (7) (b) 1. to 3. under s. 13.92 (1) (b) 7., Stats.,
Register May 2011 No. 665
, eff. 6-1-11;
CR 13-108
: am. (7) (b) 5.
Register August 2014 No. 704
, eff. 9-1-14; correction in (7) (b) 5. made under s. 35.17, Stats.,
Register August 2014 No. 704
, eff. 9-1-14; correction in (7) (b) 5. made under s. 13.92 (4) (b) 7., Stats.,
Register July 2017 No. 739
.
Down
Down
/code/admin_code/nr/001/1
true
administrativecode
/code/admin_code/nr/001/1/25/3/a
Department of Natural Resources (NR)
Chs. NR 1-99; Fish, Game and Enforcement, Forestry and Recreation
administrativecode/NR 1.25(3)(a)
administrativecode/NR 1.25(3)(a). | https://docs-preview.legis.wisconsin.gov/code/admin_code/nr/001/1/25/3/a | 2021-01-15T23:06:06 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs-preview.legis.wisconsin.gov |
Many.
Name and description. Give the trigger a name and description (optional) of your choice.
Custom data is used when exporting data to external systems. It allows you to, for example, attach some account identifier to the folder that can then be used by the data exporter.
Team membership. Teams are shared across Adnuntius Data and Adnuntius Advertising, and you will find full documentation on how to create users, teams and roles under the Adnuntius Advertising section. A team will determine which system users will have access to your folder. Once a user has access to a folder they can see the folder ID and then start sending data to that folder. | https://docs.adnuntius.com/adnuntius-data/user-interface-guide/segmentation/folders | 2021-01-16T00:43:28 | CC-MAIN-2021-04 | 1610703497681.4 | [] | docs.adnuntius.com |
:
Mylyn, a task-focused UI for Eclipse, along with task connectors for Bugzilla and Trac:
Other Eclipse projects available in Fedora include:
Subclipse, for integrating Subversion version control:
PyDev, for developing in Python:
Fedora. | http://docs.fedoraproject.org/release-notes/f9preview/ta/sn-Devel.html | 2008-05-16T15:00:06 | crawl-001 | crawl-001-009 | [] | docs.fedoraproject.org |
Timemachine X
Contents
Timemachine X is a plugin for Bludit which allows you to go back to some particular state of your system. For example, if you unintentionally deleted a page and you want to recover it, or if you edit some of the the settings and you want to recover the previous settings.
Timemchine X is included in Bludit PRO, but you can buy it separately from here:
Enable Timemachine
- Go to the admin area.
- Go to Settings > Plugins.
- Search for the plugin Timemachine X and click on the Activate button.
- Now each event on your website is stored.
How to recover a previous state
- Go to the admin area.
- Go to Settings > Plugins.
- Search for the plugin Timemachine X, and click on the Settings button.
- You can see a list of events ordered by date.
- Search for the event and click on the Go back to this point button.
- Now the system is restored to that particular state.
Video
In the following video you can see that the user creates a new page, then the page is deleted by "mistake". Then the page is recovered via Timemachine X. | https://docs.bludit.com/en/bludit-pro/timemachine-x | 2020-01-17T19:14:59 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.bludit.com |
Mouse Support
On of the most important usage of your mouse device in combination with RadTreeView is expanding and collapsing items. To expand\collapse an item you just need to click on the expander icon. For more information about the visual structure of the treeview read the topic Visual Structure.
To learn more about the RadTreeView's mouse support take a look at the Drag and Drop topic. There you will find step-by-step tutorials showing you how to perform some of the most common tasks using just your mouse. | https://docs.telerik.com/devtools/silverlight/controls/radtreeview/features/mouse-support | 2020-01-17T19:02:11 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.telerik.com |
changes.mady.by.user Nilmini Perera
Saved on May 17, 2019
Saved on Nov 29, 2019
...
Note the following when adding these configurations:
dbConfig
wso2registry
<dbConfig name="sharedregistry">
<EI_HOME>/conf/datasources/master-datasources.xml
The registry mount path denotes the type of registry. For example, ”/_system/config” refers to configuration Registry, and "/_system/governance" refers to the governance registry.
/_system/config
/_system/governance
The <dbconfig> entry enables you to identify the datasource you configured in the <EI_HOME>/conf/datasources/master-datasources.xml file. The unique name "sharedregistry" refers to that datasource entry.
haredregistry
<remoteInstance>
Also, specify the cache ID in the <remoteInstance> section. This enables caching to function properly in the clustered environment.
Cache ID is the same as the JDBC connection URL of the registry database. This value is the Cache ID of the remote instance. It should be in the format of $database_username@$database_url, where $database_username is the username of the remote instance database and $database_url is the remote instance database URL. This cacheID denotes the enabled cache. In this case, the database it should connect to is REGISTRY_DB, which is the database shared across all the nodes. You can find that in the mounting configurations of the same datasource that is being used.
$database_username@$database_url
$database_username
$database_url
REGISTRY_DB
Define a unique name in the <id> tag instance ID for each remote instance . This is then referred to from mount configurations. In the above (using the <id> tag). Be sure to refer to the same instance ID from the corresponding mount configurations (using the <instanceId> tag). In this example, the unique ID for the remote instance is "instanceId". This same ID as used as the instance ID for the config mount as well as the governance mount.
<id>
<instanceId>
instanceId
Note that registry mounting will not be successful if the registry mount configuration (specified using the <mount> section) does not have a corresponding remote registry instance (specified using the <remoteInstance> section) with the same instance ID. If you have used mismatching instance IDs under the <remoteInstance> and <mount> configurations by mistake, you need to follow the steps given below to rectify the error:
<mount>
Delete the existing local registry. If you are using a database other than the embedded H2, you need to perform the addition step of setting up a new database.
Apply the configurations in the registry.xml file.
registry.xml
Specify the actual mount path and target mount path in each of the mounting configurations. The target path can be any meaningful name. In this instance, it is "/_system/eiconfig".
/_system/eiconfig
Powered by a free Atlassian Confluence Community License granted to WSO2, Inc.. Evaluate Confluence today. | https://docs.wso2.com/pages/diffpagesbyversion.action?pageId=119130178&selectedPageVersions=2&selectedPageVersions=1 | 2020-01-17T18:35:06 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.wso2.com |
This Specification Macro Task will draw lines in a new image.
This property requires a Table Array with the different parameters for information about the line(s) to create. The information required relies on specific formatting. The Table items are:
When this Task is added the properties are static. To be able to build rules on a static properties see How To: Change A Static Property To A Dynamic Property. | https://docs.driveworkspro.com/Topic/IppDrawLinesInNewImage | 2020-01-17T20:47:14 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.driveworkspro.com |
Once your game is complete, you can package a release version for distribution. Whether you go through the Project Launcher in the editor or the Project Launcher tab in Unreal Frontend, the steps are the same for packaging a release-versioned project. However, depending on if you are creating a Windows game and need to create your own installer, shipping a mobile project, or targeting another platform, the steps you take with the finished packaged content will be different.
This is an example of packaging a 1.0 release of ShooterGame, aimed at Windows 64-bit, localized in English.
Open the Project Launcher, either within Unreal Editor or using Unreal Frontend.
Create a new Custom Launch Profile using the + button.
Set a name and description for your profile.
There are a number of settings for the release process.
Project
You can set the specific project to use, or use Any Project to patch the current project.
Build
Set the build configuration to Shipping.
Optionally, expand Advanced Settings if you need to build UAT as part of the release.
Cook
Select By the Book as the cooking method in the dropdown menu.
Check the boxes for all platforms you would like to cook content for. In this example for Windows testing, we have selected WindowsNoEditor.
Check the boxes for all cultures to cook localizations for.
Check the boxes for which maps to cook.
In Release/DLC/Patching Settings:
Check the Create a release version of the game for distribution. checkbox.
Enter a version number for this release.
Expand Advanced Settings and make sure the following options are enabled, as well as any others you need for your specific project's distribution method:
Compress content
Save packages without versions
Store all content in a single file (UnrealPak)
Also under Advanced Settings, set the cooker configuration to Shipping.
Package
Set the build to Package & store locally.
Deploy
Set the build to Do Not Deploy.
Once you have set all the above settings, navigate back to the main profile window using the Back button in the top right corner.
Click on the launch icon next to your Release profile.
The project launcher will go through the building, cooking, and packaging process. This may take some time depending on the complexity of your project.
Once the operation is complete, close the window or click on Done. You can test the patch now with the steps below.
Save the asset registry and pak file from
[ProjectName]\Releases[ReleaseVersion][Platform]. In this example, this is
ShooterGame\Releases\1.0\WindowsNoEditor.
The asset registry and pak file will be needed for any future patches or DLC to check against.
On Windows, you can test running the project from
[ProjectName]\Saved\StagedBuilds\WindowsNoEditor.
While Steam will allow you to upload full packages of your game and do the updating process for you, using release versions as outlined here is still the recommended practice when distributing through Steam. This will make the process smoother if you decide to add additional supported platforms or distribution methods later on. | https://docs.unrealengine.com/en-US/Engine/Deployment/Releasing/index.html | 2020-01-17T19:56:47 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.unrealengine.com |
Table of Contents
Product Index
Pleased to offer you this wonderful set of 80 high resolution Photoshop Brushes of clouds, planets, nebulae et al for Photoshop 7+.
Along with 12 spectacular fantasy space backgrounds each is 1800 x 2400px @ 300 dpi saved at maximum quality plus four backgrounds as. | http://docs.daz3d.com/doku.php/public/read_me/index/21703/start | 2020-01-17T20:19:11 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.daz3d.com |
Quotas Concepts¶
Last Updated: October 2019
This tutorial introduces Tethys Quotas API concepts for Tethys developers. The topics covered include:
Workspace Quotas
Creating custom quotas
Enforcing quotas
Managing quotas
Extended concepts from previous tutorials
0. Start From Advanced Solution (Optional)¶
If you wish to use the advanced solution as a starting point:
git clone cd tethysapp-dam_inventory git checkout -b advanced-solution advanced-3.0
1. Workspace Quotas¶
In the Advanced Concepts tutorial we refactored the Model to use an SQL database, rather than files. However, we might want to store some data as files in case we want to export them later. This will also allow us to demonstrate the use of the built-in workspace qutoas that come with the Tethys Quotas API.
Add the @user_workspace decorator and a user_workspace argument to the
assign_hydrographcontroller. Write the hydrograph CSV with the dam id prepended to the filename to the user's workspace. The prepended id will be used later when handling a user deleting a dam).all() # Defaults dam_select_options = [(dam.name, dam.id) for dam in all_dams] selected_dam = None hydrograph_file = None # Errors dam_select_errors = '' hydrograph_file_error = '' # Case where the form has been submitted if request.POST and 'add-button' in request.POST: # Get Values has_errors = False selected_dam = request.POST.get('dam-select', None) if not selected_dam: has_errors = True dam_select_errors = 'Dam is Required.' # Get File if request.FILES and 'hydrograph-file' in request.FILES: # Get a list of the files hydrograph_file = request.FILES.getlist('hydrograph-file') if not hydrograph_file and len(hydrograph_file) > 0: has_errors = True hydrograph_file_error = 'Hydrograph File is Required.' if not has_errors: # Process file here hydrograph_file = hydrograph_file[0] success = assign_hydrograph_to_dam(selected_dam, hydrograph_file) # Remove csv related to dam if exists for file in os.listdir(user_workspace.path): if file.startswith("{}_".format(selected_dam)): os.remove(os.path.join(user_workspace.path, file)) # Write csv to user_workspace to test workspace quota functionality full_filename = "{}_{}".format(selected_dam, hydrograph_file.name) with open(os.path.join(user_workspace.path, full_filename), 'wb+') as destination: for chunk in hydrograph_file.chunks(): destination.write(chunk) destination.close() # Provide feedback to user if success: messages.info(request, 'Successfully assigned hydrograph.') else: messages.info(request, 'Unable to assign hydrograph. Please try again.') return redirect(reverse('dam_inventory:home')) messages.error(request, "Please fix errors.") ...
Go to the Resource Quotas section of the admin pages and edit the
User Workspace Quotaas follows (must be done on administrator account):
Default -
2e-07(measured in GB so this converts to 214 bytes which allows for storing about 2 hydrographs to test the quota)
Active -
Enabled
Impose default -
Enabled
To test, assign
hydrograph2.csvand
hydrograph4.csv(from Sample Hydrographs) to two separate dams through the app and then try to go back and assign a third hydrograph (all of this must be done on a non-administrator account). You should get an error page that advises you to visit the storage management pages and clean workspaces. Do this now (see Manage User Storage for help) and try again to assign a hydrograph. Because your user workspace is clear you should be able to assign another hydrograph.
Note
Quotas are not enforced on administrator users (i.e. staff/superusers). To manage quotas, login as administrator, but to test them, login as a normal user.
Now that hydrograph files are stored to the user's workspace and the user can clear said workspace through their settings page, we will want to do some extra processing when they actually do clear their workspace. If the user deletes their hydrograph files we also want to remove the related hydrographs from the database.
First add
user_id = Column(Integer) as a column in the Dam model class. Also add
cascade="all,delete" as an argument to the hydrograph relationship in the
Dam model class and the points relationship in the
Hydrograph model class.
class Dam(Base): """ SQLAlchemy Dam DB Model """ __tablename__ = 'dams' # Columns id = Column(Integer, primary_key=True) latitude = Column(Float) longitude = Column(Float) name = Column(String) owner = Column(String) river = Column(String) date_built = Column(String) user_id = Column(Integer) # Relationships hydrograph = relationship('Hydrograph', cascade="all,delete", back_populates='dam', uselist=False) class Hydrograph(Base): """ SQLAlchemy Hydrograph DB Model """ __tablename__ = 'hydrographs' # Columns id = Column(Integer, primary_key=True) dam_id = Column(ForeignKey('dams.id')) # Relationships dam = relationship('Dam', back_populates='hydrograph') points = relationship('HydrographPoint', cascade="all,delete", back_populates='hydrograph')
Note
Adding
cascade="all,delete" as an argument in an sqlalchemey model relationship causes the deletion of related records to be handled automatically. In this case, if hydrograph is removed from the database the hydrograph's points will also be deleted and if a dam is removed the connected hydrograph and its points will be removed.
Then modify the
add_new_dam function like so:
def add_new_dam(location, name, owner, river, date_built, user_id): """ Persist new dam. """ ... # Create new Dam record new_dam = Dam( latitude=latitude, longitude=longitude, name=name, owner=owner, river=river, date_built=date_built, user_id=user_id, ) # Get connection/session to database ...
Add
user_id=-1when initializing
dam1and
dam2in the
init_primary_dbfunction.
def init_primary_db(engine, first_time): ... # Initialize database with two dams dam1 = Dam( latitude=40.406624, longitude=-111.529133, name="Deer Creek", owner="Reclamation", river="Provo River", date_built="April 12, 1993", user_id=-1, ) dam2 = Dam( latitude=40.598168, longitude=-111.424055, name="Jordanelle", owner="Reclamation", river="Provo River", date_built="1941", user_id=-1, ) ...
Then make the following changes to the
add_dam controller:
@permission_required('add_dams') def add_dam(request): """ Controller for the Add Dam page. """ ... user_id = request.user.id # Only add the dam if custom setting doesn't exist or we have not exceed max_dams if not max_dams or num_dams < max_dams: add_new_dam( location=location, name=name, owner=owner, river=river, date_built=date_built, user_id=user_id ) else: ...
Now that we have changed the model for the persistent store we will need to drop the database and re-run
tethys syncstores dam_inventory through the command line. Dropping the database can be done using PGAdmin. Locate the database named dam_inventory_primary_db and delete it. Then re-run
syncstores.
Important
Don't forget to run
tethys syncstores dam_inventory!
Modify the
assign_hydrographcontroller again, this time to only allow users to assign hydrographs to dams that).filter(Dam.user_id == request.user.id) # Defaults ...
Finally, override the
pre_delete_user_workspacemethod that was added with the Tethys Quotas API. Add this to
app.py:
@classmethod def pre_delete_user_workspace(cls, user): from .model import Dam Session = cls.get_persistent_store_database('primary_db', as_sessionmaker=True) session = Session() # Delete all hydrographs connected to dams created by user dams = session.query(Dam).filter(Dam.user_id == user.id) for dam in dams: if dam.hydrograph: session.delete(dam.hydrograph) session.commit() session.close()
Finally, remove the permissions restrictions on adding dams so that any user can add dams.
controllers.py:
def add_dam(request): """ Controller for the Add Dam page. """ ...
base.html:
{% block app_navigation_items %} {% url 'dam_inventory:home' as home_url %} {% url 'dam_inventory:add_dam' as add_dam_url %} {% url 'dam_inventory:dams' as list_dam_url %} {% url 'dam_inventory:assign_hydrograph' as assign_hydrograph_url %} <li class="title">Navigation</li> <li class="{% if request.path == home_url %}active{% endif %}"><a href="{{ home_url }}">Home</a></li> <li class="{% if request.path == list_dam_url %}active{% endif %}"><a href="{{ list_dam_url }}">Dams</a></li> <li class="{% if request.path == add_dam_url %}active{% endif %}"><a href="{{ add_dam_url }}">Add Dam</a></li> <li class="{% if request.path == assign_hydrograph_url %}active{% endif %}"><a href="{{ assign_hydrograph_url }}">Assign Hydrograph</a></li> {% endblock %}
home.html:
{% block app_actions %} {% gizmo add_dam_button %} {% endblock %}
2. Custom Dam Quota¶
With the changes we made to the Dam model, we can now associate each dam with the user that created it and track how many dams each user created. In this part of the tutorial we will create a custom quota to restrict the number of dams a user can create. This will effectively replace the work we did in previous tutorials with the custom setting, max_dams. Instead of limiting the number of dams for the whole app through a custom setting we will restrict it per user with a custom quota.
Note
Restricting the number of dams over the whole app could also be achieved through a custom quota instead of a custom setting. After this tutorial, try to create a custom quota that does the same thing as the custom setting to get more experience with quotas!
Creating a custom quota is pretty simple. Create a new file called
dam_quota_handler.pyand add the following contents:
from tethys_quotas.handlers.base import ResourceQuotaHandler from .model import Dam from .app import DamInventory as app class DamQuotaHandler(ResourceQuotaHandler): """ Defines quotas for dam storage for the persistent store. inherits from ResourceQuotaHandler """ codename = "dam_quota" name = "Dam Quota" description = "Set quota on dam db entry storage for persistent store." default = 3 # number of dams that can be created per user units = "dam" help = "You have exceeded your quota on dams. Please visit the dams page and remove unneeded dams." applies_to = ["django.contrib.auth.models.User"] def get_current_use(self): """ calculates/retrieves the current number of dams in the database Returns: Int: current number of dams in database """ # Query database for count of dams Session = app.get_persistent_store_database('primary_db', as_sessionmaker=True) session = Session() current_use = session.query(Dam).filter(Dam.user_id == self.entity.id).count() session.close() return current_use
Note
See ResourceQuotaHandler for an explanation of the different parameters.
Now go into the portal's
portal_config.ymlfile and add the dot-path of the handler class you just created in the
RESOURCE_QUOTA_HANDLERSarray.
settings: RESOURCE_QUOTA_HANDLERS: - tethysapp.dam_inventory.dam_quota_handler.DamQuotaHandler
After re-starting tethys the
User Dam Quotashould be visible in the
Resource Quotasection of the admin pages. Click on it and make sure Active and Impose default are both
Enabled.
Go into the app's settings page through the portal admin pages and delete the value for
max_damsin the
CUSTOM SETTINGSsection. This will ensure that our custom quota is handling the amount of dams that can be added instead of the custom setting.
To enforce the new dam quota import the
@enforce_quotadecorator and add it to the
add_damcontroller.
from tethys_sdk.quotas import enforce_quota ... @enforce_quota('user_dam_quota') @permission_required('add_dams') def add_dam(request): """ Controller for the Add Dam page. """ ...
Note
We used the codename
user_dam_quota instead of just
dam_quota because Tethys Quotas appends what the quota
applies_to (from the ResourceQuotaHandler class parameters) to the codename to differentiate between quotas on users or on apps.
If we wanted to enforce our custom dam quota on an app as a whole we would need to add
"tethys_apps.models.TethysApp" to the
applies_to parameter in our
DamQuotaHandler and then change the codename to
tethysapp_dam_quota.
You can now test this by logging into a non-administrator account and trying to create more than 3 dams. You should be taken to another error page telling you that you have reached the limit on dams you can create.
3. Dam Quota Management¶
As is, the app would never allow a user to add a new dam once the quota was reached unless the portal administrator changed the dam quota default value (or made the quota inactive) or removed dams created by that user from the database. We will now add a way for a user to remove dams they have created through the
list_dams controller.
Create the
delete_damfunction in
controllers.py:
@user_workspace @login_required() def delete_dam(request, user_workspace, dam_id): """ Controller for the deleting a dam. """ Session = app.get_persistent_store_database('primary_db', as_sessionmaker=True) session = Session() # Delete hydrograph file related to dam if exists for file in os.listdir(user_workspace.path): if file.startswith("{}_".format(int(dam_id))): os.remove(os.path.join(user_workspace.path, file)) # Delete dam object dam = session.query(Dam).get(int(dam_id)) session.delete(dam) session.commit() session.close() messages.success(request, "{} Dam has been successfully deleted.".format(dam.name)) return redirect(reverse('dam_inventory:dams'))
Add this
delete_damurl map to
app.py:
UrlMap( name='delete_dam', url='dam-inventory/delete_dam/{dam_id}', controller='dam_inventory.controllers.delete_dam' ),
Refactor the
list_damscontroller to add a Delete button for each dam. The code will restrict user's to deleting only dams that they created.
@login_required() def list_dams(request): """ Show all dams in a table view. """ dams = get_all_dams() table_rows = [] for dam in dams: hydrograph_id = get_hydrograph(dam.id) if hydrograph_id: url = reverse('dam_inventory:hydrograph', kwargs={'hydrograph_id': hydrograph_id}) dam_hydrograph = format_html('<a class="btn btn-primary" href="{}">Hydrograph Plot</a>'.format(url)) else: dam_hydrograph = format_html('<a class="btn btn-primary disabled" title="No hydrograph assigned" ' 'style="pointer-events: auto;">Hydrograph Plot</a>') if dam.user_id == request.user.id: url = reverse('dam_inventory:delete_dam', kwargs={'dam_id': dam.id}) dam_delete = format_html('<a class="btn btn-danger" href="{}">Delete Dam</a>'.format(url)) else: dam_delete = format_html('<a class="btn btn-danger disabled" title="You are not the creator of the dam" ' 'style="pointer-events: auto;">Delete Dam</a>') table_rows.append( ( dam.name, dam.owner, dam.river, dam.date_built, dam_hydrograph, dam_delete ) ) dams_table = DataTableView( column_names=('Name', 'Owner', 'River', 'Date Built', 'Hydrograph', 'Manage'), rows=table_rows, searching=False, orderClasses=False, lengthMenu=[[10, 25, 50, -1], [10, 25, 50, "All"]], ) context = { 'dams_table': dams_table, 'can_add_dams': has_permission(request, 'add_dams') } return render(request, 'dam_inventory/list_dams.html', context)
Test by deleting a dam or two (while logged in as the non-administrator) and trying to add new dams. This time you shouldn't be redirected to the error page, but should be able to add a dam like normal because you brought the number of dams created by the current user below the quota's default value.
4. Solution¶
This concludes the Quotas Tutorial. You can view the solution on GitHub at or clone it as follows:
git clone cd tethysapp-dam_inventory git checkout -b quotas-solution quotas-3.0 | http://docs.tethysplatform.org/en/latest/tutorials/quotas.html | 2020-01-17T18:46:42 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.tethysplatform.org |
All content with label batching+distribution+gridfs+guide+infinispan+installation+interceptor+intro+query+rebalance+write_behind+缓存.
Related Labels:
podcast, expiration, publish, datagrid, coherence, server, rehash, replication, transactionmanager, dist, release, partitioning, deadlock, archetype, jbossas, lock_striping, schema, listener, state_transfer,
cache, s3, amazon, grid, test, jcache, api, ehcache, maven, documentation, wcm, youtube, userguide, ec2, s, hibernate, getting, interface, custom_interceptor, setup, clustering, eviction, out_of_memory, concurrency, examples, jboss_cache, import, index, events, configuration, hash_function, batch, buddy_replication, loader, colocation, write_through, cloud, mvcc, tutorial, notification, presentation, murmurhash2, jbosscache3x, read_committed, xml, started, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, br, websocket, transaction, async, interactive, xaresource, build, gatein, hinting, searchable, demo, scala, client, migration, filesystem, jpa, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, murmurhash, standalone, repeatable_read, hotrod, webdav, snapshot, docs, consistent_hash, jta, faq, 2lcache, as5, jsr-107, lucene, jgroups, locking, rest, hot_rod
more »
( - batching, - distribution, - gridfs, - guide, - infinispan, - installation, - interceptor, - intro, - query, - rebalance, - write_behind, - 缓存 )
Powered by a free Atlassian Confluence Open Source Project License granted to Red Hat, Inc.. Evaluate Confluence today. | https://docs.jboss.org/author/labels/viewlabel.action?ids=4456522&ids=4456518&ids=4456481&ids=4456550&ids=4456479&ids=4456501&ids=4456563&ids=4456537&ids=4456470&ids=4456515&ids=4456559&ids=4456590 | 2020-01-17T19:28:38 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.jboss.org |
The network ports are either physical ports or virtualized ports. VLANs and interface groups constitute the virtual ports. Interface groups treat several physical ports as a single port, while VLANs subdivide a physical port into multiple separate logical ports.
The underlying physical port or interface group ports for a VLAN port can continue to host LIFs, which transmit and receive untagged traffic.
The port naming convention is enumberlettere<number>letter:
"e" represents Ethernet.
"a" indicates the first port, "b" indicates the second port, and so on.
For example, eob indicates that an Ethernet port is the second port on the node's motherboard.
VLANs must be named by using the syntax port_name-vlan-id. "port_name" specifies the physical port or interface group and "vlan-id" specifies the VLAN identification on the network. For example, e1c-80 is a valid VLAN name. | https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-900/GUID-A8052ED8-5AE3-4590-AE2E-5E9B3E218CBA.html | 2020-01-17T18:52:52 | CC-MAIN-2020-05 | 1579250590107.3 | [] | docs.netapp.com |
Has Contacts¶
This Condition located on the Ministry category tab in Search Builder allows you find people who have any type of Contact on their record. The Comparison Values are either True or False.
Use Case
You want to find everyone who has visited in the past month, but has not received a Contact. You would combine the Has Contacts Condition (setting the value to false) with a Recent Attendance Condition, perhaps for a specific Program and/or Division.
Search Builder has several other Conditions relating to Contacts. So be sure to look at all the options on the Ministry tab.
See also | http://docs.touchpointsoftware.com/SearchBuilder/QB-HasContacts.html | 2019-03-18T19:21:09 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.touchpointsoftware.com |
Apps and Features Compatible with Restricted Zones
Support for restricted StorageZones affects all aspects of the ShareFile service. As a result of protocol changes required to support metadata encryption and zone authentication, some ShareFile clients and features are not supported when working with documents in a restricted StorageZone.
Contents:
- Clients and tools
- Browsers
- Features
- Sync for Windows
- Mobile Apps
- Outlook Plugin
Clients and ToolsClients and Tools
BrowsersBrowsers
FeaturesFeatures
End user actions: Working with files:
End user actions: Sharing and collaboration:
Administrative actions:
Sync for WindowsSync for Windows
Minimum version - 3.1
Mobile AppsMobile Apps
Please refer to app-specific tables below:
iOS - Minimum version 3.3
Android - Minimum version 3.4
Windows Phone 8 - Minimum version 2.3.10
Outlook PluginOutlook Plugin
Support:
Feedback and forums: | https://docs.citrix.com/en-us/storagezones-controller/5-0/restricted-storagezones/apps-and-features-compatible-with-restricted-zones.html | 2019-03-18T20:44:51 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.citrix.com |
SessionInvalid event is sent if the user session is invalid
Member of WAM (PRIM_WAM)
The SessionInvalid event is fired if a webroutine attempts to execute while the session is no valid. The SenderName parameter will contain the name of the failed routine.
All Component Classes
Technical Reference
Febuary 18 V14SP2 | https://docs.lansa.com/14/en/lansa016/prim_wam_sessioninvalid.htm | 2019-03-18T20:01:59 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.lansa.com |
Difference between revisions of "Hyrax - Administrators Interface"
Revision as of 00:21, 10 June 2011
Contents
1 Overview
2 Installation & Configuration
2.1 BES
2.2 OLFS
The HAI is a regular part of the OLFS distribution and simply needs to be enabled by configuring the OLFS to communicate with the BES admin port, and Tomcat to allow you to access the UI.
2.2.1 olfs.xml
In the olfs.xml file you will need to add (or uncomment) the following for each BES:
<adminPort>11002</adminPort>
You will need to manually verify that the value of the adminPort element is the same as the BES.DaemonPort parameter specified in the bes.conf file for that BES instance.
2.2.2 Tomcat Users
You will need to configure Tomcat to support container managed security, by connecting to an existing "database" (aka Realm) of usernames, passwords, and user roles. Tomcat supports several authentication Realms including LDAP. What follow are simple instructions for getting a Memory-Realm working.'The Memory-Realm is not for production use, and the example is provided only as a mean by which to easily demonstrate and allow one to test the HAI features.
Look here from more information on Tomcat and other suthentication Realms
2.2.2.1 how-to
- Edit the file $CATALINA_HOME/conf/tomcat-users.xml
- Add a user whose role is "manager".
<user username="admin" password="foo" roles="manager,hyrax-manager" />
And be sure to make the password something better than "foo".
- done.
2.2.3 Tomcat SSL
In order to use the HAI you will need to configure your tomcat instance to enable SSL (see. How to accomplish this is covered in detail here at the Tomcat site.
From their Quick Start section:
2.2.4 Olfs Details
The HAI servlet (as part of the Hyrax web application) utilizes a <security-constraint> element, and a <login-config> element that define> | https://docs.opendap.org/index.php?title=Hyrax_-_Administrators_Interface&diff=6499&oldid=6498 | 2019-03-18T20:48:13 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.opendap.org |
Difference between revisions of "Hyrax - Administrators Interface"
Revision as of 21:21, 6 October 2011
Contents/content/opendap/olfs.xml configuration file controls the level of access control for the HAI
- The role used by the HAI is set in the $CATALINA_HOME/content/opendap/olfs.xml configuration file using the <auth-constraint> element. You can switch roles by changing the <role-name>.. | https://docs.opendap.org/index.php?title=Hyrax_-_Administrators_Interface&diff=6726&oldid=6725 | 2019-03-18T20:45:51 | CC-MAIN-2019-13 | 1552912201672.12 | [] | docs.opendap.org |
A quick start guide¶
This section aims to provide you with a very high-level overview of the Selinon project and its configuration. If you want to dig deeper, follow the next sections.
Selinon concept overview¶
A system consists of flows, storages or databases, and tasks. Each flow defines a directed graph (which can even be cyclic, so there are no DAG limitations) of well defined dependencies between nodes that compute results. A node can be either a task or another flow, so you can nest flows as deeply as desired. You can decide when to run which nodes based on conditions that are made of predicates.
YAML configuration overview¶
Selinon is configured by easy-to-learn, easy-to-read and easy-to-maintain declarative configuration files written in the YAML markup language.
In order to use Selinon, you have to implement tasks and define your flows in a YAML configuration file (or split it across multiple YAML configuration files).
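To give a concrete shape to "implement tasks": a task is a class with a `run()` method that receives the flow arguments and returns a JSON-serializable result. The sketch below is illustrative (the task and module names are made up); the stand-in base class only lets it run outside a configured Selinon deployment, and in a real worker you would import `SelinonTask` from `selinon` directly:

```python
try:
    from selinon import SelinonTask
except ImportError:
    # Stand-in so this sketch runs without Selinon installed;
    # a real worker always uses the real base class.
    class SelinonTask:
        pass


class Task1(SelinonTask):
    """Example task; the returned dict is stored via the configured storage."""

    def run(self, node_args):
        # node_args carries the arguments the flow was started with
        # (or arguments propagated from a previous node)
        foo = (node_args or {}).get('foo', 'bar')
        return {'foo': foo}
```

The returned value is what edge conditions later inspect (e.g. `result['foo'] == 'bar'` in the flows below).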
Setting up Selinon¶
First, let’s install Selinon from PyPI:
$ pip3 install selinon
Selinon comes with extras (also known as bundles in other terminology) to reduce your dependencies. You can select the desired bundles from the list below:
- celery - needed if you use Celery
- mongodb - needed for MongoDB storage adapter
- postgresql - needed for PostgreSQL storage adapter
- redis - needed for Redis storage adapter
- s3 - needed for S3 storage adapter
- sentry - needed for Sentry support
To install Selinon with all extras issue the following command:
$ pip3 install selinon[celery,mongodb,postgresql,redis,s3,sentry]
Note
Some terminals (such as zsh) might require quoting:
pip3 install "selinon[celery,mongodb,postgresql,redis,s3,sentry]"
Feel free to select only extras you need in your deployment.
In order to configure Selinon you need to create Celery's app instance and pass all Celery-related configuration to it. After that you are ready to configure Selinon:
from selinon import Config
from celery import Celery

from myapp.celery_settings import CelerySettings

app = Celery('tasks')
app.config_from_object(CelerySettings)

Config.set_celery_app(app)
Config.set_config_yaml('path/to/nodes.yaml',
                       ['path/to/flow1.yaml', 'path/to/flow2.yaml'])
Please refer to Celery configuration or Selinon demo for Celery-related pieces. You can also find an example in Selinon demo configuration.
Naming convention¶
Imagine you defined two flows (flow1 and flow2) that consist of five tasks named Task1, Task2, Task3, Task4 and Task5. Such flows are illustrated in the images below.
In the flow flow2 (shown above) we start node Task4 on a condition that is always true (i.e. we start whenever Selinon is requested to start flow2). After Task4 finishes, we (always) start node Task5, which ends the flow flow2. Results of the tasks are stored in the database named Storage2.
The second flow is slightly more complex. We (always) start with Task1, which transparently stores its results in Storage1. After Task1 finishes, Selinon (to be more precise, the dispatcher task) checks the results of Task1 in Storage1 and, if the condition result['foo'] == 'bar' evaluates to True, the dispatcher starts nodes Task2 and flow2. After both Task2 and flow2 finish, the dispatcher starts Task3. If the condition result['foo'] == 'bar' (now evaluated on the result of Task3) is met, Task1 is started again and the whole process repeats iteratively. Results of all tasks are stored in the database named Storage1, except for results computed in the sub-flow flow2, where Storage2 is used (see the previous flow graph above).
Note that Task2 and the whole flow2 could be executed in parallel, as there are no data or time dependencies between these two nodes. Selinon runs as many nodes in parallel as possible. This makes it really easy to scale your system - the only bottleneck you can hit is the number of computational nodes in your cluster or limitations on the storage/database side.
Flow definitions¶
Conditions¶
Conditions are made of predicates that can be nested as desired using logical operators - and, or and not. There are predefined predicates available in
selinon.predicates; however, you can also define your own predicates based on your requirements.
These conditions are evaluated by the dispatcher, and if a condition is met, the desired node or nodes are scheduled. If the condition evaluates to false, the destination nodes on the given edge are not run. Note that a condition is evaluated only once, and only after all of its source nodes have successfully finished.
If you do not state a condition in an edge definition, the edge condition will always be evaluated as true.
Since multiple nodes of the same type (name) can run due to cyclic dependencies, an edge condition is evaluated for each possible combination of source nodes (and only once for a given combination). If you want to avoid such behaviour, check the Useful flow patterns section for possible solutions.
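As an illustration of what a user-defined predicate can look like: a predicate is a plain function that receives the stored result of the referenced node plus the keyword arguments from the YAML args section, and returns a boolean. The sketch below mimics the behaviour of a field-equality check; the exact signature is an assumption based on the predicates shipped in selinon.predicates, so check there for the authoritative form:

```python
def field_equal(message, key, value):
    """Return True if message[key] equals the given value.

    ``message`` is the result of the referenced node as retrieved from its
    storage; ``key`` and ``value`` come from the ``args`` section in YAML.
    """
    try:
        return message[key] == value
    except (KeyError, TypeError):
        # a missing key or a non-dict result means the condition is not met
        return False
```

In a YAML condition, such a predicate is referenced by name, with its keyword arguments supplied through args.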
Starting nodes¶
You can have a single starting node or multiple starting nodes in your flow. If you define a single starting node, the result of the starting node can be propagated to the other nodes as arguments if
node_args_from_first is set. If you define more than one starting node, the result cannot be propagated (due to time-dependent evaluation); however, you can still explicitly define arguments that are passed to the flow (or make part of your flow a sub-flow).
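As a sketch of the single-starting-node case (the flow name is hypothetical, and the placement of the node_args_from_first key follows my reading of the flow configuration, so check it against the configuration reference):

```yaml
---
flow-definitions:
  - name: 'flowWithPropagatedArgs'
    # the result of the (single) starting node Task1 becomes node_args
    # for the subsequent nodes
    node_args_from_first: true
    edges:
      - from:
        to: 'Task1'
      - from: 'Task1'
        to: 'Task2'
```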
Flows¶
Flows can be nested as desired. The only limitation is that you cannot currently inspect results of a sub-flow using edge conditions in the parent flow. There is a plan to remove this limitation in future Selinon releases. Nevertheless, you can (in most cases) still reorganize your flow so that you are not affected by this restriction.
Running a flow¶
Once you have set up Selinon and it does not report any errors in your configuration files, you can run a flow simply by calling the
run_flow function (see the documentation of
run_flow()):
from selinon import run_flow

dispatcher_id = run_flow('flow1', {'foo': 'bar'})
If you wish to do selective task runs, please refer to the Selective task run documentation.
Node failures¶
You can define fallback tasks and fallback flows that are run if a node fails. These fallback tasks and flows (fallback nodes) are not prone to time-dependent evaluation (to be more precise, there is no time-dependent evaluation anywhere in the Selinon design, so you can be sure it does not occur at the Selinon level). These fallback nodes are scheduled on task or flow failures, and their aim is to recover from a failure.
Failures are propagated from sub-flows to parent flows. You can see an analogy to exceptions as known in many programming languages (such as Python). If a node fails and there is no fallback node that would handle the failure, the whole flow is marked as failed. You can then capture this failure in the parent flow, but it will be marked as a failure of the whole sub-flow. Note that even in this case there is no time-dependent evaluation - so if a node in your flow fails, the dispatcher can still continue scheduling nodes that are not affected by the failure, and once there is nothing more to do, the dispatcher marks the flow as failed.
Now let's assume that you defined two fallbacks: one waits for a Task1 failure (Fallback1) and another one waits for a failure of both Task1 and Task2 (Fallback2).
Let's say that Task1 failed. In that case, which fallback is run depends on whether Task2 fails (not on time-dependent evaluation). Fallback evaluation is greedy, so if Task2 fails, Fallback2 is run. If Task2 succeeds, Fallback1 is run.
Results of tasks¶
Results of tasks are stored in databases transparently based on your definition in YAML configuration files. The only thing you need to provide is a database adapter that handles database connection and data storing/retrieval. See storage section for more info.
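As an illustration, a minimal in-memory adapter might look like the following. The method names (connect, is_connected, store, retrieve) follow Selinon's storage adapter interface as an assumption to be checked against the storage section; a real adapter would subclass Selinon's storage base class and talk to an actual database, and the base class is omitted here so the sketch stays self-contained:

```python
class InMemoryStorage:
    """Sketch of a storage adapter: keeps task results in a dict.

    A production adapter would subclass Selinon's storage base class and
    persist results in a real database. The `configuration` dict comes from
    the `configuration:` key in the YAML storage definition.
    """

    def __init__(self, configuration=None):
        self.configuration = configuration
        self._conn = None

    def is_connected(self):
        return self._conn is not None

    def connect(self):
        self._conn = {}  # stand-in for opening a database connection

    def disconnect(self):
        self._conn = None

    def store(self, node_args, flow_name, task_name, task_id, result):
        # called after a task finishes to persist its result
        self._conn[(flow_name, task_name, task_id)] = result

    def retrieve(self, flow_name, task_name, task_id):
        # called when a task's stored result is requested
        return self._conn[(flow_name, task_name, task_id)]
```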
YAML configuration example¶
In this section you can find the YAML configuration files that were used to generate the images in the previous sections. You can split flows across multiple files; each file just needs a
flow-definitions key under which its flows are defined.
---
flow-definitions:
  - name: 'flow1'
    edges:
      - from:
        to:
          - 'Task1'
      - from:
          - 'Task1'
        to:
          - 'Task2'
          - 'flow2'
        condition:
          name: 'fieldEqual'
          node: 'Task1'
          args:
            key: 'foo'
            value: 'bar'
      - from:
          - 'Task2'
          - 'flow2'
        to:
          - 'Task3'
      - from:
          - 'Task3'
        to:
          - 'Task1'
        condition:
          name: 'argsFieldEqual'
          node: 'Task3'
          args:
            key: 'foo'
            value: 'bar'
---
flow-definitions:
  - name: 'flow2'
    edges:
      - from:
        to:
          - 'Task4'
      - from:
          - 'Task4'
        to:
          - 'Task5'
Configuration for the failures and failure-handling fallbacks that were introduced in the Node failures section can be found below (no storages are used in the example).
---
flow-definitions:
  - name: 'exampleFallback'
    edges:
      - from:
        to: 'Task1'
      - from:
        to: 'Task2'
    failures:
      - nodes:
          - 'Task2'
        fallback:
          - 'Fallback1'
      - nodes:
          - 'Task1'
          - 'Task2'
        fallback:
          - 'Fallback2'
Entities in the system¶
This configuration could be placed in
nodes.yaml:
---
tasks:
  - name: 'Task1'
    output_schema: 'path/to/schema1.json'
    # `classname` is omitted, it defaults to `name`
    # from worker.task1 import Task1
    import: 'worker.task1'
    storage: 'Storage1'
    # queue name to which messages will be sent
    queue: 'queue_Task1_v0'

  - name: 'Task2'
    import: 'worker.task2'
    storage: 'Storage1'
    output_schema: 'path/to/schema2.json'
    # task names are not bound to class names (you can create aliases)
    # from worker.task2 import MyTask2 as Task2
    classname: 'MyTask2'
    queue: 'queue_Task2_v1'

  - name: 'Task3'
    import: 'worker.task3'
    storage: 'Storage1'
    output_schema: 'path/to/schema3.json'
    classname: 'Task1'
    max_retry: 1
    # If queue is omitted, Celery's default queue (celery) will be used
    #queue: 'celery'

  - name: 'Task4'
    import: 'worker.task4'
    storage: 'Storage2'
    output_schema: 'path/to/schema4.json'
    classname: 'Task4'
    max_retry: 1

  - name: 'Task5'
    import: 'worker.task1'
    storage: 'Storage2'
    output_schema: 'path/to/schema1.json'
    classname: 'Task4'
    # in case of failure retry once after 10 seconds before marking node as failed
    max_retry: 1
    retry_countdown: 10

flows:
  # state all flows you have in your system, otherwise Selinon will complain
  - 'flow1'
  - 'flow2'

storages:
  - name: 'Storage1'
    # from storage.storage1 import MyStorage as Storage1
    # This way you can have multiple storages of a same type with different
    # configuration (different reference name)
    classname: 'MyStorage'
    import: 'storage.storage1'
    configuration: 'put your configuration for Storage1 here'

  - name: 'Storage2'
    # classname is omitted, it defaults to `name`
    # from storage.storage2 import Storage2
    import: 'storage.storage2'
    configuration: 'put your configuration for Storage2 here'
See YAML configuration section for more details.
Contributing¶
Thanks for your interest in contributing to Kinto!
Note
We love community feedback and are glad to review contributions of any size - from typos in the documentation to critical bug fixes - so don’t be shy!
How to contribute¶
Communication channels¶
- Questions tagged kinto on Stack Overflow.
- Our IRC channel #kinto on irc.freenode.net — Click here to access the web client
- Our team blog
- The Kinto mailing list.
- Some #Kinto mentions on Twitter :)
Hack¶
Ready to contribute? Here’s how to set up Kinto for local development.
Fork the Kinto repo on GitHub.
Clone your fork locally:
git clone [email protected]:your_name_here/kinto.git

If you need to work on Kinto's dependencies (like Cliquet or Cornice), just install them from your local folder using
pip. For example:
cd ..
git clone
cd kinto/
.venv/bin/pip install -e ../cliquet/
Run load tests¶
From the
loadtests folder:
make test SERVER_URL=
Run a particular type of action instead of random:
LOAD_ACTION=batch_create make test SERVER_URL=
(See loadtests source code for an exhaustive list of available actions and their respective randomness.) protocol was updated (via Cliquet for example), update API changelog in
docs/api/index.rst
- If Cliquet was updated, update the link in
docs/configuration/production.rst
Quotas
You can use
System Manager
to create, edit, and delete quotas.
More information
Creating quotas
Quotas enable you to restrict or track the disk space and number of files used by a user, group, or qtree. You can use the
Add Quota
wizard
in
System Manager
to create a quota and apply it to a specific volume or qtree.
Deleting quotas
You can use
System Manager
to delete one or more quotas as your users and their storage requirements and limitations change.
Editing quota limits
You can use
System Manager
to edit the disk space threshold, the hard and soft limits on the amount of disk space that the quota target can use, and the hard and soft limits on the number of files that the quota target can own.
Activating or deactivating quotas
You can use
System Manager
to activate or deactivate quotas on one or more selected volumes on your storage system, as your users and their storage requirements and limitations change. You can also view the qtrees to which the quotas are applied, the type of quota, the user or group to which the quota is applied, and the space and file usage.
Types of quotas
Quotas can be classified on the basis of the targets to which they are applied.
How qtree changes affect quotas
When you delete, rename, or change the security style of a qtree, the quotas applied by Data ONTAP might change, depending on the current quotas being applied.
How changing the security style of a qtree affects user quotas
Changing the security style of a qtree can affect how user quotas are calculated and applied.
How quotas work with users and groups
When you specify a user or group as the target of a quota, the limits imposed by that quota are applied to that user or group. However, some special groups and users are handled differently. There are different ways to specify IDs for users, depending on your environment.
Quotas window
You can use the
Quotas
window
to create, display, and manage information about quotas.
Parent topic:
Managing logical storage
Part number: 215-11149-D0
June 2017
Updated for ONTAP 9.2
People Extra Value Field¶
This Condition located on the Extra Values category tab in Search Builder allows you to enter the name of the Extra Value field to find records with that specific Extra Value.
Extra Values are different for every church so make sure you are familiar with the ones in your database.
Note
The name in the Condition dialog box is Has Extra Value Field.
Use Case
We have a Standard Extra Value field named Country of Origin that we use for those enrolled in our ESL classes. If you use this Condition with Country of Origin as the EV text, the results will be only those with data in that Extra Value field even though every record in the database has that EV on it.
You can limit your search by adding a Condition Is Member Of and limit your search to just those enrolled in specific classes.
Tip
If you change the Comparison to Not Equal and you use an EV such as Country of Origin, the results will be everyone without any text in that EV field.
There are a number of options for finding Extra Values based on the type of EV you are looking for. So, take a look at the various options to see which best fits what you are looking for.
Specifies an enabled state for the target UI element.
Namespace: DevExpress.ExpressApp.ConditionalAppearance
Assembly: DevExpress.ExpressApp.ConditionalAppearance.v18.2.dll
If there are conditional appearance rules that change the enabled state within the rules appropriate for the target UI element, the AppearanceObject's Enabled property returns the enabled state specified by the rule with the higher priority (see IAppearance.Priority). If there are no rules that change the enabled state, the Enabled property returns null (Nothing in VB).
By integrating your organization's View, Horizon 6, or Horizon 7 environment with your VMware Identity Manager deployment, you give your VMware Identity Manager users the ability to access their desktops and applications from the Workspace ONE portal.
Supported Versions
VMware Identity Manager supports the following versions and features.
Integrating independent View pods is supported for View 5.3 and later.
Integrating pod federations, created using the Cloud Pod Architecture feature, is supported for Horizon 6.2 and later.
HTML Access is supported for Horizon 6.1.1 and later.
Certificate SSO is supported for Horizon 7.x.
Also see the VMware Product Interoperability Matrix for the latest support information.
Installation
Zip bundle.
Building from sources:
Usage.
Limitations

Windows Media Player
When event callbacks are supported, you will be able to subscribe to the player.statusChange event, so that you can play the wav entirely before loading a new sample (instead of listening only to the first second of each sample).
Jparsec is a recursive-descent parser combinator framework written for Java. It constructs parsers in native Java language only.
Jparsec stands out for its combinator nature. It is not a parser generator like YACC or ANTLR. No extra grammar file is required. Grammar is written in the native Java/C# language, which also means you can utilize all the utilities in the Java/.NET community to get your parser fancy.
Jparsec is an implementation of Haskell Parsec on the Java platform.
PG&E's first interconnection proposal is to establish a new rule, "Gas Rule 27 - Gas Transmission Facilities Connections." This new rule would address the interconnection of electric generation facilities and other large noncore customers who request service from PG&E's gas transmission system. PG&E's second interconnection proposal is to offer a new tariffed service to off-system end users who want to directly connect to PG&E's backbone facilities.
PG&E is proposing Gas Rule 27 to address the needs of large gas customers who require transmission-level service. Rule 27 would apply to transmission-level customers who are served under the following existing gas rate schedules: Schedule G-EG - Gas Transportation Service to Electric Generation; Schedule G-COG - Gas Transportation Service to Cogeneration Facilities; and Schedule G-NT - Gas Transportation Service to Noncore End-Use Customers.103 Rule 27 allows for revenue-based allowances, while ensuring recovery of costs for both reinforcements of PG&E's existing system and the extension of new facilities, through local transmission and customer access charges revenue generated by customers.
PG&E contends that Rule 27 is needed because Gas Rule 15,104 which is the only PG&E tariff applicable to gas transmission interconnections, primarily applies to distribution-level interconnections at pipe pressures less than 60 pounds per square inch, and contemplates transmission-level interconnections at PG&E's convenience. PG&E states that distribution-level facilities are rarely of sufficient capacity to serve large customers. Rule 15 also limits the allowances towards investments made by PG&E to extend transmission facilities to serve new customers. Since transmission-level customers currently pay for the majority of costs associated with any transmission-level extension, Rule 15 creates an obstacle to the citing of electric generation, as well as other noncore load. The revenue credit proposed in Rule 27 would replace the relatively small distribution-based revenue allowance in Rule 15.
If Rule 27 is not adopted, PG&E would have to file an advice letter for an extension of service as an exceptional case,105 under the provisions of Gas Rules 15 and 16,106 each time transmission-level service is sought. Such a filing would be necessary in most instances because the costs of connecting these large customers exceed the local transmission and customer access charges revenues. If Rule 27 is adopted, it will eliminate most of the exceptional case advice letter filings because the guidelines are contained in Rule 27.
Some of the parties propose that the language of Rule 27 be clarified in certain respects, and that connecting customers be provided with additional financial incentives.
The purpose behind PG&E's proposal for a new tariffed service to directly connect off-system customers to the backbone is to attract users who are interested in using PG&E's transmission service as an alternative to using interstate pipelines, a private pipeline accessing California gas production, or an alternative fuel source. This interest has occurred along the Baja path (Line 300) and near the terminus of Line 401. By allowing these end users to connect to the backbone, they will have added supply options, including California and Canadian gas sources, as well as improved service reliability.
In D.94-12-061 (58 CPUC2d 440), PG&E was authorized to offer off-system direct connect service on Line 401. However, this off-system direct connect service does not apply to the rest of PG&E's backbone facilities, and a Line 401 direct connect request requires the filing of an application under the Expedited Direct Connection Docket (EDCD) for each customer.
PG&E's proposal seeks to allow off-system end users to directly connect to PG&E's transmission facilities if they meet two eligibility requirements, which are described at page 18-7 of Exhibit 1, and in the discussion portion of this section. PG&E also proposes to allow the off-system direct connect customers to take other PG&E services, such as monthly balancing, subject to the specific terms and conditions of those services. The off-system direct connect customers will be required to sign an agreement specifying the terms of service, and a customer-specific monthly interconnection charge will be developed and assessed based on the ongoing costs to maintain the meter and interconnection.
Overall, CCC/Calpine support Rule 27. However, CCC/Calpine believe that Rule 27 should be modified to better account for the benefits that PG&E and its customers incur as the result of facilities built on behalf of a particular customer. Rather than requiring the customer to pay the entire estimated contribution in all instances, as proposed Rule 27 would do, CCC/Calpine propose that PG&E share the risk of new interconnections by waiving the unrecovered balance if the customer is able to reduce that balance to meet any of the following milestones: (1) 50% within 3 years; (2) 65% within 5 years; (3) 75% within 7 years. CCC/Calpine assert that that customers that meet these proposed milestones will have demonstrated their viability and the likelihood that PG&E will recover its margin.
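To make the milestone mechanics concrete, here is a small sketch of the proposed waiver test. The milestone percentages come from the CCC/Calpine proposal above; the dollar figures, and the assumption that "reduce the balance" means cumulative revenue credits against the estimated contribution, are hypothetical illustrations rather than part of the record:

```python
# CCC/Calpine milestone schedule: (years elapsed, fraction of the
# estimated contribution that must be recovered by then).
MILESTONES = [(3, 0.50), (5, 0.65), (7, 0.75)]

def waiver_applies(contribution, recovered_by_year):
    """Return True if cumulative recovery hits any milestone.

    contribution      -- estimated customer contribution, in dollars
    recovered_by_year -- dict mapping year -> cumulative revenue credited
    """
    for year, fraction in MILESTONES:
        if recovered_by_year.get(year, 0.0) >= fraction * contribution:
            return True
    return False

# Hypothetical customer: $1,000,000 contribution.
recovery = {3: 420_000, 5: 660_000, 7: 700_000}
print(waiver_applies(1_000_000, recovery))  # True: 66% recovered by year 5
```

Under PG&E's proposed rule, this customer would remain liable for the unrecovered balance at year 10; under the milestone proposal, the balance would be waived once the 65%-in-5-years threshold is met.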
CCC/Calpine also propose that Rule 27 be amended to give the interconnecting customer credit for the full costs PG&E avoids by having incremental capacity made available, and to provide for refunds if additional new customers take service from the facility. PG&E should pay for, or credit the customer for, the full costs that PG&E avoids as a result of the interconnection. PG&E's proposal to pay only the incremental cost of these additions is not appropriate.
In order to properly reflect the benefits that can be created by new interconnections, CCC/Calpine suggest that Rule 27 be modified to include a provision similar to Rule 15.E. Rule 15.E provides a customer with a refund of his contribution to a distribution main addition if additional new customers take service from that main. Adding such a provision ensures that interconnecting customers are not required to subsidize improvements enjoyed by other customers.
CCC/Calpine also propose that backbone rate revenue be included in the calculation of customer contributions toward meeting Rule 27's economic benefit test. Customers contribute to net margin to PG&E through the payment of backbone rates. Under the economic benefit test of Rule 27, only revenues from local transmission and customer access charges are considered.
PG&E contends that customers seeking connections to high-pressure facilities should be treated as Special Facilities under Rule 2, rather than under Rule 27. PG&E asserts that high-pressure facilities are a special benefit to the requesting customer that does not benefit the system at large. CCC/Calpine contend that the reality is that new gas turbine generators require high-pressure gas service, and a substantial portion of the costs of interconnections with new electric generation facilities is often the cost of providing higher-pressure service. To recognize the realities of electric generation, Rule 27 should be amended to include within a transmission facility connection, the cost of facilities to provide higher delivery pressures.
CCC/Calpine point out that proposed Rule 27.A.1.d provides that PG&E will not be required to connect with any non-PG&E pipelines. CCC/Calpine contend that in some cases, the most cost-effective way for a new generator to be served is to interconnect with a private or municipal pipeline that is or can be connected to PG&E's system. Rule 27 should not prevent customers from seeking the most cost-effective method of obtaining gas for electric generation.
Given the significant reservations about how the rule will work and its potential impact on customers, CCC/Calpine believe that it may be appropriate to defer consideration of the rule until workshops on the rule have been held.
Duke believes that Rule 27 is acceptable so long as PG&E clarifies that an applicant for new service will be charged only for the costs relevant to its service connection, and any additional costs associated with sizing the connection for future loads be borne by the utility.
LGS expressed concern about the language in proposed Rule 27.A.1.d which states: "PG&E shall not be required to serve any Applicant from transmission facilities, or any other gas pipeline facilities not owned, operated, and maintained by PG&E." That language could allow PG&E to refuse to transport gas withdrawn from storage by customers of independent storage providers and delivered via their ancillary pipelines to PG&E's transmission system. When PG&E witness Haley testified, he clarified that PG&E does not intend to prevent third-party storage providers from serving their customers via third party storage pipelines that interconnect with PG&E's system, and would work with parties to remove any ambiguities in proposed Rule 27.
LGS suggests that the Commission should be clear in any decision approving the Rule 27 proposal, that PG&E must clarify the language of the rule so that the rule has no impact on the delivery of gas to PG&E's system for third party storage. If Rule 27 is approved, the Commission should order the parties to work together to clarify the language.
Mirant is concerned that Rule 27 places a disproportionate share of the cost of additions to serve other customers that come after the initial interconnection, on the interconnecting customer. Mirant believes that a more equitable assignment would be to assign the new customer a share of the costs proportional to the new customer's average use of the new system capacity.
Although PG&E proposes tariff language in Rule 27 to shield the interconnecting customer from having to bear the costs of separate or incremental facilities included in the interconnection project to serve other customers, this language does not address the concern of Mirant about the unfairness of the incremental cost assignment approach. PG&E's incremental approach assigns a share of costs to the interconnecting customer significantly in excess of that customer's share of prospective benefits. Mirant recommends that the Commission require PG&E to amend Rule 27 to give effect either to Mirant's proposal that new customers be assigned a share of costs proportional to the new customer's average use of the new system capacity, or the suggestion of CCC/Calpine that the interconnecting customer be given credit for the full costs PG&E avoids by having incremental capacity made available.
Mirant is also concerned about PG&E's discussion of risk allocation issues. PG&E proposes that Rule 27 apply to system reinforcements, as well as service extensions. PG&E witness Haley testified that Rule 15 uses a standard form agreement that includes a provision attesting that the customer's load justifies the reinforcement work involved, and that should the applicant's load not develop as intended, PG&E reserves the right to collect the cost of reinforcements that turn out not to be necessary. Under Rule 15, the customer is off the hook if the projected load is achieved, even if revenues are disappointing. However, under Rule 27, the customer is obligated to pay all of the costs of the reinforcement if the anticipated customer revenue from customer access and local transmission charges does not reach the projected level. Mirant asserts that this distinction is important because it supports the proposals of CCC/Calpine to waive the unrecovered balance if a sufficient proportion of anticipated revenue is achieved, to provide refunds if additional new customers take service from a new facility, and to include backbone rate revenue in the calculation of customer contributions.
NCGC points out that under proposed Rule 27, if PG&E's cost of constructing the connection or reinforcement cannot be supported by the forecasted local transmission and customer access charge revenues, the customer will be required to pay the difference.
NCGC asserts that the Rule 27 proposal is outside the scope of this proceeding. In addition, due to the slowdown in the construction of new electric generation plants, the rule is currently not necessary. NCGC also notes that connections and reinforcements have been installed in the past without Rule 27. NCGC recommends that PG&E's Rule 27 proposal be dismissed without prejudice, and that the issues regarding Rule 27 be resolved in a workshop.
If the Commission decides to adopt Rule 27, NCGC recommends that we revise proposed Rule 27 as recommended by CCC/Calpine and Mirant so that PG&E bears at least some of the risk of the new interconnection costs.
CCC/Calpine and Mirant also recommend other changes to Rule 27. These changes include the following: include a provision that parallels Rule 15.E to permit a customer to get a refund for the customer's contribution to a capacity addition if additional new customers take service from the new capacity; provide for an interconnecting customer to receive a credit based on the full costs that PG&E avoids as a result of installing an interconnection facility, not just the incremental cost; that the economic benefit test in Rule 27 be modified to recognize backbone revenues that result from interconnecting with a new customer, as well as local transmission revenues; and that Rule 27 be changed to include the cost of facilities that are needed to provide higher delivery pressures rather than continuing to treat such facilities as special facilities under Rule 2. NCGC supports all of these proposals.
ORA states that the ratepayer impacts of PG&E's proposed Rule 27 could not be properly reviewed and assessed within the time allotted, and that such a proposal should be deferred to a later proceeding.
If the Commission does not adopt the proposal for a backbone level rate, or does not adopt a higher load factor, Rule 27 should be changed to allow SMUD to credit some or all of its costs of constructing the SMUD gas pipeline system against PG&E's local transmission rates. SMUD contends this is proper because of the cost savings that PG&E's customers received as a result of SMUD building its own pipeline system to serve its gas-fired plants.
CCC/Calpine and others suggest that a new customer should only have to pay for system reinforcements net of system benefits, and that Rule 27 does not result in an equitable share of system upgrade costs. PG&E asserts that its proposed Rule 27 provides for an equitable allocation of system upgrade costs between the applicant and PG&E. When an applicant applies for service, PG&E's practice is to allocate only the costs of the applicant's interconnection to the applicant. But for the Rule 27 applicant, PG&E would not be making the interconnection. PG&E does not propose to deviate from that practice in proposed Gas Rule 27. If additional facilities are added by PG&E in conjunction with the upgrades required for an applicant, PG&E will pay those incremental costs. According to PG&E's witness Haley, although proposed Rule 27 tariff does not specify that the incremental costs would be at PG&E's expense, such language could be added to clarify the intent of Rule 27.
CCC/Calpine suggest modifying proposed Gas Rule 27 to allow for refunds to reduce a customer's "Unrecovered Balance" if additional customers take service from the interconnection facilities that PG&E installed for the applicant. PG&E recommends that this proposal be rejected as unrealistic. Based on the classification of customers that are covered by proposed Gas Rule 27, PG&E contends it is unlikely that another transmission-level customer will be able to be served from the pipe that PG&E installed to serve the applicant without additional reinforcement. In addition, tracking the other load would be cumbersome, and customers would gain little or no benefit. If sufficient excess capacity exists to accommodate other customers, that incremental capacity would be provided at PG&E's expense.
CCC/Calpine advocate changing Rule 27 to include, within a Transmission Facilities Connection, the cost of facilities to provide higher delivery pressure. PG&E is opposed to this suggestion. PG&E points out that facilities which are of special benefit to a single customer should continue to be treated as Special Facilities under Gas Rule 2, and that other ratepayers should not be expected to subsidize such facilities. Rule 2 only applies to the incremental cost increase between the volumetric design and the applicant's specific request for additional elevated pressure. To the extent special facilities are constructed, these costs are not eligible for revenue-based allowances under proposed Rule 27. PG&E's experience also shows that applicants generally select the option that provides them with the most economical delivery of elevated pressure. Instead of asking PG&E for additional elevated delivery pressure in excess of the prevailing transmission pressure already provided by PG&E in the volumetric design, the applicant may choose the option of installing compression equipment on their side of the meter to accommodate the pressure their equipment requires.
Proposed Rule 27 allows the applicant two options for connecting to PG&E's transmission system: (1) through PG&E-owned and maintained facilities from the interconnection point with PG&E gas transmission facilities to the service termination point, typically at the applicant's facility; or (2) by connecting to facilities the applicant builds, owns, and maintains, from their facility to PG&E's transmission facilities. SMUD recommends changing Rule 27 to allow for PG&E revenue credits against SMUD's costs of constructing its own gas pipeline interconnections with PG&E.
PG&E recommends that SMUD's proposal not be adopted. SMUD should not be given a PG&E revenue credit, when for a variety of business reasons, the party chooses to build, own and maintain its own pipeline facility. PG&E contends that providing such a credit would require remaining ratepayers to unfairly subsidize private ventures. The remaining ratepayers should not be held captive to pay for facilities that are not owned, operated, and maintained by the utility.
CCC/Calpine propose that the remaining Unrecovered Balance be waived if the customer is able to reduce the balance according to certain milestones. PG&E contends it should not be required to waive its right to collect the Unrecovered Balance for investments whose expected average service life for new transmission mains is 45 years. PG&E asserts that it is already assuming a certain level of risk under Rule 27, which limits the cost recovery guarantee period from the new customer to only ten years.
CCC/Calpine suggest that the credits against the Unrecovered Balance include contributions to the backbone and customer access charges, and that the contract that a customer executes under proposed Rule 27 be for backbone level service. PG&E asserts that proposed Rule 27 already adequately and reasonably handles this situation. PG&E points out that proposed Rule 27 contemplates and accommodates interconnections from all of PG&E's transmission systems, and it is highly unlikely that an individual generation facility will cause a need for reinforcements to the backbone. In the event that backbone reinforcement is required, PG&E and the applicant would likely need to negotiate a special agreement to allocate the costs of such reinforcement. Proposed Rule 27.E.3. allows for this possibility by providing a method for filing an exception to the tariff with the Commission for approval.
Where there are connections to the backbone, PG&E will still credit LT and CAC paid by the customer against the costs of the interconnection, and in the form of a contract developed for use with proposed Rule 27. PG&E contends that these credits reflect a reasonable and adequate amount of credit against the costs of the interconnections.
CCC/Calpine propose that backbone revenues be included in the customer credit. PG&E states this proposal should be rejected because the backbone capacity may not be held by an end-user, making it impossible to attribute revenues to that customer. Also, as new more efficient gas-fired electric generation is brought on line, it is likely it will displace older, less efficient generating facilities. Thus, the amount of backbone revenue attributed to the new facility may not be incremental, but actually decremental. Also PG&E has the obligation to serve, and it assumes the risk for costs associated with facilities not supported by revenue.
CCC/Calpine and Mirant object to proposed Gas Rule 27 because it removes from the utility any risk that PG&E will fail to recover the costs of its interconnection with a new electric generation customer. They contend that under Rule 15, PG&E has always borne some of the risk that a new distribution-level customer will remain on the PG&E system long enough to pay off its interconnection costs.
PG&E contends that CCC/Calpine and Mirant mischaracterize the risk allocation between the customer and PG&E under Rule 15 and proposed Rule 27. PG&E asserts that the risk allocation methodology is the same under Rule 15 and Rule 27. To the extent the applicant generates revenue, PG&E credits that revenue for both new and reinforced facilities. Under both rules, at the end of 10 years, if the customer does not generate revenue sufficient to cover the costs of facilities, the customer is liable for the balance between the costs of the interconnection and the revenue generated. PG&E asserts that it would be inequitable to encumber the remaining ratepayers with the risk that an individual customer will not generate enough revenue to support the costs of such interconnection, while allowing the customer to be the sole recipient of any reward if it does. As long as PG&E has an obligation to serve, it is reasonable to expect its investments to be supported by revenue.
CCC/Calpine witness Beach asserts that Rule 27 does not require PG&E to serve any pipelines that are not maintained or owned by PG&E, and that it allows only one pipeline to pipeline interconnection. Beach proposes that Gas Rule 27 be modified to allow for multiple interconnections with private pipelines. PG&E points out that under its proposed Gas Rule 27, PG&E would not be required to serve an applicant via a third-party owned section of pipe inserted between PG&E's interconnection point and its meter. The proposed gas rule does not state, as Beach asserts, that PG&E should not be required to serve any pipelines that are not owned or maintained by PG&E.
PG&E points out that Rule 27 is not intended for pipeline-to-pipeline interconnections, where there are no retail end-use gas customers to be served. Instead, the rule is applicable to all connections for permanent transmission-level service to PG&E's gas transmission system serving facilities that qualify for service under Schedule G-EG or Schedule G-NT. PG&E should not be made to serve its customers from privately owned pipelines inserted between PG&E's interconnection point and its metering facilities.
PG&E also contends that proposed Rule 27 allows only one connection per facility. If an applicant requests multiple gas transmission services to a single generation facility, the first service would be installed under proposed Rule 27, and the second or additional services would be installed under Gas Rule 2. Thus, CCC's proposal to modify Rule 27 to allow for multiple interconnections should be denied.
NCGC has proposed that Rule 27 be addressed in workshops before it is adopted. PG&E opposes this, and asserts that the rule should be adopted now; workshops have already been held, and NCGC participated in them.
PG&E acknowledges that although the number of proposed gas-fired electric generation plants that could use Rule 27 has gone down, that number could grow again in the future. Without Rule 27 in place, PG&E will have to use a patchwork of exceptional case provisions under other gas rules to accommodate the new gas-fired electric generation plants.
PG&E proposes the adoption of Gas Rule 27, which is set forth in Appendix 1 of Chapter 18 of Exhibit 1. Although PG&E held informal workshops to discuss the proposed rule with interested participants before the proposed tariff was submitted in this proceeding, as indicated in the positions of the parties, there are still a number of issues that the parties cannot agree upon.
PG&E and other parties have discussed a number of projects for interconnection at the transmission-level in recent years. PG&E witness Haley testified that during the last five years, there have been no exceptional case facilities agreements for transmission-level facilities. As of April 2003, there were approximately ten to fifteen requests for interconnection, several of which would fit more appropriately under Rule 27 rather than Rule 15. (RT 346-347.) This testimony is indicative of two things. First, that there has been a slowdown in new connections as a result of fewer gas-fired electric generation projects being pursued. Second, the projects that require interconnection to transmission-level service can or have used exceptional case agreements or the standard provisions of Gas Rules 2, 15 and 16.
Based on the issues that parties have with PG&E's proposed Rule 27, the reduction in the number of requests for transmission-level interconnections, and the existing ability to use exceptional case agreements or the standard provisions of Gas Rule 15 and others, there is no need to adopt PG&E's proposed Gas Rule 27 at this time.
PG&E's second interconnection proposal is to establish a new tariffed service to allow eligible off-system end users to connect directly to PG&E's backbone transmission service. In order to be eligible for this service, PG&E proposes that the end user meet both of the following tariff eligibility requirements:
"1. The customer does or can take pipeline delivery service directly from an interstate pipeline, a private pipeline, or an alternative fuel source, and such service does not in any way depend on services being provided by another CPUC-regulated Local Distribution Company (LDC), even if the customer still maintains a connection to that utility's facilities. If the customer is a new customer and the interstate or private service connections do not currently exist, the customer must verify through a legal declaration that such connections would be made, and service would not be provided by a California LDC, absent a connection to PG&E's transmission system; and
"2. The customer builds and is responsible for, maintaining the necessary facilities at the customer's cost to interconnect to the PG&E backbone transmission system, and to provide or pay for the meter set and other necessary special facilities charges. Connections to these customers will be done under the provisions of PG&E's Gas Rule 2, or another similar agreement." (Ex. 1, p. 18-7.)
Under PG&E's proposal, the off-system direct connect customer will be allowed to use other PG&E services, such as monthly balancing, subject to the specific terms and conditions of those services. The off-system direct connect customers will be required to sign an agreement specifying the terms of service. In addition, a customer-specific monthly interconnection charge will be developed and assessed based on the ongoing costs to maintain the meter and interconnection.
PG&E did not submit a sample tariff for its off-system direct connect proposal. No one objected to PG&E's second proposal, and no cross-examination on this proposal took place.
As the starting point for our analysis of this proposal, we turn to D.94-02-042 (53 CPUC2d 215) and D.94-12-061 (58 CPUC2d 440). In those decisions, we discussed the issue of direct connection to the Line 401 expansion project. In D.94-02-042, we prohibited the direct connection of customers to Line 401, except at Kern River Station. (53 CPUC2d at 245.) Petitions for modification of D.94-02-042 were filed, and the topic of direct connection was the subject of a workshop and comments in that Line 401 proceeding. (58 CPUC2d at 448.) In D.94-12-061, we authorized the direct connection to Line 401 where the customers' loads are incremental to current and future original system loads through the use of the EDCD application procedure. (58 CPUC2d at 443, 448.)
PG&E's off-system direct connect proposal must be clarified in two respects. The first clarification is that PG&E's proposal refers to an off-system customer being able to request a direct connection to "any portion of PG&E's transmission system." (Ex. 1, p. 18-7, emphasis added.) However, elsewhere in Chapter 18 of Exhibit 1, PG&E's proposal refers only to an off-system direct connect to PG&E's backbone transmission service. We clarify for the purposes of this decision that PG&E's proposal is only for off-system end users to directly connect to any portion of PG&E's backbone transmission system.
The second clarification is if this proposal is adopted, D.94-12-061 will be affected to some extent. Under D.94-12-061, both on-system and off-system users who want to directly connect to Line 401 must follow the EDCD procedure. If PG&E's tariff proposal is adopted, off-system end users who want to directly connect to any of PG&E's backbone facilities would no longer have to use the EDCD as provided for in D.94-12-061. (See 58 CPUC2d at 461, App., § 1. Eligibility.) However, new or existing loads located on-system, who seek to direct connect to PG&E's Line 401 at locations other than Kern River Station, will still be required to use the EDCD procedure. In addition, new or existing loads located on-system, who seek to direct connect to other PG&E backbone transmission lines other than Line 401 are prohibited from doing so unless another Commission decision has authorized such a connection.
In D.94-12-061, the criteria for approving a direct connection to Line 401 were developed. Those criteria consist of the following: the connecting customer and its load is incremental under the definition in D.94-02-042, as further explained in D.94-12-061; the direct connection cannot displace present or future original system loads; and original system ratepayers must not lose the opportunity to serve future loads that would be served by PG&E's original system if Line 401 did not exist. Under PG&E's proposed eligibility requirements for this off-system direct connect service tariff, these criteria are met. Under the first eligibility requirement that PG&E proposes, the end user must or can be served from an interstate pipeline, a private pipeline, or an alternative fuel source, and such service cannot depend in any way on services provided by another Commission-regulated gas utility. Under the first eligibility requirement, the off-system end user's load is incremental because the other gas utility, which is most likely to be SoCalGas, must not be providing any of the services from which the off-system end user is receiving its natural gas or alternative fuel. In addition, this incremental load is not displacing any present or future loads, and original system ratepayers are not losing an opportunity to serve future load since the load is coming from off-system.
The second eligibility requirement that PG&E proposes ensures that the off-system end user must pay for the interconnection facilities, the meter set, and other necessary special facilities charges.
We authorize PG&E to file a tariff via an advice letter filing which offers off-system end users the ability to directly connect to all of PG&E's backbone transmission facilities. Such a tariff filing shall be consistent with the above discussion. End users who are within the service territory of PG&E who want to directly connect to Line 401 may continue to do so as provided for in D.94-02-042 and D.94-12-061. | http://docs.cpuc.ca.gov/published/COMMENT_DECISION/32148-16.htm | 2014-04-16T16:14:56 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.cpuc.ca.gov |
User Guide
Local Navigation
- Quick Help
- Shortcuts
- Phone
- Voice commands
- Messages
- Files
- Media
- Ring tones, sounds, and alerts
- Browser
- Browser basics
- Browsing web pages
- Manage Connections
- Bluetooth technology
- Power and battery
- Memory
Viewing, copying, and forwarding web addresses
- View the address for a web page
- Copy an address for a web page, link, or picture
- Send a web address
- Send a link or picture from a web page
Easing functions provide a simple way of interpolating between two values to achieve varied animations. They are used in conjunction with the transition library.
The 42 easing methods included with Corona SDK are based on Robert Penner's easing functions.
transition.to( target, { transition=easing.outExpo } )
transition.from( target, { transition=easing.inOutCirc } )
This is the default interpolation and will be used unless another easing method is defined.
This easing function will tween an object to its target state and then reverse back to the initial state (interpolation is linear). | http://docs.coronalabs.com/api/library/easing/index.html | 2014-04-16T17:00:49 | CC-MAIN-2014-15 | 1397609524259.30 | [] | docs.coronalabs.com |
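For intuition, an easing function maps normalized time t in [0, 1] to a progress fraction, which the transition library then uses to interpolate between the start and end values. A minimal sketch of that idea, in Python rather than Corona's Lua and purely for illustration (this is not Corona's actual implementation):

```python
def linear(t):
    """Default easing: progress equals normalized time."""
    return t

def out_expo(t):
    """Exponential ease-out (after Penner): fast start, slow finish."""
    return 1.0 if t >= 1.0 else 1.0 - 2.0 ** (-10.0 * t)

def interpolate(start, end, t, easing=linear):
    """Tween a value from start to end using the given easing function."""
    return start + (end - start) * easing(t)

# Tweening x from 0 to 100: out_expo covers most of the distance
# early, while linear progresses uniformly.
midpoint_linear = interpolate(0, 100, 0.5)            # 50.0
midpoint_expo = interpolate(0, 100, 0.5, out_expo)    # ~96.9
```

The same shape applies to the other 41 easing methods; only the mapping from t to progress changes.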
Git guide
Topics
- Getting started with Git: if you have never played with Git before, this is for you!
- Git IzPack workflow: how one can use Git to work on IzPack.
IzPack Git repositories
The "blessed" IzPack repository is available from
- Anonymous access:
- Developer access:
- ssh://[email protected]/izpack.git (you will need to upload an SSH DSA2 public key to your account details).
The following is a list of official IzPack developers forks:
- (Julien Ponge, project leader, project founder and tyrannic despot)
- (Anthonin Bonnefoy, project despot and hazardous merges master)
- (David Duponchel, project developer)
The following is a list of forks from various people around the globe (do not hesitate to add yourself here):
- (Said SAID EL IMAM)
- add yourself here!
Funny tip if you are a Subversion fanatic: GitHub repositories can be accessed read-only from Subversion.
Historical considerations
A complete Git conversion of the old Subversion repository can be found at (keep in mind that because it is based on the full repository, there is no branch or tag information, just a linear branch which maps the Subversion revisions).
IzPack first used CVS, then switched in 2004 to Subversion. It switched again, this time to Git, in 2010.
import org.springframework.batch.item.file.mapping.FieldSetMapper;
import org.springframework.batch.item.file.transform.LineTokenizer;

/**
 * Interface for mapping lines (strings) to domain objects, typically used to map
 * lines read from a file to domain objects on a per-line basis. Implementations
 * of this interface perform the actual work of parsing a line without having to
 * deal with how the line was obtained.
 *
 * @author Robert Kasanicky
 * @param <T> type of the domain object
 * @see FieldSetMapper
 * @see LineTokenizer
 * @since 2.0
 */
public interface LineMapper<T> {

    /**
     * Implementations must implement this method to map the provided line to
     * the parameter type T. The line number represents the number of lines
     * into a file the current line resides.
     *
     * @param line to be mapped
     * @param lineNumber of the current line
     * @return mapped object of type T
     * @throws Exception if an error occurred while parsing.
     */
    T mapLine(String line, int lineNumber) throws Exception;
}
2) Request membership. If you want a JIRA assignment, ping by email (t o m ATal fres co dot c o m) as I don't get notified automatically yet.
3) Signing a contributor agreement
4).
5) If you want suggestions for contributions, check the issues assigned to the 'Contributable' release | http://docs.codehaus.org/pages/viewpage.action?pageId=163872771&focusedCommentId=210469238 | 2014-04-16T16:10:16 | CC-MAIN-2014-15 | 1397609524259.30 | [array(['/s/en_GB-1988229788/4727/ffd10e10ff7bc0a1d7e29b4d2225707dd7f03d0b.15/_/images/icons/emoticons/smile.png',
'(smile)'], dtype=object) ] | docs.codehaus.org |
Introduction
SOAP is a lightweight protocol intended for exchanging structured information in a decentralized, distributed environment. Groovy has a SOAP implementation based on Xfire, which allows you to create a SOAP server and to make calls on remote SOAP servers.
Installation
You just need to download this jar file into your ${user.home}/.groovy/lib directory.
Example!
The Client
- Oh ... you want to test it ... two more lines.
- You're done!
More Information
Current limitations (and workaround)
- No authentication (see JIRA issue 1457)
- No proxy support (see JIRA issue 1458)
None], dtype=object) ] | docs.codehaus.org |
Setting User Access Control and Data Execution Prevention
In some cases, UAC and DEP settings might need to be modified.
To ensure smooth operation on machines that run Windows Vista and above, follow these steps:
- Check whether User Account Control (UAC) is turned on for the computer that is running Automation Anywhere:
- On the Windows desktop, select Start→Control Panel→User Accounts→Change User Account Control Settings.
- Set Never Notify.
- Add Automation Anywhere to the list of exceptions under Data Execution Prevention (DEP).
- On the Windows desktop, select Start→Control Panel→System→Advanced System Settings.
- On the Advanced tab, click the Settings button.
- Click the Data Execution Prevention tab and select the option Turn on DEP for all programs and services except those I select.
- Click the Add button, and add Automation Anywhere.exe (in the Program Files (x86)\Automation Anywhere folder) to the list.
- Click Apply and then click OK.
- Reboot the computer to ensure that the new settings take effect. | https://docs.automationanywhere.com/de-DE/bundle/enterprise-v11.3/page/enterprise/topics/aae-client/bot-creator/customizing-an-automation-client/setting-user-access-control.html | 2022-01-16T22:55:49 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.automationanywhere.com |
Matlab on Farber
For use on Farber, MATLAB projects should be developed using a desktop installation of MATLAB and then copied to Farber to be run in batch. Here an extended MATLAB example is considered, involving one simple MATLAB function and two MATLAB scripts: one to execute the function in a loop, and another to execute it in parallel using the Parallel Computing Toolbox.

Two interactive jobs are demonstrated. One shows how to test the function by executing it a single time. A second shows an interactive session that starts a MATLAB pool of workers to execute the function as a Parallel Computing Toolbox loop,
parfor. The Parallel toolbox gives a faster time to completion, but with more memory and CPU resources consumed.
You can run MATLAB as a desktop (GUI) application on Farber, but this is not recommended, as the graphics are slow to display, especially over a slower network connection.
Many MATLAB research projects fall in.
Matlab License Information for Grid Engine
Matlab licenses are pushed into consumable (global, per-job) integer complexes in Grid Engine and can be checked using
qhost -h global -F
to list number of unused license seats for each product.
Below is an example representing a snapshot of unused licensed seats for Matlab products on the cluster.
[traine@mills ~]$ qhost -h global -F HOSTNAME ARCH NCPU LOAD MEMTOT MEMUSE SWAPTO SWAPUS ------------------------------------------------------------------------------- global - - - - - - - gc:MLM.Compiler=50.000000 gc:MLM.Aerospace_Blockset=1.000000 gc:MLM.RTW_Embedded_Coder=2.000000 gc:MLM.Robust_Toolbox=150.000000 gc:MLM.Aerospace_Toolbox=1.000000 gc:MLM.Identification_Toolbox=50.000000 gc:MLM.XPC_Target=2.000000 gc:MLM.Econometrics_Toolbox=1.000000 gc:MLM.Real-Time_Workshop=2.000000 gc:MLM.Fuzzy_Toolbox=50.000000 gc:MLM.Video_and_Image_Blockset=1.000000 gc:MLM.Neural_Network_Toolbox=50.000000 gc:MLM.Fin_Instruments_Toolbox=1.000000 gc:MLM.Optimization_Toolbox=44.000000 gc:MLM.MATLAB_Coder=2.000000 gc:MLM.MATLAB=204.000000 gc:MLM.Database_Toolbox=1.000000 gc:MLM.SIMULINK=100.000000 gc:MLM.PDE_Toolbox=48.000000 gc:MLM.GADS_Toolbox=1.000000 gc:MLM.Symbolic_Toolbox=46.000000 gc:MLM.Signal_Toolbox=146.000000 gc:MLM.Financial_Toolbox=1.000000 gc:MLM.Data_Acq_Toolbox=2.000000 gc:MLM.Image_Acquisition_Toolbox=1.000000 gc:MLM.Curve_Fitting_Toolbox=9.000000 gc:MLM.Image_Toolbox=143.000000 gc:MLM.Distrib_Computing_Toolbox=48.000000 gc:MLM.OPC_Toolbox=1.000000 gc:MLM.MPC_Toolbox=50.000000 gc:MLM.Virtual_Reality_Toolbox=1.000000 gc:MLM.Statistics_Toolbox=43.000000 gc:MLM.Signal_Blocks=50.000000 gc:MLM.Instr_Control_Toolbox=2.000000 gc:MLM.MAP_Toolbox=12.000000 gc:MLM.Communication_Toolbox=50.000000 gc:MLM.Control_Toolbox=150.000000 gc:MLM.Wavelet_Toolbox=1.000000 gc:MLM.Bioinformatics_Toolbox=1.000000 gc:MLM.Simulink_Control_Design=50.000000 gc:MLM.Real-Time_Win_Target=1.000000
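The `gc:` lines in this output are machine-parsable; a hypothetical helper (a Python sketch for illustration, not part of the cluster tooling) could extract the free seat count per product from such output before deciding what to submit:

```python
import re

def parse_license_seats(qhost_output):
    """Extract unused seat counts from `qhost -h global -F` output.

    Lines look like: gc:MLM.Image_Toolbox=143.000000
    Returns a dict mapping product name -> free seats (int).
    """
    seats = {}
    for match in re.finditer(r"gc:MLM\.([\w-]+)=([\d.]+)", qhost_output):
        seats[match.group(1)] = int(float(match.group(2)))
    return seats

sample = "gc:MLM.MATLAB=204.000000 gc:MLM.Image_Toolbox=143.000000"
print(parse_license_seats(sample))  # {'MATLAB': 204, 'Image_Toolbox': 143}
```

A wrapper like this could, for example, warn before submitting a job that requests more seats of a toolbox than are currently unused.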
Matlab jobs can be submitted to require a certain number of license seats to be available before a job will run. If there are inter-license dependencies for toolboxes, then you should specify all the licenses including Matlab and/or Simulink.
For example, if a Matlab job requires the Financial toolbox, then you will also need to specify all the inter-related toolbox licenses required by the Financial toolbox, such as the Statistics and Optimization toolboxes, as well as Matlab itself. See Mathworks System Requirements & Platform Availability by Product for complete details.
qsub -l MLM.MATLAB=1,MLM.Financial_Toolbox=1,MLM.Statistics_Toolbox=1,MLM.Optimization_Toolbox=1 ...
Naturally, this isn't a to-the-moment mapping because the license server is not being queried constantly. However, it's consumable, so it is keeping track of how many seats are unused every 6 minutes.
This will be most helpful when submitting many Matlab jobs that require a toolbox with a low seat count. They will wait for a toolbox seat to become available rather than trying to run and failing with the "License checkout failed" message from MATLAB.
Matlab function
We will use this sample MATLAB function to illustrate using MATLAB in batch and interactively. The function will be executed interactively on multiple cores using multiple computational threads, and with 12 workers from a MATLAB pool. A MATLAB script will be run in batch to loop with multiple computational threads, and again with a MATLAB pool. Finally, it will be compiled and deployed using the Matlab Compiler Runtime (MCR) environment.
isreal, but it is useless for selecting the real values from a complex array.
First, write a Matlab script file. Its output will display on the screen (standard out, in batch).
[compThreads,count]=sscanf(getenv('NSLOTS'),'%d'); if count == 1 warning('off','MATLAB:maxNumCompThreads:Deprecated'); autoCompThreads = maxNumCompThreads(compThreads); disp(sprintf('NumCompThreads=%d, was %d',compThreads,autoCompThreads)) end
See Setting maximum number of computational threads
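The snippet above reads the scheduler-provided NSLOTS environment variable and caps MATLAB's thread count accordingly. The same guard pattern, sketched in Python for comparison (illustrative only; the Farber setup uses the MATLAB code above):

```python
import os

def scheduler_slots(default=1):
    """Mimic sscanf(getenv('NSLOTS'), '%d'): parse the slot count
    granted by Grid Engine, falling back when unset or malformed."""
    value = os.environ.get("NSLOTS", "")
    try:
        return int(value)
    except ValueError:
        return default

# A job submitted with `-pe threads 4` would see NSLOTS=4, and a
# worker pool should be sized to match rather than oversubscribing.
os.environ["NSLOTS"] = "4"
print(scheduler_slots())  # 4
```

The point in both languages is the same: size your parallelism to what the scheduler granted, not to what the node physically has.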
Ending the session from a job script file is the same as exiting the window, which is the preferred way to exit the MATLAB GUI.
Copy the project folder
Copy the project folder to a directory on the cluster. Use any file transfer client to copy your project, including the queue script file and the script output file. You may combine them, and even use the MATLAB editor to create the script file and look at the output file. If you create the files on a PC, take care not to transfer them as binary. See Transfer Files for the appropriate cluster.

Grid Engine job template files are provided in /opt/shared/templates/gridengine; for example, to submit a serial job on one core of a compute node, copy the serial template.
In your copy change the commented
vpkg_require command to
require MATLAB, and then add your shell commands to the end of the file. Your copy may contain the lines:
# Add vpkg_require commands after this line: vpkg_require matlab # Now append all of your shell commands necessary to run your program # after this line: cd project_directory matlab -nodisplay -singleCompThread -r main_script
The project_directory should have a file named main_script.m containing your script. It could have just one line: display 'Hello World'.
Submit batch job
Your shell must be in a workgroup environment to submit any jobs. Use the qsub command to submit a batch job and note the «JOBID» that is assigned to your job. For example, if your queue script file name is matlab_first.qs, submit the job with:
qsub matlab_first.qs
If you are not in a workgroup, you will get an error message when you submit; choose a workgroup with the workgroup command.
It is true that a queue script file is (usually) a bash script, but it must be executed with the qsub command instead of the sh command. This way the Grid Engine commands will be processed, and the job will be run on a compute node.
Wait for job to complete
You can check on the status of your job with the qstat command. For example, to list the information for job «JOBID», type:
qstat -j <<JOBID>>
For long running jobs, you could change your queue script to notify you via an e-mail message when the job is complete.
Post process job
All MATLAB output data files will be in the project directory, but the MATLAB standard output will be in the current directory, from which you submitted the job. Look for a file name ending in your assigned JOBID.

Your shell must be in a workgroup environment to submit an interactive job using qlogin.
qlogin vpkg_require matlab cd project_directory matlab -nodesktop -singleCompThread
This will start an interactive command-line session in your terminal window. When done, type quit or exit to terminate the MATLAB session, and then exit to terminate the qlogin session.
Desktop
You should be on a compute node before you start MATLAB. To start a MATLAB desktop (GUI mode) on a cluster, you must be running an X11 server and you must have connected using X11 tunneling.
Your shell must be in a workgroup environment to submit a job using qlogin.
qlogin -l exclusive=1 vpkg_require matlab cd project_directory matlab
This will start an interactive desktop session on your X11 screen. When done, type quit or exit in the command window, or just close the window. When back at the terminal bash prompt, type exit to terminate the qlogin session.
See tips on starting Matlab in an interactive session without the desktop, including executing a script.
There is an example MCR project in the
/opt/shared/templates/ directory for you to copy and try. Copy on the head node and qlogin to compile with MATLAB. Once your program is compiled you can run it interactively or in batch, without needing a MATLAB license.
Copy dev-projects template
On the head node
cp -r /opt/shared/templates/dev-projects/MCR . cd MCR
Compile with make
Now compile on the compute node by starting an interactive session and running make:

qlogin
make
Resulting output from the make command:
Adding package `mcr/r2014b-nojvm` to your environment make[1]: Entering directory `/home/work/it_css/traine/matlab/MCR' mcc -o maxEig -R "-nojvm,-nodesktop,-singleCompThread" -mv maxEig.m Compiler version: 5.2 (R2014b) Dependency analysis by REQUIREMENTS. Parsing file "/home/work/it_css/traine/matlab/MCR/maxEig.m" (Referenced from: "Compiler Command Line"). Deleting 0 temporary MEX authorization files. Generating file "/home/work/it_css/traine/matlab/MCR/readme.txt". Generating file "run_maxEig.sh". make[1]: Leaving directory `/home/work/it_css/traine/matlab/MCR'
Take note of the package added and the files that are generated. You can remove the generated readme.txt and run_maxEig.sh files, as they are not needed. You must add the package in your batch script, or when you test interactively.
test interactively
To test interactively on the same compute node.
vpkg_require mcr/r2014b-nojvm time ./maxEig 20.8
back to the head node
When done, exit the compute node.
exit
Copy array job example
cp /opt/shared/templates/gridengine/matlab-mcr.qs . vi matlab-mcr.qs diff /opt/shared/templates/gridengine/matlab-mcr.qs matlab-mcr.qs
The
diff output shows changes made in the
vi session:
46c46 < # -l m_mem_free=5G --- > #$ -l m_mem_free=3G 51c51 < # -t 1-4 --- > #$ -t 1-100 63c63,64 < vpkg_require mcr/r2014b-nojvm --- > vpkg_require mcr/r2015a-nojvm > let lambda="$SGE_TASK_ID-1" 79c80 < MCR_EXECUTABLE_FLAGS=("$RANDOM") --- > MCR_EXECUTABLE_FLAGS=("$lambda")
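The key change in this diff is mapping each array task to a parameter: Grid Engine numbers the tasks 1 to 100, and `let lambda="$SGE_TASK_ID-1"` shifts them to 0 to 99 before passing `$lambda` to the executable. That mapping can be sketched as (illustrative Python, not part of the queue script):

```python
def task_to_lambda(sge_task_id):
    """Grid Engine numbers array tasks from 1; the executable's
    parameter sweep starts at 0, so shift by one."""
    return sge_task_id - 1

# Tasks 1..100 cover lambda values 0..99, one per output file.
params = [task_to_lambda(t) for t in range(1, 101)]
print(params[0], params[-1], len(params))  # 0 99 100
```

This off-by-one shift is the usual way to drive a zero-based parameter sweep from a one-based `-t 1-N` array job.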
To submit a standby array job that has 100 tasks.
qsub -l standby=1 matlab-mcr.qs
Example
[(it_css:traine)@farber MCR]$ qsub -l standby=1 matlab-mcr.qs Your job-array 627074.1-100:1 ("matlab-mcr.qs") has been submitted [(it_css:traine)@farber MCR]$ date Mon Apr 11 14:56:26 EDT 2016 [(it_css:traine)@farber MCR]$ date Mon Apr 11 15:17:33 EDT 2016 [(it_css:traine)@farber MCR]$ ls -l matlab-mcr.qs.o627074.* | wc -l 100
There are 100 output files with the names matlab-mcr.qs.o627074.1 to matlab-mcr.qs.o627074.100. For example, file 50:
[CGROUPS] UD Grid Engine cgroup setup commencing [CGROUPS] Setting 3221225472 bytes (vmem none bytes) on n106 (master) [CGROUPS] with 1 core = [CGROUPS] done. Adding package `mcr/r2015a-nojvm` to your environment GridEngine parameters: MCR_ROOT = /opt/shared/matlab/r2015a MCR executable = /home/work/it_css/traine/matlab/MCR/maxEig flags = 49 MCR_CACHE_ROOT = /tmp/627074.50.standby.q -- begin maxEig run -- maxe = 5.0243e+03 -- end maxEig run --
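The maxEig source itself is not reproduced here, but the computation its name suggests, the largest eigenvalue of a matrix, can be estimated by power iteration. A pure-Python sketch of that idea (an illustration under that assumption, not the Farber program):

```python
def max_eigenvalue(A, iterations=200):
    """Estimate the dominant eigenvalue of square matrix A
    (given as a list of rows) by power iteration."""
    n = len(A)
    v = [1.0] * n
    eig = 0.0
    for _ in range(iterations):
        # Multiply: w = A @ v, then renormalize by the largest entry.
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = max(abs(x) for x in w)
        eig = norm
        v = [x / norm for x in w]
    return eig

# [[2,1],[1,2]] has eigenvalues 3 and 1; the iteration converges to 3.
print(round(max_eigenvalue([[2.0, 1.0], [1.0, 2.0]]), 6))  # 3.0
```

In MATLAB the same result is a one-liner, max(eig(A)); the sketch only shows what each array task is spending its time on.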
Compiling your code to use MATLAB engine
There is an simple example function
fengdemo.F coded in Fortran, you can copy and use as a starting point.
On the head node and in a workgroup shell:
vpkg_require matlab/r2015a gcc/4.9 cp $MATLABROOT/extern/examples/eng_mat/fengdemo.F . export LD_LIBRARY_PATH=$MATLABROOT/bin/glnxa64:$MATLABROOT/sys/os/glnx64:$LD_LIBRARY_PATH mex -client engine fengdemo.F
To start MATLAB on a compute node to test this new program:
qlogin vpkg_require matlab/r2015a gcc/4.9 export LD_LIBRARY_PATH=$MATLABROOT/bin/glnxa64:$MATLABROOT/sys/os/glnx64:$LD_LIBRARY_PATH ./fengdemo exit
Step one of the fengdemo should display a plot.
Step two should give the table:
MATLAB computed the following distances: time(s) distance(m) 1.00 -4.90 2.00 -19.6 3.00 -44.1 4.00 -78.4 5.00 -123. 6.00 -176. 7.00 -240. 8.00 -314. 9.00 -397. 10.0 -490.
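The distances in this table follow the free-fall relation d(t) = -g*t^2/2 with g = 9.8 m/s^2, which appears to be what the demo evaluates inside the MATLAB engine. A quick Python check of the printed values (for verification only):

```python
def distance(t, g=9.8):
    """Free-fall displacement d(t) = -g*t**2/2, matching the table."""
    return -0.5 * g * t ** 2

# Reproduce the table: roughly -4.9 m at t=1 s through -490 m at t=10 s.
for t in range(1, 11):
    print(f"{t:5.2f} {distance(t):10.4g}")
```

Matching the demo's output against a closed-form check like this is a cheap sanity test after porting or recompiling engine code.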
Compiling your own MATLAB function
There is a simple example function, timestwo.c, coded in C, that you can copy and use as a starting point.
On the head node and in a workgroup shell:
vpkg_require matlab/r2015a gcc/4.9 cp $MATLABROOT/extern/examples/refbook/timestwo.c . mex timestwo.c
To start MATLAB on a compute node to test this new function:
qlogin vpkg_require matlab/r2015a gcc/4.9 matlab -nodesktop timestwo(4) quit exit
You should get the answer
>> timestwo(4) ans = 8 >>
Batch job serial example
Second, write a shell script file to set the MATLAB environment and start MATLAB running your script file. The following queue script file sets the MATLAB environment and runs the commands in the script.m file:
- batch.qs
#$ -N script.m #$ -m eas #$ -M [email protected] #$ -l exclusive=1 vpkg_require matlab/r2014b matlab -nodisplay -nojvm -r script
The
-nodisplay option indicates no X11 graphics, which implies
-nosplash -nodesktop. The
-nojvm option indicates no Java. (Java is needed for some functions, e.g., printing graphics, but should be excluded for most computational jobs.)
The
-r option is followed by a Matlab command, enclosed in quotes when there are spaces in the command.
-l exclusive=1 tells the scheduler to wait until your job can get exclusive access to the node. Since your job is the only job on the node, it can use all the memory and all the cores. Matlab assumes you want to use the full node to run as fast as possible. The goal is to reduce real time (wall clock time), not CPU time. When you use exclusive you should monitor the job to see the average core count and the maximum memory usage. With hindsight, this job should have used:
#$ -pe threads 5 #$ -l m_mem_free=1G
If everyone in your group carefully set these values, multiply jobs can run concurrently on the node.
See Setting maximum number of computational threads both
script.m and
batch.qs, submit the batch job with the command:
qsub batch.qs
In this example you will only need a license for the base Matlab, and the parallel toolbox needs one license. We are using the default local scheduler which will give you workers on the same node with one license.
Toolbox dependencies
You should include toolbox dependencies in your batch script too to help avoid a failure, which will occur if the job starts with no licenses available.
For example, the Bioinformatics toolbox only has one seat, and in addition it requires the Statistics and Machine Learning toolbox, as well as the core MATLAB. So you would add the line:
#$ -l MLM.MATLAB=1,MLM.Statistics_Toolbox=1,MLM.Bioinformatics_Toolbox=1
to your job script.
Wait for completion
Finally, wait for the mail notification, which will be sent to
[email protected]. When the job is done the output from the Matlab command will be in a file with the pattern -
script.m.oJOBID, where JOBID is the number assigned to your job.
After waiting for about 2 1/2 hours, a message was receive with subject line “Grid Engine Job Scheduler”:
Job 2362 (script.m) Complete User = traine Queue = it_css.q@n038 Host = n038.farber.hpc.udel.edu Start Time = 10/21/2014 14:45:42.100 End Time = 10/21/2014 17:09:24.782 User Time = 12:41:56 System Time = 00:11:31 Wallclock Time = 02:23:42 CPU = 12:53:27 Max vmem = 3.924G Exit Status = 0
Gather results
The results for Job 2362 are in the file
- script.m.o2362
[CGROUPS] No /cgroup/memory/UGE/2362.1 exists for this job [CGROUPS] UD Grid Engine cgroup setup commencing [CGROUPS] Setting none bytes (vmem none bytes) on n038 (master) [CGROUPS] with 20 cores = 0-19 [CGROUPS] done. Adding package `matlab/r2014b` to your environment <. maxe = 70.0220 ... //Skipping 198 similar displays of variable maxe// maxe = 67.4221 Elapsed time is 8618.393954 seconds. avgMaxEig = 69.5131
Timings and core count
Consider a batch job run with the two Grid Engine options:
-pe threads 5 -l m_mem_free=1G
The
qsub command will give you the job id, and once it starts running, the
qstat command will give you the node you are running on -
n=n038. After about 10 minutes of running:
$ ssh $n ps -eo pid,ruser,pcpu,pmem,thcount,stime,time,command | egrep '(COMMAND|matlab)' PID RUSER %CPU %MEM THCNT STIME TIME COMMAND 29207 traine 180 0.8 10 11:00 00:09:40 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm
This
ps command will give the percent CPU, which is
>100% for multi-core jobs, the percent memory, the thread count, which is > 5, the start time, the time of executions, and finally the full command used the start the job.
Given the reported PID, 29207, you can drill down and see which of the 10 threads are consuming CPU time:
$ ssh $n ps -eLf | egrep '(PID|2907)' | grep -v ' 0 ' UID PID PPID LWP C NLWP STIME TTY TIME CMD traine 29207 29076 29257 99 10 11:00 ? 00:06:55 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm traine 29207 29076 29264 22 10 11:00 ? 00:01:31 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm traine 29207 29076 29265 22 10 11:00 ? 00:01:33 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm traine 29207 29076 29266 22 10 11:00 ? 00:01:31 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm traine 29207 29076 29267 22 10 11:00 ? 00:01:32 /home/software/matlab/r2014b/bin/glnxa64/MATLAB -nodisplay -r script -nojvm
While the batch job was running on node
n=n038, the top command was run to sample the resources being used by Matlab
every second two times
-b -n 1. This
-H option was used to display each individual threads, rather than a summery of all threads in a process.
$ ssh $n top -H -b -n 1 | egrep '(COMMAND|MATLAB)' | grep -v 'S 0' PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 29257 traine 20 0 1698m 577m 73m R 99.5 0.9 112:27.82 MATLAB 29266 traine 20 0 1698m 577m 73m S 7.8 0.9 30:24.10 MATLAB 29264 traine 20 0 1698m 577m 73m S 5.9 0.9 30:24.49 MATLAB 29265 traine 20 0 1698m 577m 73m S 5.9 0.9 30:24.63 MATLAB 29267 traine 20 0 1698m 577m 73m S 5.9 0.9 30:27.43 MATLAB 29263 traine 20 0 1698m 577m 73m S 2.0 0.9 1:15.25 MATLAB
using the the PID of
$ ssh $n mpstat -P ALL 1 2 Linux 2.6.32-431.23.3.el6.x86_64 (n038) 04/28/2015 _x86_64_ (20 CPU) Average: CPU %usr %nice %sys %iowait %irq %soft %steal %guest %idle Average: all 7.06 0.00 0.18 0.00 0.00 0.00 0.00 0.00 92.77 Average: 0 100.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 Average: 1 10.66 0.00 0.00 0.00 0.00 0.00 0.00 0.00 89.34 Average: 2 10.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 90.00 Average: 3 9.55 0.00 0.50 0.00 0.00 0.00 0.00 0.00 89.95 Average: 4 10.45 0.00 0.00 0.00 0.00 0.00 0.00 0.00 89.55 Average: 5 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 6 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 7 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.50 Average: 8 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 9 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 10 0.00 0.00 0.50 0.00 0.00 0.00 0.00 0.00 99.50 Average: 11 0.50 0.00 2.00 0.00 0.00 0.00 0.00 0.00 97.50 Average: 12 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 13 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 14 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 15 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 16 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 17 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 18 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00 Average: 19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
qhost -h $n HOSTNAME ARCH NCPU NSOC NCOR NTHR NLOAD MEMTOT MEMUSE SWAPTO SWAPUS ---------------------------------------------------------------------------------------------- global - - - - - - - - - - n038 lx-amd64 20 2 20 20 0.10 63.0G 11.4G 2.0G 18.3M
After the job is done you can use
qacct to get a recap of resources used:
$ qacct -h n038 -j 64501 | egrep '(maxvmem|maxrss|cpu|wallclock)' ru_wallclock 9088.920 ru_maxrss 591764 cpu 18986.828 maxvmem 1.673G
Batch parallel example
The Matlab parallel toolbox uses JVM to manage the workers and communicate while you are running. You
need to setup the Matlab pools in your
script.
Matlab parallel script
Here are the slightly modified MATLAB script.
Add two
parpool commands and change
for ⇒
parfor.
Grid engine parallel script
Take out
-nojvm, which is needed for the parpool, and require the distributed computing toolbox.
Timing results
Reported usage for same job run using the parallel toolbox.
JJob 618746 (pscript) Complete User = traine Queue = spillover.q@n010 Host = n010.farber.hpc.udel.edu Start Time = 03/31/2016 11:01:53.776 End Time = 03/31/2016 11:21:28.937 User Time = 06:02:34 System Time = 00:01:00 Wallclock Time = 00:19:35 CPU = 06:03:35 Max vmem = 80.513G Exit Status = 0
Compare script vs pscript
The job script used more CPU resources with the multiple computational threads, while pscript user more memory resources with 20 single-threaded worker.
Interactive example
The basic steps to running a MATLAB interactively on a compute node.
This demo starts in your MATLAB directory and with and active workgroup.
Scheduling exclusive interactive job
$ qlogin -l exclusive=1 Your job 2493 ("QLOGIN") has been submitted waiting for interactive job to be scheduled ... Your interactive job 2493 has been successfully scheduled. Establishing /opt/shared/univa/local/qlogin_ssh session to host n036 ...
Starting a command mode matlab session
$ vpkg_require matlab/r2014b Adding package `matlab/r2014b` to your environment
$ matlab -nodesktop -nosplash MATLAB is selecting SOFTWARE OPENGL rendering. < $ exit Connection to n036 closed. /opt/shared/univa/local/qlogin_ssh exited with exit code 0
Interactive parallel toolbox example
When you plan to use the parallel toolbox, you should logon exclusively to a compute node with the command:
qlogin -l exclusive=1
This will effectively reserve the entire node for your MATLAB workers. The is default number of parallel workers is 12, but you can ask for more – up to the number of cores on the node when using the local scheduler.
Here we start 20 workers with the parpool function, and then use parfor to send a different seed to each worker. The output is from the workers, as they complete, but the order is not deterministic.
It took about 100 seconds for all 20 workers to produce on result. Since they are working in parallel the elapsed time to complete 200 results is about
>> parpool(20); Starting parallel pool (parpool) using the 'local' profile ... connected to 20 workers. >> tic; parfor sd = 1:200; maxEig(sd,5001); end; toc maxe = 70.2345 maxe = 69.9007 maxe = 71.2040
skipped lines
maxe = 70.1443 maxe = 71.2327 maxe = 66.3099 Elapsed time is 1087.729851 seconds. do not need to use the shell (
.sh file) that the compiler creates.
There are two ways to run compiled MATLAB jobs in a shared environment, such as Mills and Farber.
- Compile to produce and executable that uses a single computational thread - MATLAB option '-singleCompThread'
- Submit the job to use the nodes exclusively - Grid engine option
-l exclusive=1
You can run more jobs on each node when they compiled to use just one core (Single Comp Thread). This will give you higher throughput for an array job, but not higher performance.
Example compiler commands
The maxEig function has a conditional statement to make it work when deployed.
if (isdeployed) sd = str2num(sd) dim = str2num(dim) end
All augments. Type the commands
prog=maxEig opt='-nojvm,-nodisplay,-singleCompThread' version='r2015a' vpkg_require matlab/$version mcc -R "$opt" -mv $prog.m [ -d $WORKDIR/sw/bin ] && mv $prog $WORKDIR
[(it_css:traine)@farber matlabApr1]$ qlogin Your job 619145 ("QLOGIN") has been submitted waiting for interactive job to be scheduled ... Your interactive job 619145 has been successfully scheduled. Establishing /opt/shared/univa/local/qlogin_ssh session to host n039 ... [(it_css:traine)@n039 matlabApr1]$ . compile.sh Adding package `matlab/r2016a` to your environment Compiler version: 6.2 (R2016a) Dependency analysis by REQUIREMENTS. Parsing file "/home/work/it_css/traine/matlabApr1/maxEig.m" (Referenced from: "Compiler Command Line"). Deleting 0 temporary MEX authorization files. Generating file "/home/work/it_css/traine/matlabApr1/readme.txt". Generating file "run_maxEig.sh". [(it_css:traine)@n039 matlabApr1]$ ls compile.sh mccExcludedFiles.log run_maxEig.sh maxEig readme.txt script.m maxEig.m requiredMCRProducts.txt stackTrace.m [(it_css:traine)@n039 matlabApr1]$ exit exit Connection to n039 closed. /opt/shared/univa/local/qlogin_ssh exited with exit code 0
Example queue script file
The
mcc command will generate a
.sh file, which you can use to setup your environment and run the command. This does not use VALET and does not have any grid engine commands in it. We suggest you the gridengine template in the file
/opt/shared/templates/gridengine/matlab-mcr.qs
or modify this simple example:
#$ -N maxEig #$ -t 1-200 #$ -l m_mem_free=3.1G # # Parameter sweep array job to run the maxEig compiled MATLAB function with # lambda = 1,2. ... 200 # date "+Start %s" echo "Host $HOSTNAME" vpkg_require mcr/r2014b-nojvm export MCR_CACHE_ROOT="$TMPDIR" let seed=$SGE_TASK_ID let dim=5001 ./maxEig $seed $dim date "+Finish %s"
The two
date commands record the start and finish time in seconds for each task. These can be used to compute the elapsed time, and.
Compiled Matlab in owner queues
To test the example compiled Matlab job on the
it_css owner queues, we first compiled the code with mcc and
then submited with qsub. The job number assigned 3731. After a few minutes 200 files were created in the current directory.
maxEig.o3731.1 ... maxEig.o3731.200
They each had the output of one task. For example for taskid 125:
[CGROUPS] UD Grid Engine cgroup setup commencing [CGROUPS] Setting 5368709120 bytes (vmem 5368709120 bytes) on n036 (master) [CGROUPS] with 1 core = 5 [CGROUPS] done. Start 1414171807 Host n036 Adding package `mcr/r2014b-nojvm` to your environment sd = 125 dim = 5001 maxe = 70.4891 Finish 1414171902
Now we gather all the information from this files and write a data file with three columns:
sd dim maxe 1 5001 70.0220 2 5001 71.7546 3 5001 70.8331 4 5001 70.5714 .... 199 5001 70.7535 200 5001 67.4221
and prints the average
avgMaxEig = 69.5131125
These are the same results we got from both the matlab loop and the parallel toolbox, but they where computed in just over 3 1/2 minutes. To see this we gather the start/finish times in seconds and the host name.
SGE array job started Fri 24 Oct 2014 01:28:37 PM EDT
Used a total of 18977 CPU seconds over 219 seconds of elapsed time on 10 nodes
Using gnuplot we get a time chart of usage on the 10 nodes and total CPU usage.
Compiled Matlab in standby queue
Command to submit 200 jobs to the standby queue (must complete in 8 hours.)
qsub -l standby=1 abatch.qs | https://docs.hpc.udel.edu/software/matlab/farber | 2022-01-16T21:29:45 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.hpc.udel.edu |
Idaptive SSO
Idaptive SSO is a cloud service that allows you to track ingress authentication events and produce documents for those events in order to protect against privileged access abuse.
At this time, InsightIDR only tracks password authentications through your Idaptive data. After you complete the configuration, this event source refreshes every two hours.
Before You Begin
Use an Admin account to connect to InsightIDR with API permissions to query the
redrock/query and
/security endpoints. Read more about the Idaptive API here:
You must also gather the following information from your Idaptive application:
- TenantID
- User
Configure Idaptive SSO
Complete these tasks to configure Idaptive SSO for this event source.
Task 1: Create an authentication profile
Create an authentication profile that uses a password for the first challenge and no secondary challenge (InsightIDR only supports password authentication). The profile must also bypass multi-factor authentication.
Task 2: (Optional) Create a policy
Users who have multi-factor authentication (MFA) enabled may need to create a unique policy that allows the InsightIDR account to bypass MFA and other controls (InsightIDR does not support MFA). To create a policy:
- Log in to the admin portal using the same account as the event source.
- Click Core Services > Policies > Add Policy Set.
- Define the policy related information.
- Enter a name for the policy set.
- Enter the description you want to appear on the Admin Portal Policy page.
- Configure Set Policy to active option if necessary (this option is enabled by default).
- Specify policy assignment.
- Click the Save button.
Task 3: Verify that you can access the Redrock Query
To test access to the Redrock Query:
- Log in to the admin portal using the same account as the event source.
- Navigate to Core Services > Reports.
- Click on New Report.
- Click on Edit Script and paste:
1select ID,InternalSessionId,WhenOccurred,EventType,EventMessage,NormalizedUser,FromIPAddress,DirectoryServiceName from event where whenoccurred >= datefunc('now','-23:59') order by whenoccurred asc
- Click Preview.
If the preview returns records, there is access to the Redrock Query endpoint..
- Choose the timezone that matches the location of your event source logs.
- Optionally choose to send unfiltered logs.
- Create and name a new credential for the Admin account used for the Idaptive API.
- In the “Username” field, enter your Admin account username.
- In the “Password” field, enter the password for the admin account.
- In the “Tenant ID” field, enter the tenant ID for your Idaptive appliance. For example, if your Idaptive URL is
tenantID.my.idaptive.app, your tenant ID is
tenantID.
- “Idaptive SSO” if you did not name the event source.
Idaptive SSO logs flow into the log set:
- Ingress Authentication
- Perform a Log Search to make sure Idaptive SSO events are coming through.
The following is a sample of input logs that Idaptive SSO sends to InsightIDR.
json
1{2"FromIPAddress": "149.14.220.2",3"ID": "7729851cecdcfa97.W1a.f478.bdec1d8678e62ddd",4"EventType": "Cloud.Core.LoginFail",5"EventMessage": "Failed login attempt as bob from 149.14.220.2",6"NormalizedUser": "bob",7"InternalSessionId": "2669c4fd-34c2-4e01-9add-13a0a5062de1",8"WhenOccurred": "/Date(1547554501673)/",9"DirectoryServiceName": "UNKNOWN"10} | https://docs.rapid7.com/insightidr/idaptive-sso/ | 2022-01-16T21:45:39 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.rapid7.com |
AF-PACKET¶
AF-PACKET is built into the Linux kernel and includes fanout capabilities enabling it to act as a flow-based load balancer. This means, for example, if you configure Suricata for 4 AF-PACKET threads then each thread would receive about 25% of the total traffic that AF-PACKET is seeing.
Warning
If you try to test AF-PACKET fanout using tcpreplay locally, please note that load balancing will not work properly and all (or most) traffic will be handled by the first worker in the AF-PACKET cluster. If you need to test AF-PACKET load balancing properly, you can run tcpreplay on another machine connected to your AF-PACKET machine.
The following processes use AF-PACKET for packet acquisition:
More Information¶
See also
For more information about AF-PACKET, please see: | https://docs.securityonion.net/en/2.3/af-packet.html | 2022-01-16T21:52:02 | CC-MAIN-2022-05 | 1642320300244.42 | [] | docs.securityonion.net |
By the end of this guide, you will be familiar with:
Building the Texture Share SDK from source.
Integrating the C++ SDK with DirectX11 and DirectX12 applications.
Using the Unreal Engine plugin and Blueprints.
Sending the project's rendered viewport in Unreal to an external application.
Sending and receiving textures from an external application during your Unreal session.
Step 1 - Getting Started with the SDK
In order to use the Texture Share SDK, you must download and build the engine from source. See Downloading Unreal Engine Source Code for details on where to find the engine source code.
Building the engine in Visual Studio won't build the TextureShareSDK project by default.
Follow these steps to build the Texture Share SDK:
Open the
UE4.slnfile in Visual Studio.
In the Solution Explorer panel, navigate to Programs > TextureShare. Right-click the TextureShareSDK project and build it.
When the project finishes building, it generates
.liband
.dllfiles in the folder Engine\Binaries\Win64\TextureShareSDK.
C++ SDK Integration
The Texture Share API provides granular control over the texture sharing process.
To use Texture Share, you must establish a session between exchanging parties before you can perform any read or write operations. Each step of the process provides control over a variety of synchronization states that you can customize for your project.
C++ applications using the Texture Share API adhere to the following structure:
Client application creates texture share object by specifying its name.
Textures are registered with the texture share object, which can hold multiple textures, and the texture session is started.
On each frame, data can be written and sent to the other application or read. The data buffers must be locked before read or write operations and unlocked afterwards.
Step 3 is repeated for every frame until the session ends.
The following section shows how the Texture Share API is used in the DirectX12 sample project
TextureShareD3D12Client.vcxproj located in Engine/Source/Programs/TextureShare/Samples/ThirdParty/TextureShare_ClientD3D12.
This example uses DirectX12 but the process is the same for DirectX11. You can follow along with the DirectX11 sample project in the ThirdParty folder with some modifications.
Include Files
Include the header files
TextureShareD3D12Client.hand
TextureShareDLL.h.
Link the static library file using a pragma comment. Use
TextureShareSDK.libif you're building with the Release configuration in Visual Studio and
TextureShareSDK-Win64-Debug.libif you're building with the Debug configuration.
#include "TextureShareD3D12Client.h" #include "TextureShareDLL.h" #ifdef _DEBUG #pragma comment( lib, "TextureShareSDK-Win64-Debug.lib" ) #else #pragma comment( lib, "TextureShareSDK.lib" ) #endif
Initialize
The following steps describe what to intialize and how to start a Texture Share session in C++.
Load the render pipeline and assets with the DirectX APIs:
Create a DirectX device.
Create a GraphicsCommandList.
See
D3D12HelloTexture::LoadPipeline()and
D3D12HelloTexture::LoadAssets()for examples.
Create the Texture Share item:
Set the share name. Maximum length is 128 characters.
Set the application as the client with
ETextureShareProcess::Client.
Define the sync policies for the Texture Share session. See Sync Policies for more details. In this example, all the sync policies are set to None so the synchronization events will be non-blocking.
Specify which graphics API is being used. Currently only DirectX11 and DirectX12 are supported.
FTextureShareSyncPolicy DefaultSyncPolicy; DefaultSyncPolicy.ConnectionSync = ETextureShareSyncConnect::None; DefaultSyncPolicy.FrameSync = ETextureShareSyncFrame::None; DefaultSyncPolicy.TextureSync = ETextureShareSyncSurface::None; FTextureShareInterface::CreateTextureShare(ShareName.c_str(), ETextureShareProcess::Client, DefaultSyncPolicy, ETextureShareDevice::D3D12);
Register the texture with the Texture Share item:
Set the share name of the session.
Set the texture name.
Set the texture resolution.
Define the texture format and value.
Set whether the texture is readable or writable.
ETextureShareFormat ShareFormat = ETextureShareFormat::Undefined; uint32 ShareFormatValue = 0; // Use client texture format: if (InFormat != DXGI_FORMAT_UNKNOWN) { ShareFormat = ETextureShareFormat::Format_DXGI; ShareFormatValue = InFormat; } FTextureShareInterface::RegisterTexture(ShareName.c_str(), TextureName.c_str(), Width, Height, ShareFormat, ShareFormatValue, TextureOp);
Define the start of the scope of the Texture Share session:
FTextureShareInterface::BeginSession(ShareName.c_str());
Render Thread
The following steps describe how to access the shared memory in the render thread. The steps show how to use both the read and write operations.
Define the start of the scope for the frame in the Texture Share session:
FTextureShareInterface::BeginFrame_RenderThread(ShareName.c_str());
Read the texture in the render thread:
Put a lock on the texture.
Access the texture)) { if (!FTextureShareD3D12Helper::IsTexturesEqual(SharedResource, *InOutSRVTexture)) { // Shared texture size changed on server side. Remove temp texture, and re-create new tempTexture ReleaseTextureAndSRV(InOutSRVTexture); } if (!*InOutSRVTexture) { // Create Temp texture&srv FTextureShareD3D12Helper::CreateSRVTexture(pD3D12Device, pD3D12HeapSRV, SharedResource, InOutSRVTexture, SRVIndex); } // Copy from shared to temp: if (*InOutSRVTexture) { FTextureShareD3D12Helper::CopyResource(pCmdList, SharedResource, *InOutSRVTexture); } // Unlock shared resource FTextureShareInterface::UnlockTexture_RenderThread(ShareName.c_str(), TextureName.c_str()); } else { // Release unused texture (disconnect purpose) ReleaseTextureAndSRV(InOutSRVTexture); } }
Write to the texture in the render thread:
Check if the session is valid.
Put a lock on the texture.
Access the)) { FTextureShareD3D12Helper::CopyResource(pCmdList, InTexture, SharedResource); FTextureShareInterface::UnlockTexture_RenderThread(ShareName.c_str(), TextureName.c_str()); } }
Get frame data to access the information in the auxiliary buffers, such as the projection and camera matrices:
FTextureShareSDKAdditionalData* OutFrameData; FTextureShareInterface::GetRemoteAdditionalData(ShareName.c_str(), *OutFrameData);
Define the end of the scope for the frame in the Texture Share session:
FTextureShareInterface::EndFrame_RenderThread(ShareName.c_str());
Present the frame to display.
Clean Up
The following steps describe how to end the Texture Share session when the application exits.
Define the end of the scope for the Texture Share session:
FTextureShareInterface::EndSession(ShareName.c_str());
Delete the Texture Share item and release the memory:
FTextureShareInterface::ReleaseTextureShare(ShareName.c_str());
Step 2 - Getting Started in Unreal
Follow these steps to use the Texture Share plugin and to access the Texture Share Blueprints in Unreal Engine.
In the Editor's main menu, choose Edit > Plugins to open the Plugins Editor.
In the Plugins Editor, find the Texture Share plugin in the Misc section.
Check the Enabled checkbox and restart the Editor.
In the Content Browser, expand the View Options dropdown at the bottom right of the panel. Check Show Engine Content and Show Plugin Content.
Click the folder icon at the top of the Content Browser to choose a content path. Find TextureShare Content in the list and select it.
In the Blueprints folder, there are two Blueprint objects you can add directly to your scene:
BP_TextureShare_Scene: This Blueprint shares the rendered frame of the whole Unreal scene.
BP_TextureShare_Postprocess: This Blueprint sends and receives specific Texture objects.
In the Materials folder are textures and materials you can use with the BP_TextureShare_Postprocess Blueprint:
RTT_TextureShare_Backbuffer: A Texture Render Target 2D asset.
M_TextureShare_RTTBackbuffer: A material that samples the RTT_TextureShare_Backbuffer texture and uses it as an Emissive Color.
The remaining steps in this quick start describe how to use each Blueprint and connect them to other DirectX applications.
Step 3 - Send Unreal Scene as a Texture to a DirectX Application
Follow these steps to stream an Unreal scene to an external DirectX application.
Navigate to the folder Engine/Source/Programs/TextureShare/Samples/ThirdParty/TextureShare_ClientD3D11 and open the sample project `TextureShareD3D11Client.vcxproj`in Visual Studio.
Set the Solution Configuration to Release in Visual Studio.
Build the project in Visual Studio.
Navigate to the folder Engine\Source\Programs\TextureShare\Samples\ThirdParty\TextureShare_ClientD3D11\Binaries\TextureShareD3D11Client-Win64-Release and launch the
TextureShareD3D11Client.exeapplication.
Open your Unreal project in Unreal Engine and add the Blueprint BP_TextureShare_Scene object to your scene.
Select the BP_TextureShare_Scene object to open its Details panel.
Ensure the Share Name parameter is set to the same name as the ShareName variable in the sample project: vp_1.
Press Play in Unreal. The rendered frames from your Unreal scene are streamed as a texture to the client application and applied to the rotating cube.
Step 4 - Send Textures to External DirectX Application
The previous step described how to share the Unreal scene as a texture in a separate process. Any texture object in the project can also be sent to an external application.
Follow these steps to share any texture object in your Unreal project built by the project:
TextureShareD3D12Client.exe.
Open your Unreal project in Unreal Engine and add the Blueprint BP_TextureShare_Postprocess object to your scene.
Select the BP_TextureShare_Postprocess object you added to the scene to open its Details panel.
Expand the Postprocess section.
Expand the Send section under Postprocess. There are two Send array elements. The Ids must correspond to the texture names defined in the file
D3D12HelloTexture.cppin the sample project:
// Define share and texture names std::wstring ShareName1 = L"vp_1"; std::wstring ReceiveTextureNames[] = { L"SceneDepth" , L"BackBuffer" };
Set the first Send element's Id to SceneDepth.
Set the second Send element's Id to BackBuffer.
Press Play in the Unreal Editor.
Update the Texture parameters for both Send elements with texture objects. The engine streams the textures to the client application and applies them to the rendered triangles.
Step 5 - Receive Textures from a DirectX Application and Display Them in Unreal
The previous step described how to send textures to an external DirectX application. This section describes how to receive textures from another application.
Follow these steps to receive textures the project built:
TextureShareD3D12Client.exe.
Open your Unreal project in Unreal Engine and add the Blueprint BP_TextureShare_Postprocess object to your scene.
Select the BP_TextureShare_Postprocess object you added to the scene to open its Details panel.
In the Details panel under Default, expand the Postprocess section.
Expand the Receive section under Postprocess. There is one Receive array element. The Id for this element must correspond to the texture name defined in the file
D3D12HelloTexture.cppin the sample project:
// Define share and texture names std::wstring ShareName1 = L"vp_1"; std::wstring SendBackbufferTextureName = L"InBackbuffer";
Set the Receive element's RTT parameter to the texture RTT_TextureShare_Backbuffer provided in the TextureShare plugin.
Now your Unreal project is set up to receive textures from another application using TextureShare.
In the example below, the Unreal Engine session is started. Unreal is sending a texture to the TextureShareD3D12Client application and receiving the backbuffer from that application. The pictures on the wall in the scene are using a material that's sampling the RTT_TextureShare_Backbuffer texture to display what's received in real-time.
| https://docs.unrealengine.com/4.27/ko/WorkingWithMedia/IntegratingMedia/TextureShare/TextureShareQuickStart/ | 2022-01-16T22:37:06 | CC-MAIN-2022-05 | 1642320300244.42 | [array(['./../../../../../Images/WorkingWithMedia/IntegratingMedia/TextureShare/TextureShareQuickStart/unreal_sharing_viewport.jpg',
'image alt text'], dtype=object)
array(['./../../../../../Images/WorkingWithMedia/IntegratingMedia/TextureShare/TextureShareQuickStart/sending_textures_from_unreal.jpg',
'image alt text'], dtype=object)
array(['./../../../../../Images/WorkingWithMedia/IntegratingMedia/TextureShare/TextureShareQuickStart/receive_textures_into_unreal.jpg',
'image alt text'], dtype=object) ] | docs.unrealengine.com |
6 Data wrangling, 101
Graphite does some of the work for us with regards to data processing. Note the 'count and 'prop modes in the earlier bar charts: we didn’t have to handle that. But Graphite is not a data wrangling library, and oftentimes it makes more sense to process our data first.
For example, in the last example, we got GDP per capita summary statistics for each continent. But what if we want the average, global GDP per capita over time? That’s too complex of a transformation for Graphite to do for us.
For this purpose, we need the Sawzall library, whose documentation is available at Sawzall: A grammar for chopping up data. This library is designed to take in data-frames, and produce new ones with some transformation applied – with operations chaining together using the threading library.
Take all the data in each year, ignoring country,
average the GDP per capita within each year,
then collect the results of that sum into a new data-frame.
The ~> operator is effectively "spicy function composition"; (~> h g f x) translates at compile-time to (f (g (h x))). We use it here to express the idea of "do-this-then-that".
(group-with "year") takes gapminder, and groups it with respect to the variable "year". This tells sequential operations that we want to treat each different possibility of year seperately.
(aggregate [avgGdpPercap (gdpPercap) (avg gdpPercap)]) aggregates each group into a single value. avgGdpPercap tells us what the new column name should be, (gdpPercap) tells us that we want to bind the variable gdpPercap as a vector in the body, and (avg gdpPercap) computes the average value of each vector.
This is a lot to break down, but more or less it takes each year, gets all the GDP per capita values that correspond to it, and averages them.
This also strips down the group structure, since we now only have one row for each year.
show prints out the result, and returns nothing, being the last thing in the pipeline.
This works, but isn’t a very useful example, and doesn’t teach us anything about how to work with NA values, et cetera. So, for a more complex example, we’ll take a look at the GSS again. We saw already that bar can plot counts and relative frequencies. However, oftentimes it makes more sense to get the data in the shape you want it first, and then have Graphite focus its effort on plotting the data, rather than messing with it on-the-fly.
Take our individual-level data,
group it with respect to region, and then religion within region,
summarize each religion into a count of respondents,
then calculate the percentage of each religion within region.
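The explanation below quotes the two create clauses verbatim, but the grouping and counting steps are not shown, so this sketch fills them in with assumed column names ("region", "relig") and an assumed counting expression:

```racket
; Sketch only -- the column names and the counting step are assumptions:
(~> gss
    (group-with "region" "relig")                      ; religion nested within region
    (aggregate [count (relig) (vector-length relig)])  ; respondents per group (assumed)
    (create [frequency ([count : vector]) (v/ count (sum count))]
            [percentage (frequency) (round (* frequency 100))])
    show)
```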
v/ is a helper function to divide every element of a vector by a scalar.
The first clause of this create, [frequency ([count : vector]) (v/ count (sum count))], binds the variable "count" as a vector (hence the annotation), divides each element by the sum of the entire vector, and returns the vector.
When every bound variable is of type vector, the body of that clause should return a vector.
The second clause of this create, [percentage (frequency) (round (* frequency 100))], binds the variable "frequency" as type element, which means that the body will map over each element of the vector implicitly. So, frequency is bound to a number, and we treat it as such, iterating over every element of the column.
We couldn’t do this for the above, because we needed the entire vector at once in order to get the sum.
Looks good! Some error was added by rounding, hence the 101s.
Voilà.
from the console..
An action represents a single task in the workflow. These actions fall into the following categories:
When you are developing a robotic process, typically, it's in the first action where variables are initialized and everything needed for the robotic process to accomplish its task is prepared. Then the process will go through each action in the workflow, until it reaches the last one, and the execution ends. The difference between an initial action and a final action lies in their transitions. An initial action has no input transition and one output transition. A final action has at least one input transition and no output transition.
The color of the action in the workflow depends on the method you associate with the action:
In the workflow, actions connect through arrows, representing the workflow's transitions.
Within your workflow, you must associate actions with the robotic process's code so that each action corresponds to a method in the class that implements it.
In the Appian RPA console, actions can be associated with methods from low-code modules, workflow libraries, or custom code you create.
In the console, low-code modules contain methods that you can easily configure without needing to go into your source code. These low-code modules provide a user interface where you can add values to parameters and store returning values in robotic process variables.
You can associate methods from low-code modules with generic and condition actions in a workflow.
Sections allow you to break down complex actions into a set of actions in their own workflow. You can then use sections as actions in the main workflow. Sections operate in a way that's similar to sub-processes in an Appian process model, except they can only be used within the robotic process where they're created. Sections help keep the main workflow organized and make it easier to understand what's happening at a high level.
Sections can be helpful when the robotic process is set up to repeat the same actions multiple times. For example, rather than building a loop with four actions in the workflow, you can instead create a section for those four actions. Then, you can add the section as one action in the main workflow.
By default, every workflow has a Setup, Main Section, and a Clean up section. You can define multiple additional sections inside the same workflow. There's no limit to the number of additional sections that can be incorporated into the workflow. You can create a section manually using the instruction below, or create a section automatically by importing a Selenium file. If you use Java methods in addition to the clean up section in the console, the Java methods execute after the low-code section.
If the same robotic process executes multiple times consecutively, you have the option to skip the setup and clean up sections for faster executions.
When saving a workflow, Appian RPA performs some checks. If the workflow fails these checks, Appian RPA saves the workflow, but displays a warning message.
Appian RPA checks that:
Required role: Developer or Administrator
Looking to speed up workflow design? Create workflow sections using Selenium IDE scripts.
You can undo the changes at any time by clicking Undo in the workflow toolbar.
To configure a robotic process workflow:
While configuring a workflow, you can use the following icons in the workflow toolbar:
To associate a workflow library with the workflow:
When you add an action to a workflow, no method is associated with the action until you configure the action.
To add an action to a workflow:
In the workflow toolbar, click the icon for the type of action you want to add:
You can associate the action with any of the following:
To associate an action with one or more methods:
Click the list icon. The action configuration window displays.
In the Actions tree, methods display in the following groups:
If you selected a conditional action, only low-code modules that return boolean or string values appear in the Modules group.
Want to call specific methods of the API to execute arbitrary code? Use the Execute code or Execute code with result methods in the Robot low-code module. Only Groovy scripts are supported.
To associate a single method with the action, browse to and select a method in the Actions tree. The right-hand pane displays the method and any available parameters that can be added or values that can be stored.
Methods are executed in the order they appear in the Multiple Actions section. Drag and drop methods within the section to change the order.
To quickly duplicate a method in the Multiple Actions section, hover over the method, then click the Add action icon that displays. To remove a method from the Multiple Actions section, hover over the method, then click the icon that displays.
Use pv! and concatenate variables as needed. You can store a returned value in a single-value variable (the value is stored as) or add the value to a multiple-value variable (the value is appended to).
To associate an action with a custom section:
In the workflow, actions connect through arrows, representing the workflow's transitions.
To add a transition arrow between actions:
In the workflow, click the action where you want the arrow to originate.
To remove a transition arrow:
To create a custom section in a workflow:
To edit a custom section in a workflow:
To delete a custom section in a workflow:
Robotic processes are designed to interact with interfaces in the same way as human beings. To emulate a human being's actions on the screen, a developer needs to consider every step in the process: every click, every pause, and every text input. It can be difficult to trace every single step you take when interacting with a website or program. Selenium IDE is a browser automation tool you can use to build a script that captures these actions. You can import the Selenium script to Appian RPA to automatically create workflow actions to match the ones you recorded.
Appian doesn't support the Selenium tool itself, only the ability to import Selenium files to auto-generate a section. This documentation describes the steps and best practices to create a file to import in Appian RPA. For help and support, consult the Selenium documentation.
You can add Selenium IDE as an extension to your preferred browser for quick access. To get started, download Selenium IDE.
Before you record a workflow, keep these tips in mind:
Appian RPA only accepts Selenium files (.side) with up to 10 tests and 50 commands per test. Plan your recordings with these limitations in mind.
Use the following table to see how Selenium actions will be configured as Appian RPA actions. If a Selenium action isn't listed, it's ignored during import. Learn more about Selenium Commands and Browser module methods.
Save your recording as a .side file.
Selenium lets you modify recorded commands and insert additional commands for actions that aren't easily captured during your recording. If you insert or modify commands, refer to the Supported Actions table to make sure the actions will be imported properly.
Before you import the workflow to Appian RPA, it's a good idea to use Selenium's playback tool to confirm the script acts as you expect. Make changes to the Selenium script before importing to Appian RPA to save time debugging.
Import the Selenium file to automatically create actions in a robotic process workflow in Appian RPA.
To import a Selenium file as a section in a workflow:
Select the .side file, or drag and drop it in the Import section dialog. The import tool only accepts .side files with a maximum of 10 tests and 50 commands per test. Appian RPA alerts you if the file exceeds these limits, is the incorrect type, or contains invalid content.
In the main workflow, you can associate the section with an action.
To adjust the order or visual presentation of a workflow, you can use the following icons in the workflow toolbar:
You can resize an individual action by selecting the action, then clicking and dragging on the double-arrow icon.
For example, a modified workflow could look like the following:
When you import actions into a workflow, their original position in the editor is kept. Therefore, some actions could overlap with others, even hiding those previously in the editor. In these cases, you should select them and move them apart to check if the import was successful.
To move a single action within a workflow section, drag and drop the action to a new position. After moving an action, you might need to adjust the transition arrows for that action.
You can also select multiple actions, then drag and drop the actions as a group.
You can select two or more actions at the same time, which can be useful when moving, deleting, or exporting actions in a workflow.
To select more than one action simultaneously, hold the Ctrl key (Windows) or Command key (Mac) and click each action to select. To select all actions, click Select all in the workflow toolbar.
When you export multiple actions, Appian RPA serializes the selected actions, producing a string that can be stored in a file or shared by any means that allows plain-text communication. You can use this string to import multiple actions at a later time; when importing, a text field asks you to enter the values serialized by the export option.
You can change the background color and text color for an informative note action.
To configure an informative note:
You can also import a workflow from a BPMN file. When processing the BPMN file, the console reads the BPMN tags defined in the file and maps them to the appropriate workflow components:
To import a workflow from a BPMN file:
A recurring, variable billing solution that enables you to bill according to your clients’ usage - accurate, easy, on time, every time. Varibill allows you to automatically collect data from your heterogeneous systems to bill according to the quantities of products and services consumed by your clients, thereby eliminating the time-consuming, error-prone, and manual process of invoicing.
Varibill supports your unique business model providing elastic service consumption thus allowing your business to scale, and ultimately leads to better cash flow and increased profits.
Varibill is offered as a SaaS product which is easy to use, low maintenance and supports virtually every variable billing scenario. Now you can start focusing on your business and how you can grow it.
Variable recurring billing. Accurate, easy and on time.
t_systems_mms.icinga_director.icinga_director_inventory – Returns Ansible inventory from Icinga Director.
nil it. You only need a single instance of the SDK, regardless of how many web views you have, but you can create several KlarnaHybridSDK instances.
The SDK will notify you of events via an event listener that you’ll need to implement. It acts like a conventional delegate on iOS; however, unlike a delegate, it’s not optional.
You can read more about the event listener at the end of this page.
You need to add the web views that the SDK should track. The SDK will hold weak references to these web views, so if they’re deallocated, the SDK will lose track of them.
The SDK can handle both UIWebViews and WKWebViews, which it references under a single KlarnaWebView protocol.
There are two instances at which you’ll need to notify the SDK of events in your web view (as we don’t override your UIWebViewDelegate or WKNavigationDelegate).
You should notify the SDK about upcoming navigations by calling the SDK’s shouldFollowNavigation() before a navigation occurs.
If you have a UIWebView, you should do this in your UIWebViewDelegate:
If you have a WKWebView, you should do this in your WKNavigationDelegate:
You need to notify the SDK after a page has loaded by calling the SDK’s newPageLoad() from your web view’s delegate.
If you have a UIWebView, you should do this in your UIWebViewDelegate:
If you have a WKWebView, you should do this in your WKNavigationDelegate:
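Combining both notifications, a WKWebView integration sketch might look like the following. The method names match this guide's prose, but the exact Swift signatures and the module name are assumptions — check the SDK reference for your version:

```swift
import UIKit
import WebKit
import KlarnaMobileSDK  // module name is an assumption

class CheckoutViewController: UIViewController, WKNavigationDelegate {
    var klarnaHybridSDK: KlarnaHybridSDK?  // created once, shared across web views

    // 1. Notify the SDK about upcoming navigations, before they occur.
    func webView(_ webView: WKWebView,
                 decidePolicyFor navigationAction: WKNavigationAction,
                 decisionHandler: @escaping (WKNavigationActionPolicy) -> Void) {
        // Assumed signature: the SDK decides whether the navigation may proceed.
        if klarnaHybridSDK?.shouldFollowNavigation(withRequest: navigationAction.request) ?? true {
            decisionHandler(.allow)
        } else {
            decisionHandler(.cancel)
        }
    }

    // 2. Notify the SDK after a page has loaded.
    func webView(_ webView: WKWebView, didFinish navigation: WKNavigation!) {
        klarnaHybridSDK?.newPageLoad(in: webView)  // assumed signature
    }
}
```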
Your app will need to implement the Klarna Hybrid SDK's event listener protocol in some part of the application.
The completion handler completion() should be called when you have performed any animations or changes to let the SDK know that your app is done with this step.