Everett¶
Everett is a Python configuration library for your app.
Goals¶
This library tries to do configuration with minimal “fanciness”.
Configuration with Everett:
- is composable and flexible
- makes it easier to provide helpful error messages for users trying to configure your software
- supports auto-documentation of configuration with a Sphinx
autocomponent directive
Why not other libs?¶
Quick start¶
Fast start example¶
You have an app and want it to look for configuration in an
.env file then
the environment. You can do this:
from everett.manager import ConfigManager

config = ConfigManager.basic_config()
Then you can use it like this:
debug_mode = config('debug', parser=bool)
When you outgrow that or need different variations of it, you can change
that to creating a
ConfigManager from scratch.
More control example¶
We have an app and want to pull configuration from an INI file stored in
a place specified by
MYAPP_INI in the environment,
~/.myapp.ini,
or
/etc/myapp.ini in that order.
We want to pull infrastructure values from the environment.
Values from the environment should override values from the INI file.
First, we set up our
ConfigManager:
import os
import sys

from everett.manager import ConfigManager, ConfigOSEnv, ConfigIniEnv


def get_config():
    return ConfigManager(
        # Check the process environment first, then the INI files in order
        environments=[
            ConfigOSEnv(),
            ConfigIniEnv([
                os.environ.get('MYAPP_INI'),
                '~/.myapp.ini',
                '/etc/myapp.ini',
            ]),
        ],
        # Shown to users alongside configuration error messages
        doc='Check https://example.com/configuration for docs.'
    )
Then we use it:
def is_debug(config):
    return config('debug', parser=bool, doc='Switch debug mode on and off.')


def main(args):
    config = get_config()
    if is_debug(config):
        print('DEBUG MODE ON!')


if __name__ == '__main__':
    sys.exit(main(sys.argv[1:]))
Let’s write some tests that verify behavior based on the
debug
configuration value:
from myapp import get_config, is_debug
from everett.manager import config_override


@config_override(DEBUG='true')
def test_debug_true():
    assert is_debug(get_config()) is True


@config_override(DEBUG='false')
def test_debug_false():
    assert is_debug(get_config()) is False
If the user sets
DEBUG wrong, they get a helpful error message with
the documentation for the configuration option and the
ConfigManager:
$ DEBUG=foo python myprogram.py
<traceback>
namespace=None key=debug requires a value parseable by bool
Switch debug mode on and off.
Check https://example.com/configuration for docs.
What can you use Everett with¶
Everett works with frameworks that have configuration infrastructure like Django and Flask.
Everett works with non-web things like scripts and servers and other things.
Everett components¶
Everett supports components. You can auto-generate configuration documentation for a component in your
Sphinx docs by including the
everett.sphinxext Sphinx extension and
using the
autocomponent directive:
.. autocomponent:: path.to.YourComponent
Contents¶
- History
- Configuration
- Components
- Using the Sphinx extension
- Recipes
- Library
- Hacking
Exposure & Range¶
Reference
Exposure and Range are similar to the "Color Curves" tool in Gimp or Photoshop.
These controls affect the rendered image, and the results are baked into the render. For information on achieving similar effects with render controls, see Color Management and Exposure.
Previously, Blender clipped color values directly at 1.0 (or 255) when they exceeded the possible RGB space. This caused ugly banding and overblown highlights when light overflowed (Fig. Utah Teapot).
Using an exponential correction formula, this now can be nicely corrected.
Options¶
Exposure and Range sliders.
- Exposure
- The exponential curvature, from 0.0 (linear) to 1.0 (curved).
- Range
- The range of input colors that are mapped to visible colors (0.0 to 1.0).
So without Exposure we will get a linear correction of all color values:
- Range > 1.0
- The picture will become darker; with Range = 2.0, a color value of 1.0 (the brightest by default) will be clipped to 0.5 (half bright) (Range: 2.0).
- Range < 1.0
- The picture will become brighter; with Range = 0.5, a color value of 0.5 (half bright by default) will be clipped to 1.0 (the brightest) (Range: 0.5).
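As a rough illustration of the linear case just described, here is a small sketch derived from the examples above. It is not Blender's actual implementation; the Exposure control bends this straight mapping into an exponential curve so that dark pixels are barely changed.

def range_only(color, range_value):
    # Linear Range mapping implied by the examples: 1.0 -> 0.5 at Range 2.0,
    # and 0.5 -> 1.0 (clipped) at Range 0.5
    return min(1.0, color / range_value)

print(range_only(1.0, 2.0))   # 0.5, the picture becomes darker
print(range_only(0.5, 0.5))   # 1.0, the picture becomes brighter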
Examples¶
With a linear correction every color value will get changed, which is probably not what we want. Exposure brightens the darker pixels, so that the darker parts of the image will not be changed at all (Range : 2.0, Exposure : 0.3).
Hint
Try to find the best Range value, so that overexposed parts are barely not too bright. Now turn up the Exposure value until you are satisfied with the overall brightness of the image. This is especially useful with area lamps.
Credit Memos
A credit memo is a document issued by the merchant to a customer to write off an outstanding balance because of an overcharge, rebate, or return of goods. It shows the amount that is due to the customer for a full or partial refund.
The methods that are available to issue refunds depend on the payment method that was used for the order.
Procedure
- Browse to the datastore cluster in the vSphere Web Client navigator.
- Click the Manage tab and click Settings.
- Under Services, select DRS and click Edit.
- Expand Advanced Options and click Add.
- In the Option column, type IgnoreAffinityRulesForMaintenance.
- In the Value column, type 1 to enable the option.
Type 0 to disable the option.
- Click OK.
Results
The Ignore Affinity Rules for Maintenance Mode option is applied to the datastore cluster.
For a product attribute (a characteristic or property of a product; anything that describes a product, such as color, size, weight, and price) to be used in a targeted rule, the Use for Promo Rule Conditions property must be set to “Yes.”
To create a related product rule:
- Related Products
- Up-sells
- Cross-sells
DPM protection support matrix
Published: May 8, 2017
Updated: August 4, 2017
Applies To: System Center 2012 SP1 - Data Protection Manager, System Center 2012 - Data Protection Manager, System Center 2012 R2 Data Protection Manager
Notes
Clusters—DPM can protect data in the following clustered applications:
File servers
SQL Server
Hyper-V—Note that when protecting a Hyper-V cluster using scaled-out DPM protection, you can’t add secondary protection for the protected Hyper-V workloads.
Exchange Server
DPM can protect non-shared disk clusters for supported Exchange Server versions (cluster continuous replication), and can also protect Exchange Server configured for local continuous replication.
DPM can protect cluster workloads that are located in the same domain as the DPM server, and in a child or trusted domain. If you want to protect data source in untrusted domains or workgroups you’ll need to use NTLM or certificate authentication for a single server, or certificate authentication only for a cluster.
Protection of SQL AlwaysOn with Clustering. Protection and recovery of availability groups that are built with or without clustered instances is now supported in DPM 2012 R2 with UR2 and works seamlessly exactly like it does for nonclustered availability groups.
DPM 2012 R2 UR2 supports clustered primary and secondary servers, clustered primary and stand-alone secondary servers, and stand-alone primary and clustered secondary servers’ scenarios. The stand-alone primary and secondary servers’ scenario is already supported.
SQL Server
- DPM doesn’t support the protection of SQL Server databases hosted on cluster-shared volumes (CSVs).
- For Exchange Server protection, DPM uses eseutil.exe; one way to make it available in the DPM bin folder is to create a hard link to the Exchange copy of the tool:
fsutil hardlink create <link> <target>
For example:
fsutil hardlink create "c:\program files\microsoft\dpm\bin\eseutil.exe" "c:\program files\microsoft\Exchange\bin\eseutil.exe"
Before you can protect Exchange Server 2007 data in a Clustered Continuous Replication (CCR) configuration, you must apply KB 940006.
File Server
Hyper-V
SharePoint
2. Getting Started¶
In this guide, we will create a very basic Iroha network, launch it, create a couple of transactions, and check the data written in the ledger. To keep things simple, we will use Docker.
Note
Ledger is the synonym for a blockchain, and Hyperledger Iroha is known also as Distributed Ledger Technology — which in essence is the same as “blockchain framework”. You can check the rest of terminology used in the Glossary section.
2.1. Prerequisites¶
For this guide, you need a computer running a Unix-like system with
docker
installed. You can read how to install it on the
Docker website.
Note
Please note that you can use Iroha without
docker as well. You
can read about it in other parts of documentation.
2.2. Starting Iroha Node¶
2.2.1. Creating a Docker Network¶
To operate, Iroha requires a
PostgreSQL database. Let’s start with creating
a Docker network, so containers for Postgres and Iroha can run on the same
virtual network and successfully communicate. In this guide we will call it
iroha-network, but you can use any name. In your terminal write following
command:
docker network create iroha-network
2.2.2. Starting PostgreSQL Container¶
Now we need to run
PostgreSQL in a container, attach it to the network you
have created before, and expose ports for communication:
docker run --name some-postgres \ -e POSTGRES_USER=postgres \ -e POSTGRES_PASSWORD=mysecretpassword \ -p 5432:5432 \ --network=iroha-network \ -d postgres:9.5
Note
If you already have Postgres running on a host system on default port
(5432), then you should pick another free port that will be occupied. For
example, 5433:
-p 5433:5432 \
2.2.3. Creating Blockstore¶
Before we run Iroha container, we should create persistent volume to store files, storing blocks for the chain. It is done via the following command:
docker volume create blockstore
2.2.4. Configuring Iroha Network¶
Note
To keep things simple, in this guide we will create a network containing only one node. To understand how to run several peers, follow this guide.
Now we need to configure our Iroha network. This includes creating a configuration file, generating keypairs for users, writing a list of peers and creating a genesis block. However, we have prepared an example configuration for this guide, so you can start playing with Iroha faster. In order to get those files, you need to clone the Iroha repository from Github.
git clone -b master --depth=1 https://github.com/hyperledger/iroha.git
Hint
--depth=1 option allows us to download only latest commit and
save some time and bandwidth. If you want to get a full commit history, you
can omit this option.
2.2.5. Starting Iroha Container¶
We are ready to launch our Iroha container. Let’s do it with the following command
docker run -it --name iroha \ -p 50051:50051 \ -v $(pwd)/iroha/example:/opt/iroha_data \ -v blockstore:/tmp/block_store \ --network=iroha-network \ --entrypoint=/bin/bash \ hyperledger/iroha:latest
Let’s look in detail what this command does:
docker run -it --name iroha \attaches you to docker container called
iroha
- with
$(pwd)/iroha/example:/opt/iroha_data \we add a folder containing our prepared configuration to a docker container into
/opt/iroha_data.
-v blockstore:/tmp/block_store \adds a persistent block storage which we created before to a container, so our blocks won’t be lost after we stop the container
--network=iroha-network \adds our container to previously created
iroha-network, so Iroha and Postgres could see each other.
--entrypoint=/bin/bash \Because
hyperledger/irohahas the custom script which runs after starting the container, we want to override it so we can start Iroha Daemon manually.
hyperledger/iroha:latestis the image which has the
masterbranch.
2.2.6. Launching Iroha Daemon¶
Now you are in the interactive shell of Iroha’s container. To actually run
Iroha, we need to launch Iroha daemon –
irohad.
irohad --config config.docker --genesis_block genesis.block --keypair_name node0
Attention
In the usual situation, you need to provide a config file and generate a genesis block and keypair. However, as a part of this guide, we provide an example configuration for you. Please do not use these settings in production. You can read more about configuration here.
Congratulations! You have an Iroha node up and running! In the next section, we will test it by sending some transactions.
Hint
You can get more information about
irohad and its launch options
in this section
2.3. Interacting with Iroha Network¶
You can interact with Iroha using various ways. You can use our client libraries
to write code in various programming languages (e.g. Java, Python, Javascript,
Swift) which communicates with Iroha. Alternatively, you can use
iroha-cli –
our command-line tool for interacting with Iroha. As a part of this guide,
let’s get familiar with
iroha-cli
Attention
Despite that
iroha-cli is arguably the simplest way to start
working with Iroha,
iroha-cli was engineered very fast and lacks tests,
so user experience might not be the best. For example, the order of menu items
can differ from that you see in this guide. In the future, we will deliver a
better version and appreciate contributions.
Open a new terminal (note that Iroha container and
irohad should be up and
running) and attach to an
iroha docker container:
docker exec -it iroha /bin/bash
Now you are in the interactive shell of Iroha’s container again. We need to
launch
iroha-cli and pass an account name of the desired user. In our example,
the account
admin is already created in a
test domain. Let’s use this
account to work with Iroha.
iroha-cli -account_name admin@test
Note
Full account name has a
@ symbol between name and domain. Note
that the keypair has the same name.
2.3.1. Creating the First Transaction¶
You can see the interface of
iroha-cli now. Let’s create a new asset, add
some asset to the admin account and transfer it to another account. To achieve
this, please choose option
1. New transaction (tx) by writing
tx or
1 to a console.
Now you can see a list of available commands. Let’s try creating a new asset.
Select 14. Create Asset (crt_ast). Now enter a name for your asset, for
example
coolcoin. Next, enter a Domain ID. In our example we already have a
domain
test, so let’s use it. Then we need to enter an asset precision
– the amount of numbers in a fractional part. Let’s set precision to
2.
Congratulations, you have created your first command and added it to a
transaction! You can either send it to Iroha or add some more commands
by selecting 1. Add one more command to the transaction (add). Let's add more commands,
so we can do everything in one shot. Type
add.
Now try adding some
coolcoins to our account. Select
16. Add Asset
Quantity (add_ast_qty), enter Account ID –
admin@test, asset ID –
coolcoin#test, integer part and precision. For example, to add 200.50
coolcoins, we need to enter integer part as
20050 and precision as
2, so it becomes
200.50.
Note
Full asset name has a
# symbol between name and domain.
Let’s transfer 100.50
coolcoins from
admin@test to
test@test
by adding one more command and choosing
5. Transfer Assets (tran_ast).
Enter Source Account and Destination Account, in our case
admin@test and
test@test, Asset ID (
coolcoin#test), integer part and precision
(
10050 and
2 accordingly).
Now we need to send our transaction to Iroha peer (
2. Send to Iroha peer
(send)). Enter peer address (in our case
localhost) and port (
50051).
Congratulations, your transaction is submitted and you can see your transaction
hash. You can use it to check transaction’s status.
Go back to a terminal where
irohad is running. You can see logs of your
transaction.
Congratulations! You have submitted your first transaction to Iroha.
2.3.2. Creating the First Query¶
Now let’s check if
coolcoins were successfully transferred from
admin@test to
test@test. Choose
2. New query
(qry).
7. Get Account's Assets (get_acc_ast) can help you to check if
test@test now has
coolcoin. Form a query in a similar way you did with
commands, and send it with
1. Send to Iroha peer (send). Now you
can see information about how many
coolcoins
test@test has.
It will look similar to this:
[2018-03-21 12:33:23.179275525][th:36][info] QueryResponseHandler [Account Assets]
[2018-03-21 12:33:23.179329199][th:36][info] QueryResponseHandler -Account Id:- test@test
[2018-03-21 12:33:23.179338394][th:36][info] QueryResponseHandler -Asset Id- coolcoin#test
[2018-03-21 12:33:23.179387969][th:36][info] QueryResponseHandler -Balance- 100.50
Congratulations! You have submitted your first query to Iroha and got a response!
Hint
To get information about all available commands and queries please check our API section.
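The guide above used iroha-cli. If you prefer the client libraries mentioned earlier, the same transaction and query can be built in code. The sketch below is illustrative only: it assumes the iroha Python package (iroha-python) and the admin@test keypair shipped with the example configuration, and method or field names may differ between library versions.

from iroha import Iroha, IrohaCrypto, IrohaGrpc

# Private key file from the example configuration folder (an assumption:
# adjust the path to wherever admin@test.priv lives on your machine)
admin_key = open('admin@test.priv').read().strip()
iroha = Iroha('admin@test')
net = IrohaGrpc('localhost:50051')

# Transfer 100.50 coolcoin from admin@test to test@test
tx = iroha.transaction([
    iroha.command(
        'TransferAsset',
        src_account_id='admin@test',
        dest_account_id='test@test',
        asset_id='coolcoin#test',
        description='quick start transfer',
        amount='100.50',
    ),
])
IrohaCrypto.sign_transaction(tx, admin_key)
net.send_tx(tx)
for status in net.tx_status_stream(tx):
    print(status)

# Query the assets of test@test, like the Get Account's Assets query above
query = iroha.query('GetAccountAssets', account_id='test@test')
IrohaCrypto.sign_query(query, admin_key)
response = net.send_query(query)
print(response.account_assets_response.account_assets)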
2.3.3. Being Badass¶
Let’s try being badass and cheat Iroha. For example, let’s transfer more
coolcoins than
admin@test has. Try to transfer 100000.00
coolcoins
from
admin@test to
test@test. Again, proceed to
1. New transaction
(tx),
5. Transfer Assets (tran_ast), enter Source Account and Destination
Account, in our case
admin@test and
test@test, Asset ID
(
coolcoin#test), integer part and precision (
10000000 and
2
accordingly). Send a transaction to Iroha peer as you did before. Well, it says
[2018-03-21 12:58:40.791297963][th:520][info] TransactionResponseHandler Transaction successfully sent
Congratulation, your transaction was accepted for processing.
Its hash is fc1c23f2de1b6fccbfe1166805e31697118b57d7bb5b1f583f2d96e78f60c241
Your transaction was accepted for processing. Does it mean that we
had successfully cheated Iroha? Let’s try to see transaction’s status. Choose
3. New transaction status request (st) and enter transaction’s hash which
you can get in the console after the previous command. Let’s send it to Iroha.
It replies with:
Transaction has not passed stateful validation.
Apparently no. Our transaction was not accepted because it did not pass
stateful validation and
coolcoins were not transferred. You can check
the status of
admin@test and
test@test with queries to be sure
(like we did earlier).
Package stream
var ErrInterrupted = fmt.Errorf("read interrupted by channel")
type ChanReader ¶
Implements the io.Reader interface for a chan []byte
type ChanReader struct { // contains filtered or unexported fields }
func NewChanReader ¶
func NewChanReader(input <-chan *StreamChunk) *ChanReader
func (*ChanReader) Read ¶
func (c *ChanReader) Read(out []byte) (int, error)
Read from the channel into `out`. This will block until data is available, and can be interrupted with a channel using `SetInterrupt()`. If the read was interrupted, `ErrInterrupted` will be returned.
func (*ChanReader) SetInterrupt ¶
func (c *ChanReader) SetInterrupt(interrupt <-chan struct{})
Specify a channel that can interrupt a read if it is blocking.
type ChanWriter ¶
Implements the io.WriteCloser interface for a chan []byte
type ChanWriter struct { // contains filtered or unexported fields }
func NewChanWriter ¶
func NewChanWriter(output chan<- *StreamChunk) *ChanWriter
func (*ChanWriter) Close ¶
func (c *ChanWriter) Close() error
Close the output channel
func (*ChanWriter) Write ¶
func (c *ChanWriter) Write(buf []byte) (int, error)
Write `buf` as a StreamChunk to the channel. The full buffer is always written, and error will always be nil. Calling `Write()` after closing the channel will panic.
type Direction ¶
type Direction uint8
const ( Upstream Direction = iota Downstream NumDirections )
type StreamChunk ¶
Stores a slice of bytes with its receive timestmap
type StreamChunk struct {
    Data      []byte
    Timestamp time.Time
}
Free Shipping Promotion
Free shipping can be offered as a promotion, either with, or without a coupon. A free shipping coupon, or voucher, can also be applied to customer pick-up orders, so the order can be invoiced and “shipped” to complete the workflow.
Some shipping carrier configurations (a carrier is a company that transports packages; common carriers include UPS, FedEx, DHL, and USPS) give you the ability to offer free shipping based on a minimum order. To expand upon this basic capability, you can use shopping cart price rules (a shopping cart is the grouping of products that the customer wishes to purchase at the end of their shopping session) to create complex conditions based on multiple product attributes, cart contents, and customer groups.
The Lighting module covers:
- V-Ray Light – The most commonly used settings of the V-Ray Light.
- V-Ray and 3ds Max Lights – How to use the Standard 3ds Max Lights with V-Ray.
- V-Ray Ambient Light – An overview of the settings of the V-Ray Ambient Light.
- V-Ray Dome Light – The workflow to generate Image Based Lighting with the V-Ray Dome Light.
- V-Ray IES Light – How light profiles and V-Ray’s IES light can create realistic lighting.
- V-Ray Sun and Sky System – Set up daytime illumination with V-Ray's Sun and Sky system.
Required alerts
An Admin user must define the alerts described in this article in order to complement the data offered by the Security Insights application. Many of the widgets in the application are based on these alerts and the data they represent depends directly on them. Not only they are essential for widgets, but also to notify you about critical problems or unexpected situations in your network. For example, an alert triggers when the number of threats grows drastically during the last 24 hours.
These alerts are based on firewall, proxy server and web server logs. Alerts based on firewall logs are mandatory, but alerts based on proxy and web logs have to be defined only if the customer is sharing proxy server and web server logs with Devo.
There are two different sets of alerts to be installed:
- Alert tier one - These are alerts directly based on firewall, web and proxy logs. The names of these alerts always start with SecInt.
- Alert tier two - These are alerts based on the alerts in tier one. The names of these alerts always start with SecIntMulti.
Some of the alerts include the
threatLevel parameter, which is an internal value used to define the default threat level for a specific type of alert.
How to define the alerts
This process must be repeated for each of the alerts included in the list of required alerts below. Go to Create a new alert to learn more about defining alerts in Devo.
- Go to Data Search → Free text Query and paste the code of the alert. In this example, we are defining the SecIntSeveralDNS alert. Select Run when you're done.
- You are taken to the query window, which displays the aggregated data defined in the LINQ query. Select New alert definition from the toolbar, then fill the Message, Description and Alert Name as specified in the article.
- Select Create.
Alert tier one
These are alerts based directly on the firewall.all.traffic, web.all.access and proxy.all.access tables, which gather all your firewall, web and proxy logs.
Firewall alerts
SecIntSeveralDNS
This alert checks the outbound traffic to port 53, which could correspond to DNS traffic. It triggers when a user (
srcIp) sends requests to more than 20 different destination servers. These are possibly DNS servers (internal cache, authoritative or root servers), but even if port 53 is being used for reasons other than DNS traffic, this is still a suspicious behavior that needs to be notified.
Message: Several DNS servers accessed
Description: This internal IP $srcIp has accessed $totalservers different DNS servers in the last hour
SecIntAnonymousNavigation
This alert checks the outbound traffic that has been hidden by Tor or other tools used to anonymize IP addresses.
Message: Anonymous connection detected
Description: IP $srcIp is connecting to IP $dstIp marked as anonymous proxy
SecIntBackdoorConnection
This alert checks the requests that try to get public IPs using backdoor ports. This alert triggers only when the connections are accepted by the firewall. The list of backdoor ports is stored in the lookup CheckBackdoorConnection, so it can be modified by any Admin user.
Message: Connection using backdoor port detected
Description: Accepted connection from $scrIp to the backdoor port $dstPort
SecIntThreatFraud
This alert checks the outbound traffic to IPs identified to be related to fraud by our system feeds. These feeds are updating constantly, and they are the result of gathering information from several public sources related to fraud.
Message: Suspicious connection to an IP related to fraud
Description: Detected connection from $srcIp to $dstIp using port $dstPort. IP $dstIp is marked as $ThreatFraud by our system
SecIntThreatMalware
This alert checks the outbound traffic to IPs identified to be related to malware by our system feeds. These feeds are updating constantly, and they are the result of gathering information from several public sources related to malware.
Message: Detected connection to suspicious IP related to malware
Description: Detected connection from $srcIp to $dstIp, marked as $ThreatMalware by our system
SecIntP2PConnection
This alert checks the connections to public IPs using peer to peer ports every 3 hours, according to the list stored in the Devo internal lookup Check P2PConnection. It also informs if the connection is accepted or denied.
Message: Peer to Peer (P2P) connection detected
Description: $action connection from $scrIp to a Peer to Peer (P2P) port
SecIntFirewallMisconfiguration
This alert checks the outbound traffic and controls the action parameter for each connection every one-day period. The alert triggers if there is any different action for the same connection. The action refers to the rule of the firewall, that can be either DENIED or ACCEPTED. A connection can only have a single action value. If a connection has different action values, it may indicate a possible firewall misconfiguration.
Message: Firewall misconfiguration
Description: Firewall $fwname misconfigured. Different rules for the same connection
SecIntPortScann
This alert is triggered when a single IP is trying to access too many different ports of a specific destination IP. When there are more than 100 requests in a 5 min period, the alert is triggered because it may indicate a port scanning attempt.
Message: Possible port scan
Description: IP $srcIpStr has tested $dstPortRound different ports for the same IP in the last 10 minutes
||sourceIP=$sourceIP,threatLevel=$threatLevel
SecIntError4xx
This alert is triggered when a single IP has caused several 4xx errors in the web server in the last 10 minutes period.
Message: Too many 4xx errors from the same IP
Description: The IP $sourceIP has caused $count 4xx errors in the last 10 minutes
Web alerts
SecIntUnusualHTTPMethods
This alert checks the
method parameter of each request to the web server. It is common to perform several requests when navigating a web page, but not so common to use different methods (POST, GET, HEAD...) in the requests. This alert triggers when one user uses more than 4 different methods every 1 hour.
Message: Suspicious behavior related to HTTP methods
Description: IP $srcIp used several HTTP methods
SecIntSeveralUserAgents
This alert checks the user agent of each request to the web server. IP addresses do not usually use several user agents. The alert triggers when a single IP uses more than 10 different user agents every 30 minutes.
Message: Several user agents
Description: IP $srcIp used $userAgentCount different user agents during the last day
SecIntPossibleWebShellFile
This alert checks if the URL contains a PHP WebShell file using a lookup. Webshell files are created based on OSINT.
Message: Possible suspicious WebShell files
Description: Detected connection from IP $srcIp trying to access a possible WebShell file $UrlLimpia
SecIntUnknownMethods
This alert is triggered when an unknown method is detected.
Message: Unknown methods
Description: IP $srcIp uses unknown method $method
SecIntNoRobotAskingRobotFiles
This alert is triggered when the URL contains robot files but the request doesn't come from a robot.
Message: No robot accesses robot files
Description: IP $srcIp is asking for $url that contains robot files, but the user agent is not associated with a robot.
SecIntPasswordFilesDiscover
This alert checks if the URL contains information that could be related to password file discover.
Message: Password files discovered
Description: Access to $url from IP $srcIp is marked as suspicious
SecIntDoS
This alert is triggered when a user tries to perform a denial-of-service (DoS) attack. Source IPs are checked every 10 minutes. If a user accesses more than 1000 different ports, it may indicate an attack and the alert is triggered.
Message: Possible DoS attack
Description: IP $sourceIP has accessed the server $count times in the last 10 minutes
||sourceIP=$sourceIP,threatLevel=$threatLevel
Proxy alerts
SecIntProxySeveralAccess
This alert triggers when a user accesses 60 or more different destination servers in one hour and the proxy doesn't deny it.
Message: User accessed many different hosts
Description: User $user has accessed $dstHostRound different hosts in the last hour
SecIntProxyUserBlocked
This alert triggers when a user is blocked by a proxy more than 90 times in the last hour.
Message: User blocked by proxy
Description: User $user has been blocked $count times by proxy in the last hour
Alert tier two
This set of alerts are based on the siem.logtrust.alerts.info table, where are the alerts triggered in Devo are stored.
All these alerts contain the
threatLevel parameter, an integer with values from 1-10, 1-3 meaning low level of threat, 4-7 medium and 8-10 high.
SecIntMultiDoS4xx
This alert checks two different alerts for the same user. There is a user (
sourceIP) that have caused one or several SecIntError4xx alerts and also one or several SecIntDoS alert in a period of 20 minutes. The
threatLevel parameter for this alert is high (8 in 10).
Message: Multi Alert: several DoS and 4xx alerts by the same IP
Description: The IP $sourceIP has caused both DoS and 4xx alerts ($count total alerts) in the last 20 minutes || sourceIP=$sourceIP,threatLevel=$threatLevel
SecIntMultiDDoS
This alert is triggered when a Distributed Denial of Service attack (DDoS) is detected. This alert gathers several DoS alerts, caused by different users (srcIP). As all the other alerts in Security Insights, it is possible to change the filter parameters.
Message: Multi Alert: possible DDoS attack
Description: $count different IPs have caused denial of service alerts in the last 20 minutes || threatLevel=$threatLevel
SecIntMultiSeveralAlerts
This alert is triggered when a user (srcIP) causes more than one alert in the last hour. This is an alert over an alert; this means that one alert acts as the condition (two alerts in this case). When the conditions are fulfilled, the new alert is triggered and also stored in the table siem.logtrust.alert.info
Message: IP has caused several different alerts in the same hour
Description: IP $alertSrcIp has caused $contextRound different alerts in the last hour
float OEDetermineRingComplexity(const OEChem::OEMolBase &mol)
Note
The molecule argument is const, so this function requires that the OEFindRingAtomsAndBonds perception activity has been performed prior to calling this function.
This is a non-SSSR adaptation of the ring complexity algorithm of [Gasteiger-1979]. The values of this complexity are generally somewhat lower, but are well-correlated and follow the spirit of the SSSR method.
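The signature shown above is from the C++/Java toolkits. As a sketch, the same call from Python looks roughly as follows; the module layout is assumed from the OpenEye Python toolkit naming and should be verified against your installed version.

from openeye import oechem, oemedchem

mol = oechem.OEGraphMol()
oechem.OESmilesToMol(mol, "C1CC2CCC1CC2")   # a small bicyclic example molecule
oechem.OEFindRingAtomsAndBonds(mol)         # ring perception required, per the note above
print(oemedchem.OEDetermineRingComplexity(mol))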
3rd party apps
Native data integrations for many popular apps. Import your historical business data in a few clicks.
Description | Supported functionalities
Cloud-based business management application, incorporates inventory, accounting, CRM, POS and ecommerce. Developed and run in Bristol, UK, with offices in San Francisco. | Catalog, Stock, Sales Orders, Purchase Orders
Multi-channel eCommerce software, helps your business work smarter, not harder. From channel integration to courier automation, the software offers powerful tools for ecommerce businesses. | Catalog, Stock, Sales Orders, Purchase Orders
Inventory and order management software, automates time consuming and error-prone business processes, giving merchants more time to focus on the things that matter. | Catalog, Stock, Sales Orders, Purchase Orders
Webgility offers QuickBooks integration software eCC to integrate your online shopping cart with QuickBooks. QuickBooks POS and QuickBooks enterprise solutions for small business accounting. | Catalog, Stock, Sales Orders, Purchase Orders
Stitch is an online inventory control solution that simplifies multichannel retail business. It automatically syncs inventory, orders and sales across channels, which provides retailers a holistic understanding of their operations. | Catalog, Stock, Sales Orders, Purchase Orders
Cloud-based point of sale provider written in HTML5. It is operated from any device or platform with a web-browser. | Catalog, Stock, Sales Orders, Purchase Orders
Web-based system designed for small to medium businesses to manage their stock levels and inventory. Handle stock movements from purchasing through to sales. | Catalog, Stock, Sales Orders, Purchase Orders
Web-based app designed for small businesses to run both their online store and their physical store. The app comes with a strong focus on ease of use. | Catalog, Stock, Sales Orders
Open source content management system for ecommerce websites. The depth of the platform offers the possibility to accommodate very large and very complex sites. | Catalog, Stock, Sales Orders
User-friendly, web-based inventory & warehouse management system for eCommerce retailers. Prevent out of stocks, improve warehouse efficiency, and reduce human error. | Catalog, Stock, Sales Orders, Purchase Orders
Extensive cloud business software suite encompassing ERP, financials, CRM and ecommerce. | Catalog, Stock, Sales Orders, Purchase Orders
Web-based inventory management and order fulfillment solutions with manufacturing and reporting features aimed at SMBs. | Catalog, Stock, Sales Orders, Purchase Orders
Open source web tracking analytics, now known as Matomo, keeps you in control of one of your most important assets: visitors actions on your websites. | Web tracking
Edit a Group Policy object from GPMC
Applies To: Windows Server 2003, Windows Server 2003 R2, Windows Server 2003 with SP1, Windows Server 2003 with SP2
To edit a Group Policy object
Open Group Policy Management.
In the console tree, double-click Group Policy Objects in the forest and domain containing the Group Policy object (GPO) that you want to edit.
Where?
- Forest name/Domains/Domain name/Group Policy Objects
Right-click the GPO, and then click Edit.
In the Group Policy Object Editor console, edit the settings as appropriate.
Important
Avoid editing the default domain policy. If you need to apply Group Policy settings to the entire domain, create a new GPO, link it to the domain, and create the settings in that GPO.
The default domain policy and default domain controllers policy are vital to the health of any domain. You should not edit the Default Domain Controller policy or the Default Domain policy, except in the following cases:
It is recommended that account policy be set in the Default Domain Policy.
If you install applications on domain controllers that require modifications to User Rights or Audit Policies, the modifications must be made in the Default Domain Controllers Policy.
Notes
To edit a GPO, you must have edit permissions for the GPO that you want to edit.
To edit IPSec policy settings from within a GPO, you must be a member of the domain administrators group. Otherwise, you can only assign or unassign an existing IPSec policy.
You can also edit a GPO by clicking any link to the GPO in the console tree and following the above steps. When using Group Policy Object Editor to edit the settings in the GPO, any changes that you make are global to the GPO, so they will impact all locations where the GPO is linked.
To edit the local Group Policy object, you must open it by clicking Start, clicking Run, typing gpedit.msc, and then clicking OK.
Plug-ins are composed of a standard set of components that expose the objects in the plugged-in technology to the Orchestrator platform.
The main components of a plug-in are the plug-in adapter, factory, and event implementations. You map the objects and operations defined in the adapter, factory, and event implementations to Orchestrator objects in an XML definition file named vso.xml. The vso.xml file maps objects and functions from the plugged in technology to JavaScript scripting objects that appear in the Orchestrator JavaScript API. The vso.xml file also maps object types from the plugged-in technology to finders, that appear in the Orchestrator Inventory tab.
Plug-ins are composed of the following components.
Plug-In Module
The plug-in itself, as defined by a set of Java classes, a vso.xml file, and packages of the workflows and actions that interact with the objects that you access through the plug-in. The plug-in module is mandatory.
Plug-In Adapter
Defines the interface between the plugged-in technology and the Orchestrator server. The adapter is the entry point of the plug-in to the Orchestrator platform. The adapter creates the plug-in factory, manages the loading and unloading of the plug-in, and manages the events that occur on the objects in the plugged-in technology. The plug-in adapter is mandatory.
Plug-In Factory
Defines how Orchestrator finds objects in the plugged-in technology and performs operations on them. The adapter creates a factory for the client session that opens between Orchestrator and a plugged-in technology. The factory allows you either to share a session between all client connections or to open one session per client connection. The plug-in factory is mandatory.
Configuration
Orchestrator does not define a standard way for the plug-in to store its configuration. You can store configuration information by using Windows Registries, static configuration files, storing information in a database, or in XML files. Orchestrator plug-ins can be configured by running configuration workflows in the Orchestrator client.
Finders
Interaction rules that define how Orchestrator locates and represents the objects in the plugged-in technology. Finders retrieve objects from the set of objects that the plugged-in technology exposes to Orchestrator. You define in the vso.xml file the relations between objects to allow you to navigate through the network of objects. Orchestrator represents the object model of the plugged-in technology in the Inventory tab. Finders are mandatory if you want to expose objects in the plugged-in technology to Orchestrator.
Scripting Objects
JavaScript object types that provide access to the objects, operations, and attributes in the plugged-in technology. Scripting objects define how Orchestrator accesses the object model of the plugged-in technology through JavaScript. You map the classes and methods of the plugged-in technology to JavaScript objects in the vso.xml file. You can access the JavaScript objects in the Orchestrator scripting API and integrate them into Orchestrator scripted tasks, actions, and workflows. Scripting objects are mandatory if you want to add scripting types, classes, and methods to the Orchestrator JavaScript API.
Inventory
Instances of objects in the plugged-in technology that Orchestrator locates by using finders appear in the Inventory view in the Orchestrator client. You can perform operations on the objects in the inventory by running workflows on them. The inventory is optional. You can create a plug-in that only adds scripting types and classes to the Orchestrator JavaScript API and does not expose any instances of objects in the inventory.
Events
Changes in the state of an object in the plugged-in technology. Orchestrator can listen passively for events that occur in the plugged-in technology. Orchestrator can also actively trigger events in the plugged-in technology. Events are optional.
You can answer to a waiting user interaction of a workflow run by using the Orchestrator REST API.
Before you begin
You can import a workflow by using the Orchestrator REST API.
Before you begin
The workflow binary content should be available as multi-part content. For details, see RFC 2387.
About this task
Depending on the library of your REST client application, you can use custom code that defines the properties of the workflow.
Procedure
- In a REST client application, add request headers to define the properties of the workflow that you want to import.
- Make a POST request at the URL of the workflow objects:
POST http://{orchestrator_host}:{port}/vco/api/workflows/
Results
If the POST request is successful, you receive the status code 202.
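For reference, here is a hedged sketch of the same import done from Python with the requests library. The host name, credentials, multipart part name ("file"), and certificate handling are assumptions rather than part of the documented procedure, so adjust them to your Orchestrator environment.

import requests

orchestrator = "https://orchestrator.example.com:8281"   # hypothetical host and port
with open("MyWorkflow.workflow", "rb") as fh:
    response = requests.post(
        orchestrator + "/vco/api/workflows/",
        auth=("vcoadmin", "vcoadmin"),                    # basic authentication, assumed
        files={"file": ("MyWorkflow.workflow", fh, "application/zip")},
        verify=False,                                     # only for lab setups with self-signed certificates
    )
print(response.status_code)   # 202 means the import request was accepted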
The Trend Micro Vulnerability Scanner console appears.
You cannot launch the tool from Terminal Server.
The Scheduled Scan screen appears.
This page provides information on toon outline effect in V-Ray for Maya. For Toon Cel Shading, see the VRayToonMtl page.
Overview
VRayToon is a very simple node that produces cartoon-style outlines on objects in the scene. This includes both simple, solid color shading but also outlined borders on the edges of the object(s).
The result of this rendering effect can now be stored in its own V-Ray Toon Render Element.
UI Path
Create menu > V-Ray > VRayToon
Basic Parameters
On – Enables or disables the VRayToon effect.
Toon Render Elements Only – When enabled, the Toon effect is visible in all Render Elements, except the RGB channel. To take effect, the VRayToon effect should be enabled.
Line Color – The color of the outlines.
Line Color Inner – The color of the inner edges' outlines.
Line Width – Specifies the width of the outlines in pixels. You can connect a texture map here to control the line width.
Line Width Inner – Specifies the width of the inner edges' outlines (in pixels). A texture map can be connected to this parameter.
Outer Overlap Threshold – Determines when outlines are created for overlapping parts of one and the same object. Lower values reduce the outer overlapping lines, while higher values produce more overlap outer lines.
Opacity Map – The opaqueness of the outlines. You can connect a texture map here to control the opacity.
Compensate Camera Exposure – When enabled, the VRayToon compensates the Line Color brightness to adjust for any Exposure correction from a VRayPhysicalCamera.
Material Edges – Renders lines on the material boundaries, when the mesh has multiple materials. This option requires that all materials have an assigned Material ID.
Hide Inner Edges – Determines whether lines are created for parts of the same object with varying surface normals (for example, at the inside edges of a box).
Normal Threshold – Determines at what point lines are created for parts of the same object with varying surface normals (for example, at the inside edges of a box). A value of 0.0 means that only 90 degrees or larger angles will generate internal lines. Higher values mean that smoother transitions between surface normals will also generate inner lines.
Do Reflection/Refraction – Causes the outlines to appear in reflections/refractions as well. Note that this may increase render times.
Trace Bias – Depending on the scale of your scene, this determines the ray bias when the outlines are traced in reflections/refractions.
Distortion Map – The texture that are used to distort the outlines. This works similar to bump-mapping and takes the gradient of the texture as direction for distortion. Note that high output values may be required for larger distortion. Screen-mapped textures work best, although World XYZ mapping is also supported.
Affected Objects – Specifies a list of objects which are affected by the VRayToon. The expected input is a set. The arrow button automatically creates and connects a new set, but any other set can be connected here instead.
As inclusive set – Controls the meaning of the Affected Objects list. When enabled, the Affected Objects list considered as an "Include list" and when disabled - as an "Exclude list."
Depth Curve
This rollout enables a curve for controlling the Line Width based on distance from the camera.
Depth Curve On – Enables a Depth Curve to specify the Line Width.
Min Depth/ Max Depth – Defines the minimum/maximum distance where the depth-based Line Width takes effect. Edges at points closer than the Min Depth are rendered with the Line Width at position 0. Edges further than the Max Depth are rendered with Line Width at position 1.
When the Depth Curve is disabled, the outline width will be constant (in pixels) for all affected objects at any distance from the camera. Enabling the curve control, allows you to specify how the width changes for objects close or away from the camera.
Angular Curve
This rollout enables a curve for controlling the Line Width depending on the view angle.
Angular Curve On – Enables the Angular Curve.
A point at position 0 means that the angle between the normal and the view vector is 0 degrees. A point at position 1 means that the angle between the normal and the view vector is 90 degrees.
Notes
- Right-clicking on the V-Ray Shelf button provides options for selecting any Toon nodes that exist in the Maya scene.
- Please note that rendering of older scenes (saved with versions prior to V-Ray Next, Beta 2) with the VRayToon effect will look differently in V-Ray Next, due to the different calculating approach of the line width.
- You can override the parameters of this node from the VRayToonMtl, provided it is assigned to the same object.
Stock Options
Your catalog can be configured to display the availability of each item as “In Stock” or “Out of Stock.” The configuration setting applies to the catalog as a whole, and the message changes according to the stock status of the product. There are several display variations possible, including how “out of stock” products are managed in the catalog and in product listings.
The out of stock threshold indicates when a product needs to be reordered, and can be set to any number greater than zero. Another way you can use the stock availability threshold is to manage products that are in high demand. If you want to capture new customers, rather than sell to high-quantity buyers, you can set a maximum quantity to prevent a single buyer from taking out your entire inventory.
To configure stock options:
If price alerts are enabled, customers can sign up to be notified when the product is back in stock.
The message begins to appear when the quantity in stock reaches the threshold. For example, if set to 3, the message “Only 3 left” appears when the quantity in stock reaches 3. The message adjusts to reflect the quantity in stock, until the quantity reaches zero.
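The threshold behavior described above boils down to a simple rule. The following sketch is illustrative pseudologic only, not Magento code, and simply mirrors the message behavior described in this section:

def availability_message(qty_in_stock, only_x_left_threshold):
    # Mirrors the storefront behavior described above
    if qty_in_stock <= 0:
        return "Out of Stock"
    if qty_in_stock <= only_x_left_threshold:
        return "Only {} left".format(qty_in_stock)
    return "In Stock"

print(availability_message(3, 3))    # "Only 3 left"
print(availability_message(10, 3))   # "In Stock"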
The
<finder> element represents in the Orchestrator client a type of object found through the plug-in.
The
<finder> element identifies the Java class that defines the object the object finder represents. The
<finder> element defines how the object appears in the Orchestrator client interface. It also identifies the scripting object that the Orchestrator scripting API defines to represent this object.
Finders act as an interface between object formats used by different types of plugged-in technologies.
The
<finder> element is optional. A plug-in can have an unlimited number of
<finder> elements. The
<finder> element defines the following attributes:
To ensure that agents stay protected from the latest security risks, update agent components regularly.
Before updating agents, check if their update source (OfficeScan server or a custom update source) has the latest components. For information on how to update the OfficeScan server, see OfficeScan Server Updates.
The following table lists all components that update sources deploy to agents and the components in use when using a particular scan method.
Do you mean you want Photo Mechanic to launch with the same contact sheet tabs open? You can do this in the General preferences. There's an option to restore the previous contact sheets on startup. Another thing you could try would be adding folders to your favorites so it's quicker to click on them. I hope that solves your problem.
Thank You...
Not exactly what I had in mind, but it works somewhat, as long as what I want is what I had the last time.
Multiple workspaces could be set up for various numbers of columns to quickly get into the mode needed for the day's work. I really find the multiple columns brilliant when putting a project together, but the needs change with each event.
Thom
Sure. We're always looking at ways to streamline the workflow. Right now we're pretty focused on getting Photo Mechanic Plus ready for release, but we may consider workspaces in the future.
I would like--No make that I NEED-- to open 2 different contact sheets side by side at the same time so that I can compare the two. If this feature is not available, please consider adding it. If it is, how do i make it happen? -
On Windows, drag the contact sheet tab down, and it will create a new column. On macOS, use File>New Window.
Thom Hayes
Still evaluating PM6 and finding more all the time! Having more than one column open in the Contact Sheet is brilliant - I've had as many as five - but having to set it up manually every time PM is launched is time-consuming and a pain!
I am looking for a way to save the Contact Sheet as a workspace. Capture One, DxO, even Photoshop have this available and it would seem natural for PM. Am I missing it somewhere?
Pachctl deploy storage amazon
pachctl deploy storage amazon¶
Deploy credentials for the Amazon S3 storage provider.
Synopsis¶
Deploy credentials for the Amazon S3 storage provider, so that Pachyderm can ingress data from and egress data to it.
pachctl deploy storage amazon <region> <access-key-id> <secret-access-key> [<session-token>] [flags]
Options¶
--block-cache-size string Size of pachd's in-memory cache for PFS files. Size is specified in bytes, with allowed SI suffixes (M, K, G, Mi, Ki, Gi, etc).
--cluster-deployment-id string Set an ID for the cluster deployment. Defaults to a random value.
--disable-ssl (rarely set) Disable SSL.
--dry-run --create-context Don't actually deploy pachyderm to Kubernetes, instead just print the manifest. Note that a pachyderm context will not be created, unless you also use --create-context.
-h, --help help for amazon
--max-upload-parts int (rarely set) Set a custom maximum number of upload parts. (default 10000)
--namespace string Kubernetes namespace to deploy Pachyderm to.
--new-storage-layer (feature flag) Do not set, used for testing.
--no-rbac Don't deploy RBAC roles for Pachyderm. (for k8s versions prior to 1.8)
--no-verify-ssl (rarely set) Skip SSL certificate verification (typically used for enabling self-signed certificates).
--part-size int (rarely set) Set a custom part size for object storage uploads. (default 5242880)
--put-file-concurrency-limit int The maximum number of files to upload or fetch from remote sources (HTTP, blob storage) using PutFile concurrently. (default 100)
--registry string The registry to pull images from.
--require-critical-servers-only Only require the critical Pachd servers to startup and run without errors.
--retries int (rarely set) Set a custom number of retries for object storage requests. (default 10)
--reverse (rarely set) Reverse object storage paths. (default true)
--shards int (rarely set) The maximum number of pachd nodes allowed in the cluster; increasing this number blindly can result in degraded performance. (default 16)
--static-etcd-volume string Deploy etcd as a ReplicationController with one pod. The pod uses the given persistent volume.
--timeout string (rarely set) Set a custom timeout for object storage requests. (default "5m")
--tls string string of the form "<cert path>,<key path>" of the signed TLS certificate and private key that Pachd should use for TLS authentication (enables TLS-encrypted communication with Pachd)
--upload-acl string (rarely set) Set a custom upload ACL for object storage uploads. (default "bucket-owner-full-control")
--upload-concurrency-limit int The maximum number of concurrent object storage uploads per Pachd instance. (default 100)
--worker-service-account string The Kubernetes service account for workers to use when creating S3 gateways. (default "pachyderm-worker")
Options inherited from parent commands¶
--no-color Turn off colors. -v, --verbose Output verbose logs
Last update: July 16, 2020 | https://docs.pachyderm.com/latest/reference/pachctl/pachctl_deploy_storage_amazon/ | 2020-10-23T22:23:40 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.pachyderm.com |
The analytics graph
The graph shows you how the completion rate and the number of unique users engaged changes over time. Select the time range that you want to analyze.
Unique targeted users
Targeted users are the users that have seen any Userlane element. This could be the Assistant, the welcome slide or a userlane.
Unique engaged users
Engaged users are the users that have clicked on a Userlane element.
Userlane plays
Userlane plays are the number of times any userlane was interacted with.
Average plays per engaged user
This tells you how many guides an engaged user interacted with.
Analytics per Userlane
Completion rate
The completion rate measures the percentage of users who finish an entire guide.
Good to know: There are valid reasons for exiting a guide:
- The user might start a guide for a process that he’s already familiar with and realizes that after a few steps.
- The user might have only searched for the right section and wants to explore the rest of the possibilities without guidance.
- The user might understand the trick behind a certain process after a couple of steps.
Users started / Users completed
The number of times the userlane was started or finished.
Detailed analytics for this userlane
Click on “Show detailed analytics for this userlane“ to see each step of that userlane and the number of users that finished or exited it. You can use these detailed analytics per step to get feedback for single steps with a high dropout rate.
Good to know
- Analytics are not static. Visit your analytics section every month to check in if significant changes occurred.
- Think of additional analytics and feedback on your side, i.e. customer surveys or else. If your users are happy with the training process, this qualitative factor should be considered in your evaluation. | https://docs.userlane.com/en/articles/2388594-understand-the-userlane-analytics | 2020-10-23T21:11:49 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.userlane.com |
Filter
Represents a user-defined filter for determining which statuses should not be shown to the user.
Example
{ "id": "8449", "phrase": "test", "context": [ "home", "notifications", "public", "thread" ], "whole_word": false, "expires_at": "2019-11-26T09:08:06.254Z", "irreversible": true }
Attributes
id
Description: The ID of the filter in the database.
Type: String (cast from an integer, but not guaranteed to be a number)
Version history: Added in 2.4.3
phrase
Description: The text to be filtered.
Type: String
Version history: Added in 2.4.3
context
Description: The contexts in which the filter should be applied.
Type: Array of String (Enumerable anyOf)
home = home timeline
notifications = notifications timeline
public = public timelines
thread = expanded thread of a detailed status
Version history: Added in 2.4.3
expires_at
Description: When the filter should no longer be applied
Type: String (ISO 8601 Datetime), or null if the filter does not expire
Version history: Added in 2.4.3
irreversible
Description: Should matching entities in home and notifications be dropped by the server?
Type: Boolean
Version history: Added in 2.4.3
whole_word
Description: Should the filter consider word boundaries?
Type: Boolean
Version history: Added in 2.4.3
Implementation notes
If
whole_word is true , client app should do:
- Define ‘word constituent character’ for your app. In the official implementation, it’s
[A-Za-z0-9_]in JavaScript, and
[[:word:]]in Ruby. Ruby uses the POSIX character class (Letter | Mark | Decimal_Number | Connector_Punctuation).
- If the phrase starts with a word character, and if the previous character before matched range is a word character, its matched range should be treated to not match.
- If the phrase ends with a word character, and if the next character after matched range is a word character, its matched range should be treated to not match.
Please check
app/javascript/mastodon/selectors/index.js and
app/lib/feed_manager.rb in the Mastodon source code for more details.
See alsofilters app/lib/feed_manager.rb app/javascript/mastodon/selectors/index.jsapp/serializers/rest/filter_serializer.rb
Last updated January 12, 2020 · Improve this page | https://docs.monado.ren/en/entities/filter/ | 2020-10-23T21:06:16 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.monado.ren |
Checks¶
Checking column properties¶
Check objects accept a function as a required argument, which is
expected to take a
pa.Series input and output a
boolean or a
Series
of boolean values. For the check to pass, all of the elements in the boolean
series must evaluate to
True, for example:
import pandera as pa check_lt_10 = pa.Check(lambda s: s <= 10) schema = pa.DataFrameSchema({"column1": pa.Column(pa.Int, check_lt_10)}) schema.validate(pd.DataFrame({"column1": range(10)}))
Multiple checks can be applied to a column:
schema = pa.DataFrameSchema({ "column2": pa.Column(pa.String, [ pa.Check(lambda s: s.str.startswith("value")), pa.Check(lambda s: s.str.split("_", expand=True).shape[1] == 2) ]), })
Built-in Checks¶
For common validation tasks, built-in checks are available in
pandera.
import pandera as pa from pandera import Column, Check, DataFrameSchema schema = DataFrameSchema({ "small_values": Column(pa.Float, Check.less_than(100)), "one_to_three": Column(pa.Int, Check.isin([1, 2, 3])), "phone_number": Column(pa.String, Check.str_matches(r'^[a-z0-9-]+$')), })
See the
Check API reference for a complete list of built-in checks.
Vectorized vs. Element-wise Checks¶
By default,
Check objects operate on
pd.Series
objects. If you want to make atomic checks for each element in the Column, then
you can provide the
element_wise=True keyword argument:
import pandas as pd import pandera as pa schema = pa.DataFrameSchema({ "a": pa.Column( pa.Int, checks=[ # a vectorized check that returns a bool pa.Check(lambda s: s.mean() > 5, element_wise=False), # a vectorized check that returns a boolean series pa.Check(lambda s: s > 0, element_wise=False), # an element-wise check that returns a bool pa.Check(lambda x: x > 0, element_wise=True), ] ), }) df = pd.DataFrame({"a": [4, 4, 5, 6, 6, 7, 8, 9]}) schema.validate(df)
element_wise == False by default so that you can take advantage of the
speed gains provided by the
pd.Series API by writing vectorized
checks.
Handling Null Values¶
By default,
pandera drops null values before passing the objects to
validate into the check function. For
Series objects null elements are
dropped (this also applies to columns), and for
DataFrame objects, rows
with any null value are dropped.
If you want to check the properties of a pandas data structure while preserving
null values, specify
Check(..., ignore_na=False) when defining a check.
Note that this is different from the
nullable argument in
Column
objects, which simply checks for null values in a column.
Column Check Groups¶
Column checks support grouping by a different column so that you
can make assertions about subsets of the column of interest. This
changes the function signature of the
Check function so that its
input is a dict where keys are the group names and values are subsets of the
series being validated.
Specifying
groupby as a column name, list of column names, or
callable changes the expected signature of the
Check
function argument to:
Callable[Dict[Any, pd.Series] -> Union[bool, pd.Series]
where the dict keys are the discrete keys in the
groupby columns.
In the example below we define a
DataFrameSchema with column checks
for
height_in_feet using a single column, multiple columns, and a more
complex groupby function that creates a new column
age_less_than_15 on the
fly.
import pandas as pd import pandera as pa schema = pa.DataFrameSchema({ "height_in_feet": pa.Column( pa.Float, [ # groupby as a single column pa.Check( lambda g: g[False].mean() > 6, groupby="age_less_than_20"), # define multiple groupby columns pa.Check( lambda g: g[(True, "F")].sum() == 9.1, groupby=["age_less_than_20", "sex"]), # groupby as a callable with signature: # (DataFrame) -> DataFrameGroupBy pa.Check( lambda g: g[(False, "M")].median() == 6.75, groupby=lambda df: ( df.assign(age_less_than_15=lambda d: d["age"] < 15) .groupby(["age_less_than_15", "sex"]))), ]), "age": pa.Column(pa.Int, pa.Check(lambda s: s > 0)), "age_less_than_20": pa.Column(pa.Bool), "sex": pa.Column(pa.String, pa.Check(lambda s: s.isin(["M", "F"]))) }) df = ( pd.DataFrame({ "height_in_feet": [6.5, 7, 6.1, 5.1, 4], "age": [25, 30, 21, 18, 13], "sex": ["M", "M", "F", "F", "F"] }) .assign(age_less_than_20=lambda x: x["age"] < 20) ) schema.validate(df)
Wide Checks¶
pandera is primarily designed to operate on long-form data (commonly known
as tidy data), where each row
is an observation and each column is an attribute associated with an
observation.
However,
pandera also supports checks on wide-form data to operate across
columns in a
DataFrame. For example, if you want to make assertions about
height across two groups, the tidy dataset and schema might look like this:
import pandas as pd import pandera as pa df = pd.DataFrame({ "height": [5.6, 6.4, 4.0, 7.1], "group": ["A", "B", "A", "B"], }) schema = pa.DataFrameSchema({ "height": pa.Column( pa.Float, pa.Check(lambda g: g["A"].mean() < g["B"].mean(), groupby="group") ), "group": pa.Column(pa.String) }) schema.validate(df)
Whereas the equivalent wide-form schema would look like this:
df = pd.DataFrame({ "height_A": [5.6, 4.0], "height_B": [6.4, 7.1], }) schema = pa.DataFrameSchema( columns={ "height_A": pa.Column(pa.Float), "height_B": pa.Column(pa.Float), }, # define checks at the DataFrameSchema-level checks=pa.Check( lambda df: df["height_A"].mean() < df["height_B"].mean() ) ) schema.validate(df)
You can see that when checks are supplied to the
DataFrameSchema
checks
key-word argument, the check function should expect a pandas
DataFrame and
should return a
bool, a
Series of booleans, or a
DataFrame of
boolean values.
Raise UserWarning on Check Failure¶
In some cases, you might want to raise a
UserWarning and continue execution
of your program. The
Check and
Hypothesis classes and their built-in
methods support the keyword argument
raise_warning, which is
False
by default. If set to
True, the check will raise a
UserWarning instead
of raising a
SchemaError exception.
Note
Use this feature carefully! If the check is for informational purposes and
not critical for data integrity then use
raise_warning=True. However,
if the assumptions expressed in a
Check are necessary conditions to
considering your data valid, do not set this option to true.
One scenario where you’d want to do this would be in a data pipeline that does some preprocessing, checks for normality in certain columns, and writes the resulting dataset to a table. In this case, you want to see if your normality assumptions are not fulfilled by certain columns, but you still want the resulting table for further analysis.
import warnings import numpy as np import pandas as pd import pandera as pa from scipy.stats import normaltest np.random.seed(1000) df = pd.DataFrame({ "var1": np.random.normal(loc=0, scale=1, size=1000), "var2": np.random.uniform(low=0, high=10, size=1000), }) normal_check = pa.Hypothesis( test=normaltest, samples="normal_variable", # null hypotheses: sample comes from a normal distribution. The # relationship function checks if we cannot reject the null hypothesis, # i.e. the p-value is greater or equal to alpha. relationship=lambda stat, pvalue, alpha=0.05: pvalue >= alpha, error="normality test", raise_warning=True, ) schema = pa.DataFrameSchema( columns={ "var1": pa.Column(checks=normal_check), "var2": pa.Column(checks=normal_check), } ) # catch and print warnings with warnings.catch_warnings(record=True) as caught_warnings: warnings.simplefilter("always") validated_df = schema(df) for warning in caught_warnings: print(warning.message)
<Schema Column: 'var2' type=None> failed series validator 0: <Check _hypothesis_check: normality test> | https://pandera.readthedocs.io/en/stable/checks.html | 2020-10-23T21:35:28 | CC-MAIN-2020-45 | 1603107865665.7 | [] | pandera.readthedocs.io |
Tutorial: Azure Active Directory single sign-on (SSO) integration with C3M Cloud Control
In this tutorial, you'll learn how to integrate C3M Cloud Control with Azure Active Directory (Azure AD). When you integrate C3M Cloud Control with Azure AD, you can:
- Control in Azure AD who has access to C3M Cloud Control.
- Enable your users to be automatically signed-in to C3M Cloud Control.
- C3M Cloud Control single sign-on (SSO) enabled subscription.
Scenario description
In this tutorial, you configure and test Azure AD SSO in a test environment.
- C3M Cloud Control supports SP initiated SSO
- C3M Cloud Control supports Just In Time user provisioning
- Once you configure C3M Cloud Control you can enforce session control, which protect exfiltration and infiltration of your organization’s sensitive data in real-time. Session control extend from Conditional Access. Learn how to enforce session control with Microsoft Cloud App Security.
Adding C3M Cloud Control from the gallery
To configure the integration of C3M Cloud Control into Azure AD, you need to add C3M Cloud Control C3M Cloud Control in the search box.
- Select C3M Cloud Control from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
Configure and test Azure AD SSO for C3M Cloud Control
Configure and test Azure AD SSO with C3M Cloud Control using a test user called B.Simon. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in C3M Cloud Control.
To configure and test Azure AD SSO with C3M Cloud Control, C3M Cloud Control SSO - to configure the single sign-on settings on application side.
- Create C3M Cloud Control test user - to have a counterpart of B.Simon in C3M Cloud Control that is linked to the Azure AD representation of user.
- Test SSO - to verify whether the configuration works.
Configure Azure AD SSO
Follow these steps to enable Azure AD SSO in the Azure portal.
In the Azure portal, on the C3M Cloud Control:
https://<C3MCLOUDCONTROL_ACCESS_URL>
b. In the Identifier (Entity ID) text box, type a URL using the following pattern:
https://<C3MCLOUDCONTROL_ACCESS_URL>/api/sso/saml
c. In the Reply URL text box, type a URL using the following pattern:
https://<C3MCLOUDCONTROL_ACCESS_URL>/api/sso/saml
Note
These values are not real. Update these values with the actual Sign on URL, Reply URL and Identifier. Contact C3M Cloud Control C3M Cloud Control C3M Cloud Control.
In the Azure portal, select Enterprise Applications, and then select All applications.
In the applications list, select C3M Cloud Control. C3M Cloud Control SSO
To configure SSO for C3M Cloud Control, please do follow the documentation.
Create C3M Cloud Control test user
In this section, a user called B.Simon is created in C3M Cloud Control. C3M Cloud Control supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in C3M Cloud Control, a new one is created after authentication.
Test SSO
In this section, you test your Azure AD single sign-on configuration using the Access Panel.
When you click the C3M Cloud Control tile in the Access Panel, you should be automatically signed in to the C3M Cloud Control C3M Cloud Control with Azure AD
What is session control in Microsoft Cloud App Security?
How to protect C3M Cloud Control with advanced visibility and controls | https://docs.microsoft.com/en-us/azure/active-directory/saas-apps/c3m-cloud-control-tutorial | 2020-10-23T23:11:56 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
As you browse a site, you can view a live stream of Events and Issues captured by Sboxr in the site you are browsing.
The items shown here auto-update and only the 100 most recent items from each category are shown at any point of time.
You have an option to pause the live update and then resume it.
This feature is useful to understand how the application behaves to your inputs in a more fine grained manner and identify issues in those behavior.
For example, if you are in a login page and submit your credentials then you can immediately see what actions the application performs to handle the authentication and session management. You can see if the credentials or other data are sent to some external party. You can see if any session tokens are stored on the client-side in IndexedDB or LocalStorage.
It can even be useful when you have analyzed an issue and are trying to validate it. For example, if Sboxr has reported that data from untrusted source is sent to the eval method. Then you can trying sending different payloads via the untrusted source and check the live events to see if any of the payloads succeed in reaching the eval, even if not in an exploitable manner. This would help you hone your payload.
High frequency events like WebSocket message and Cross-window message exchanges (also the data leaked through them) are not shown here. This is to reduce clutter. | https://docs.sboxr.com/view-live-event-stream | 2020-10-23T22:18:50 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.sboxr.com |
Welcome to Hub!: Hub > Framework
Activate your Theme Key ↑ Back to top
After you install your theme, be sure to activate your theme key by downloading and installing the WooCommerce Helper plugin. This verifies your site when contacting support and enables 2-click updates for quick theme upgrades down the road.
Updating your theme
It is important to ensure you have the latest version of your theme at all times. This is the best way to proactively troubleshoot issues on your site.
Adding your Logo ↑ Back to top
You have two options when adding a logo to your site. You can either use a text based Site Title and optional Tagline or you can use an image based logo.
To enable the text based Site Title and Tagline:
- Go to: Settings > General to enter your site title and tagline.
- Go to: Hub > Settings > General Settings > Quick Start and check the box to enable the text based site title and tagline.
- Optionally Enable the site description and adjust the typography settings.
- Save All Changes.
To upload your own image based logo:
- Go to: Hub > Settings > General Settings > Quick Start > Custom Logo.
- Upload your logo image – we recommend using either a .jpg or .png for this.
- Save All Changes.
Configure the homepage ↑ Back to top
Homepage setup ↑ Back to top
As of Hub version 1.2.0+ in order to customize the display order of the homepage components, you must first download and install the Homepage Control plugin.
With the Homepage Control plugin you can enable / disable the different content components:
- Introductory Message
- Popular Posts
- Testimonials (requires Testimonials plugin)
- Recent Posts
- Our Team (requires Our Team plugin)
Each of these components can then be configured further:
Intro section ↑ Back to top
To configure the introductory message on the homepage go to: Hub > Settings > Homepage > Introductory Message. Here you can add the heading, the message, the button label and the button destination.
Popular Posts ↑ Back to top
Popular posts are defined by the number of comments a post has.
Go to: Hub > Settings > Homepage > Popular Posts to configure the section title, byline, period of time and number of posts.
The period of time specifies how far to look into the past for popular posts.
Testimonials ↑ Back to top
To learn how to setup Testimonials please see the documentation here: Testimonials Documentation
To configure the Homepage Testimonials settings go to: Hub > Settings > Homepage > Testimonials. Here you can specify a title, byline, optional background image and number of testimonials to display. The number you set to display will display the ‘x’ most recently published testimonials.
Recent Posts ↑ Back to top
To configure the Homepage Recent Posts settings go to: Hub > Settings > Homepage > Recent Posts. Here you can specify a title, byline and the number of posts to display. Posts will display ‘x’ most recently published posts.
Our Team ↑ Back to top
To learn how to setup Our Team plugin please see the documentation here: Our Team Documentation
To configure the Homepage Our Team settings go to: Hub > Settings > Homepage > Our Team. Here you can specify a title, byline and the number of team members to display.
WooCommerce Theme Options ↑ Back to top
If you would like to setup a WooCommerce shop, you must first download and install WooCommerce.
After installing, to configure the WooCommerce Theme Options go to: Hub > Settings > WooCommerce to configure the following options:
- Upload a Custom Placeholder to be displayed when there is no product image.
- Header Cart Link to be displayed in the main navigation.
- Shop Archive Full Width – Specify whether to display product archives full width
- Product Details Full Width – Specify whether to display all product details pages full width, removing the sidebar.
Business Page Template ↑ Back to top
Hub: Hub >: Hub > Settings > Layout >_5<<
Image Dimensions ↑ Back to top
Here are the ideal image dimension to use for Hub. blog image suggested minimum width: 825px
- Testimonials Homepage Background image suggested minimum width: 1600px
- Our Team featured image suggested minimum width: 355px
- WooSlider Business Slider suggested minimum width: 1200px – height will scale to fit
Featured Blog Images ↑ Back to top
To set the Featured Blog Image size for Thumbnails and the Single Post image go to: Hub > Settings > Dynamic Images > Thumbnail Settings.
If you would like to have your featured blog images the same as the Hub demo go to: Hub > Settings > Dynamic Images > Thumbnail Settings to enter the following Thumbnail Settings:
- Thumbnail Image Dimensions: 825px x 350px
- Single Post – Thumbnail Dimensions: 825px x 350px
To learn more about Featured Images please see our tutorial here: Featured Images_6<<
- Catalog Images: 388px x 388px
- Single Product Images: 365px x 365px
- Product Thumbnails: 112px x 112px
To learn more about WooCommerce product images please see further documentation here: Adding Product Images and Galleries and here Using the Appropriate Product Image Dimensions
Subscribe & Connect ↑ Back to top
The Subscribe & Connect functionality for the Hub theme can be used on single post pages, with the Subscribe & Connect widget, as well as a special Subscribe & Connect area above the footer region.
To add social media icons to your single posts page go to: Hub > Settings > Subscribe & Connect > Setup and select Enable Subscribe & Connect – Single Post.
To setup Subscribe & Connect go to:
- Subscribe & Connect > Connect Settings to link the icons to each social media page.
- Subscribe & Connect > Setup to enable the related posts in the Subscribe & Connect box (example below).
- Subscribe & Connect > Subscribe Settings to setup the email subscription form.
- Hub > Theme Options >: Hub > Settings > Contact Page to enter the Contact Form Email address.
- From here you can also enable the information panel (see below), and enable the Subscribe & Connect panel to display your social icons.
- Cordinates, search for your location on a Google Map, right click the pin and select “What’s Here”. This will input the Google Coordinates in the search box.
- Optionally disable mousescroll Hub Widgets ↑ Back to top
Hub | https://docs.woocommerce.com/document/hub/ | 2020-10-23T22:25:28 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['http://docs.woocommerce.com/wp-content/uploads/2013/12/Screen-Shot-2013-12-19-at-09.56.06-950x311.png',
'Screen Shot 2013-12-19 at 09.56.06'], dtype=object)
array(['https://docs.woocommerce.com/wp-content/uploads/2013/12/hub-1.png?w=950',
None], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/12/Screen-Shot-2013-12-19-at-10.19.08-950x409.png',
'Screen Shot 2013-12-19 at 10.19.08'], dtype=object)
array(['https://docs.woocommerce.com/wp-content/uploads/2013/12/hub-2.png?w=950',
None], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/12/Screen-Shot-2013-12-19-at-10.27.08-950x296.png',
'Screen Shot 2013-12-19 at 10.27.08'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/03/ImageUploader-AttachedtoPost-950x237.png',
'ImageUploader-AttachedtoPost'], dtype=object)
array(['http://docs.woocommerce.com/wp-content/uploads/2013/12/Hub-WooCommerce-Image-Settings.png',
'Hub-WooCommerce-Image-Settings'], dtype=object) ] | docs.woocommerce.com |
3D Cameras#
3D image processing is used in particular to determine volumes, shapes, or the positioning of objects. The depth information generated can also be used in defect detection tasks where there is not enough contrast for 2D camera images but where the objects show a recognizable difference in height.
For general information about time-of-flight, see the 3D Camera Technology section. | https://docs.baslerweb.com/3d-cameras | 2020-10-23T22:04:27 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.baslerweb.com |
Automated Dependency Updates for Helmv3
Renovate supports updating Helmv3 dependencies.
File Matching
By default, Renovate will check any files matching the following regular expression:
(^|/)Chart.yaml$.
For details on how to extend a manager's
fileMatch value, please follow this link.
Additional Information
Renovate supports updating Helm Chart references within
requirements.yaml (Helm v2) and
Chart.yaml (Helm v3) files.
If your Helm charts make use of Aliases then you will need to configure an
aliases object in your config to tell Renovate where to look for them. | https://docs.renovatebot.com/modules/manager/helmv3/ | 2020-10-23T20:51:56 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.renovatebot.com |
In some cases, you might want to exclude certain Assets in your Project from publishing to the cloud. Collaborate uses a gitignore file to exclude files from publish. To exclude Assets from publish, add them to the provided .collabignore file in the root of your Project folder. This lists the files and folders to exclude during a publish to Collaborate.
To add your own exclude rules to the .collabignore file:
Note: For local edits to the .collabignore file to take effect, you must restart the Unity Editor.
Note: If you exclude a file that is already tracked in Collaborate, its existing file history is preserved.
There are Project files and folders that you can never exclude from Collaborate using the .collabignore file. These are:
Setting up Unity Collaborate | https://docs.unity3d.com/ja/2018.1/Manual/UnityCollaborateIgnoreFiles.html | 2020-10-23T21:44:58 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.unity3d.com |
TOPICS×
Export to CSV
Create CSV Report enables you to export information about your pages to a CSV file on your local system.
- The file downloaded is called export.csv
- The contents are dependent on the properties you select.
- You can define the path together with the depth of the export.
The download feature and default destination of your browser is used.
The Create CSV Export wizard allows you to select:
- Properties to export
- Metadata
- Name
- Modified
- Published
- Template
- Workflow
- Translation
- Translated
- Analytics
- Page Views
- Unique Visitors
- Time on Page
- Depth
- Parent Path
- Direct children only
- Additional levels of children
- Levels
The resulting export.csv file can be opened in Excel or any other compatible application.
The create CSV Report option is available when browsing the Sites console (in List view): it is an option of the Create drop down menu:
To create a CSV export:
- Open the Sites console, navigate to the required location if required.
- From the toolbar, select Create then CSV Report to open the wizard:
- Select the required properties to export.
- Select Create . | https://docs.adobe.com/content/help/en/experience-manager-65/authoring/authoring/csv-export.html | 2020-10-23T22:48:34 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['/content/dam/help/experience-manager-65.en/help/sites-authoring/assets/etc-01.png',
None], dtype=object)
array(['/content/dam/help/experience-manager-65.en/help/sites-authoring/assets/etc-02.png',
None], dtype=object) ] | docs.adobe.com |
Custom ASP.NET Core Elastic Beanstalk Deployments
This topic describes how deployment works and what you can do customize deployments when creating ASP.NET Core applications with Elastic Beanstalk and the Toolkit for Visual Studio.
After you complete the deployment wizard in the Toolkit for Visual Studio, the toolkit
bundles
the application and sends it to Elastic Beanstalk. Your first step in creating the
application bundle is
to use the new dotnet CLI to prepare the application for publishing by using the publish command.
The framework and configuration are passed down from the settings in the wizard to
the publish
command. So if you selected Release for
configuration and netcoreapp1.0 for the
framework, the toolkit will execute the following command:
dotnet publish --configuration Release --framework netcoreapp1.0
When the publish command finishes, the toolkit writes the new deployment manifest into the publishing folder. The deployment manifest is a JSON file named aws-windows-deployment-manifest.json, which the Elastic Beanstalk Windows container (version 1.2 or later) reads to determine how to deploy the application. For example, for an ASP.NET Core application you want to be deploy at the root of IIS, the toolkit generates a manifest file that looks like this:
{ "manifestVersion": 1, "deployments": { "aspNetCoreWeb": [ { "name": "app", "parameters": { "appBundle": ".", "iisPath": "/", "iisWebSite": "Default Web Site" } } ] } }
The
appBundle property indicates where the application bits are in relation to the manifest
file. This property can point to either a directory or a ZIP archive. The
iisPath and
iisWebSite properties indicate where in IIS to host the application.
Customizing the Manifest
The toolkit only writes the manifest file if one doesn't already exist in the publishing
folder. If
the file does exist, the toolkit updates the
appBundle,
iisPath and
iisWebSite properties in the first application listed under the
aspNetCoreWeb
section of the manifest. This allows you to add the aws-windows-deployment-manifest.json to your
project and customize the manifest. To do this for an ASP.NET Core Web application
in Visual Studio
add a new JSON file to the root of the project and name it aws-windows-deployment-manifest.json.
The manifest must be named aws-windows-deployment-manifest.json and it must be at the root of the project. The Elastic Beanstalk container looks for the manifest in the root and if it finds it will invoke the deployment tooling. If the file doesn't exist, the Elastic Beanstalk container falls back to the older deployment tooling, which assumes the archive is an msdeploy archive.
To ensure the dotnet CLI
publish command includes the manifest, update the
project.json
file to include the manifest file in the include section under
include in
publishOptions.
{ "publishOptions": { "include": [ "wwwroot", "Views", "Areas/**/Views", "appsettings.json", "web.config", "aws-windows-deployment-manifest.json" ] } }
Now that you've declared the manifest so that it's included in the app bundle, you can further configure how you want to deploy the application. You can customize deployment beyond what the deployment wizard supports. AWS has defined a JSON schema for the aws-windows-deployment-manifest.json file, and when you installed the Toolkit for Visual Studio, the setup registered the URL for the schema.
When you open
windows-deployment-manifest.json, you'll see the schema URL selected in the
Schema drop down box. You can navigate to the URL to get a full description of what
can be set in the
manifest. With the schema selected, Visual Studio will provide IntelliSense while
you're editing the
manifest.
One customization you can do is to configure the IIS application pool under
which the application will run. The following example shows how you can define an
IIS Application
pool ("customPool") that recycles the process every 60 minutes, and assigns it to
the application
using
"appPool": "customPool".
{ "manifestVersion": 1, "iisConfig": { "appPools": [ { "name": "customPool", "recycling": { "regularTimeInterval": 60 } } ] }, "deployments": { "aspNetCoreWeb": [ { "name": "app", "parameters": { "appPool": "customPool" } } ] } }
Additionally, the manifest can declare Windows PowerShell scripts to run before and
after the install,
restart and uninstall actions. For example, the following manifest runs the Windows
PowerShell script
PostInstallSetup.ps1 to do further setup work after the ASP.NET Core application is
deployed to IIS. When adding scripts like this, make sure the scripts are added to
the include
section under publishOptions in the
project.json file, just as you did with the
aws-windows-deployment-manifest.json file. If you don't, the scripts won't be included as
part of the dotnet CLI publish command.
{ "manifestVersion": 1, "deployments": { "aspNetCoreWeb": [ { "name": "app", "scripts": { "postInstall": { "file": "SetupScripts/PostInstallSetup.ps1" } } } ] } }
What about .ebextensions?
The Elastic Beanstalk .ebextensions configuration files are supported as with all the other
Elastic Beanstalk containers. To include .ebextensions in an ASP.NET Core application,
add the
.ebextensions directory to the
include section under
publishOptions in the
project.json file. For further information about .ebextensions checkout the
Elastic Beanstalk Developer Guide. | https://docs.aws.amazon.com/toolkit-for-visual-studio/latest/user-guide/deployment-beanstalk-custom-netcore.html | 2020-10-23T22:23:05 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.aws.amazon.com |
Security Administration Guide
- About This Book
- About InterSystems Security
- Authentication: Establishing Identity
- Authorization: Controlling User Access
- Auditing: Knowing What Happened
- Managed Key Encryption: Protecting Data on Disk
- Managing Security with the Management Portal
- Notes on Technology, Policy, and Action
- Authentication
- Authentication Basics
- About the Different Authentication Mechanisms
- About the Different Access Modes
- Configuring Kerberos Authentication
- Configuring Operating-System–Based Authentication
- Configuring Instance Authentication
- Configuring Two-Factor Authentication
- Other Topics
- Assets and Resources
- Authorization, Assets, and Resources
- System Resources
- Database Resources
- Application Resources
- Creating or Editing a Resource
- Using Custom Resources with the Management Portal
- Privileges and Permissions
- How Privileges Work
- Public Permissions
- Checking Privileges
- Using Methods with Built-In Privilege Checks
- When Changes in Privileges Take Effect
- InterSystems IRIS Configuration Information
- Managing InterSystems IRIS Security Domains
- Security Advisor
- Effect of Changes
- Emergency Access
- Using TLS with InterSystems IRIS
- About InterSystems IRIS Support for TLS
- About TLS
- About Configurations
- Configuring the InterSystems IRIS Superserver to Use TLS
- Configuring InterSystems IRIS Telnet to Use TLS
- Configuring Java Clients to Use TLS with InterSystems IRIS
- Configuring .NET Clients to Use TLS with InterSystems IRIS
- Configuring Studio to Use TLS with InterSystems IRIS
- Connecting from a Windows Client Using a Settings File
- Configuring InterSystems IRIS to Use TLS with Mirroring
- Configuring InterSystems IRIS to Use TLS with TCP Devices
- Configuring the Web Gateway to Connect to InterSystems IRIS Using InterSystems IRIS®
- Configuring LDAP Authentication for InterSystems IRIS
- Configuring LDAP Authorization for InterSystems IRIS
- Other LDAP Topics
- Using Delegated Authorization
- Overview of Delegated Authorization
- Creating Delegated (User-defined) Authorization Code
- Configuring an Instance to Use Delegated Authorization
- After Authorization — The State of the System
- Tightening Security for an Instance
- Enabling Auditing
- Changing the Authentication Mechanism for an Application
- Limiting the Number of Public Resources
- Restricting Access to Services
- Limiting the Number of Privileged Users
- Disabling the _SYSTEM User
- Restricting Access for UnknownUser
- Configuring Third-Party Software
- Performing Encryption Management Operations
- About Encryption Management Operations
- Converting an Unencrypted Database to be Encrypted
- Converting an Encrypted Database to be Unencrypted
- Converting an Encrypted Database to Use a New Key
- Relevant Cryptographic Standards and RFCs
- About PKI (Public Key Infrastructure)
- The Underlying Need
- About Public-Key Cryptography
- Authentication, Certificates, and Certificate Authorities
- How the CA Creates a Certificate
- Limitations on Certificates: Expiration and Revocation
- Recapping PKI Functionality
- Using Character-based Security Management Routines | https://docs.intersystems.com/irislatest/csp/docbook/Doc.View.cls?KEY=GCAS | 2020-10-23T21:17:42 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.intersystems.com |
vRealize Network Insight allows you to add Infoblox Grid as a DNS data provider.
Infoblox DNS offers an advanced solution to manage and control DNS. It uses Infoblox Grid to ensure that the DNS is highly available throughout the network. The DNS data from Infoblox is used only for enriching the flows where either the source or the destination IP addresses are associated with the physical devices.
The Infoblox DNS data co-exists with the DNS data that is imported by using CSV.
If you configure an Infoblox DNS data source on a collector, you can configure other data sources also on the same collector. You do not need a dedicated collector for Infoblox.
Considerations
- vRealize Network Insight supports only single-grid mode for Infoblox in the current release.
- Only A Records are supported in the current release. Shared A Records are not supported currently.
- The DNS enrichment is supported only for the IP addresses that are marked as physical in the current release.
- If there are multiple FQDNs for a single physical IP address, all FQDNs are returned.
Procedure
- On the Settings page, click Accounts and Data Sources.
- Click Add new source.
- Click Infoblox under DNS.
- Provide the following information:
- Click Validate.Note: Ensure that you have the
API Privilegeto access the Infloblox APIs.
- Enter Nickname and Notes (if any) for the data source and click Submit to add the Infoblox DNS data source to the environment. | https://docs.vmware.com/en/VMware-vRealize-Network-Insight-Cloud/services/com.vmware.vrni.using.doc/GUID-A5CF3440-DE33-415B-A93B-6E97F8ED0DBE.html | 2020-10-23T21:38:43 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
In the Grid presentation style report, the AutoWidth property allows you to choose the option to automatically compute the width of a column. The AutoWidth property takes one of these numeric values:
0 - No AutoWidth: This is the default value.
1 - AutoWidth is computed for visible rows (monotonic) and does not decrease when the widest column is reduced when scrolling.
2 - AutoWidth is computed for visible rows (non-monotonic)
3 - AutoWidth is computed for all retrieved rows.
You can set the AutoWidth property:
In the painter - in the Properties view, select one of the values in the drop-down list for the AutoWidth property.
In scripts - set the AutoWidth property to one of the numeric values. | https://docs.appeon.com/pb2019r2/upgrading_pb_apps/ch08s06.html | 2020-10-23T21:52:29 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.appeon.com |
View your data profile.
After you upload a CSV file, it is available as a table in ThoughtSpot. Click Data in the top navigation bar and select your table. Then click Profile.
The data profile includes null values, min, max, average, and sum information for each table column. This Profile view should help you get a better sense of what’s there before searching on the data. | https://docs.thoughtspot.com/6.1/admin/loading/view-your-data-profile.html | 2020-10-23T21:21:35 | CC-MAIN-2020-45 | 1603107865665.7 | [array(['/6.1/images/data_profile.png', None], dtype=object)] | docs.thoughtspot.com |
Deprecation: #81460 - Deprecate getByTag() on cache frontends¶
See Issue #81460
Description¶
The method
getByTag($tag) on
TYPO3\CMS\Core\Cache\Frontend\FrontendInterface and all implementations have been
deprecated with no alternative planned. This is done because the concept of cache tags were originally designed for
invalidation purposes, not for identification and retrieval.
Cache frontends still support the much more efficient
flushByTag and
flushByTags methods to perform invalidation
by tag, rather than use the deprecated method to retrieve a list of identifiers and removing each.
Impact¶
Calling this method on any TYPO3 provided cache frontend implementations triggers a deprecation log entry, with the
exception of
StringFrontend which has itself been deprecated in a separate patch.
Affected Installations¶
Avoid usage of the method - if necessary, use the same cache to store a list of identifiers for each tag.
Migration¶
Where possible, switch to
flushByTag or
flushByTags. In cases where you depend on getting identifiers by tag,
reconsider your business logic - and if necessary, keep track of which identifiers use a given tag, using a separate
list that you for example store in the cache alongside the usual cached entries. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/9.0/Deprecation-81460-DeprecateGetByTagOnCacheFrontends.html | 2020-10-23T22:22:04 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.typo3.org |
"WinRM client cannot process the request" error when you connect to Exchange Online through remote Windows PowerShell
Problem
When you try to use remote Windows PowerShell to connect to Microsoft Exchange Online in Microsoft Office 365, you receive the following error message:
[outlook.office365.com] Connecting to remote server failed with the following error message: The "WinRM client cannot process the request because the server name cannot be resolved. For more information, see the about_Remote_Troubleshooting Help topic. + CategoryInfo : OpenError: (System.Manageme....RemoteRunspace:RemoteRunspace) []. PSRemotingTransportException + FullyQualifiedErrorId : PSSessionOpenedFailed
Cause
This issue occurs if either an internal firewall or the Windows Remote Management service has not been started.
Solution
To resolve this issue, check whether the Windows Remote Management service is installed and has started. To do this, follow these steps:
Do one of the following:
- In Windows 8, press the Windows logo key+R to open the Run dialog box, type services.msc, and then press Enter.
- In Windows 7 or Windows Vista, click Start, type services.mscin the **Start search **field, and then press Enter.
- In Windows XP, click Start, click Run, type services.msc, and then press Enter.
In the Services window, double-click Windows Remote Management.
Set the startup type to Manual, and then click OK.
Right-click the service, and then click Start.
Let the service start.
Note
If the service was already started but it's not responding, you may have to click Restart.
Try to connect to Exchange Online again.
More information
For more information about how to connect to Exchange Online by using remote PowerShell, go to Connect to Exchange Online using Remote PowerShell.
Still need help? Go to Microsoft Community. | https://docs.microsoft.com/en-us/exchange/troubleshoot/administration/winrm-cannot-process-request | 2020-10-23T21:51:04 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.microsoft.com |
Important: #68079 - Extension “mediace” moved to TER¶
See Issue #68079
Description¶
The previously available “mediace” extension has been moved to the TYPO3 Extension Repository (TER) and will be managed on GitHub ().
An upgrade wizard in the Install Tool will check if the extension is needed. If so, it is downloaded from the TER and installed if necessary. | https://docs.typo3.org/c/typo3/cms-core/master/en-us/Changelog/7.6/Important-68079-ExtensionMediaceMovedToTER.html | 2020-10-23T22:23:38 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.typo3.org |
tx_solr.search¶
The search section, you probably already guessed it, provides configuration options for all things related to actually searching the index: setting query parameters, formatting and processing result documents, and the result listing.
targetPage¶
Sets the target page ID for links. If it is empty or 0, the current page ID will be used.
Note: This setting can be overwritten by the plugin's FlexForm.
initializeWithEmptyQuery¶
If enabled, the results plugin (pi_results) issues a "get everything" query during initialization. This is useful if you want to create a page that shows all available facets although no search has been issued by the user yet. Note: Enabling this option alone will not show results of the "get everything" query. To also show the results of the query, see the option showResultsOfInitialEmptyQuery below.
showResultsOfInitialEmptyQuery¶
Requires initializeWithEmptyQuery (above) to be enabled to have any effect. If enabled together with initializeWithEmptyQuery the results of the initial “get everything” query are shown. This way, in combination with a filter you can easily list a predefined set of results.
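Example (a minimal sketch combining both switches so that all documents are listed before the visitor has entered a search term):

plugin.tx_solr.search {
  initializeWithEmptyQuery = 1
  showResultsOfInitialEmptyQuery = 1
}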
keepExistingParametersForNewSearches¶
When doing a new search, existing parameters like filters will be carried over to the new search. This is useful for a scenario where you want to list all available documents first, then allow users to filter the documents using facets, and finally let them specify a search term to refine the search.
ignoreGlobalQParameter¶
In some cases you want EXT:solr to react to the parameter "q" in the URL. Normally plugins are bound to a namespace to allow multiple instances of the search on the same page. In this case you might want to disable this and let EXT:solr react only to the namespaced query parameter (tx_solr[q] by default).
additionalPersistentArgumentNames¶
Comma-separated list of additional argument names that should be added to the persistent arguments that are kept for sub requests, like the facet and sorting URLs. Hard-coded argument names are q, filter and sort.
Until EXT:solr version 6.5.x all parameters of the plugin namespace were added to the URL again. With this setting you can enable this behavior again, but only with a whitelist of argument names.
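Example (the argument names are only placeholders; list whatever additional arguments of your setup should survive facet and sorting links):

plugin.tx_solr.search.additionalPersistentArgumentNames = customArgumentOne, customArgumentTwo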
query¶
The query sub-section defines a few query parameters for the query that will be sent to the Solr server later on. Some query parameters are also generated and set by the extension itself, f.e. when using facets.
query.allowedSites¶
When indexing documents (pages, records, files, …) into the Solr index, the solr extension adds a “siteHash”. The siteHash is used to allow indexing multiple sites into one index and still have each site only find its own documents. This is achieved by adding a filter on the siteHash.
Sometimes though, you want to search across multiple domains; then the siteHash is a blocker. Using the allowedSites setting you can set a comma-separated list of domains whose documents are allowed to be included in the current domain's search results. The default value is __solr_current_site, which is a magic string/variable that is replaced with the current site's domain when querying the Solr server.
Version 3.0 introduced a couple more magic keywords that get replaced:
- __current_site same as __solr_current_site
- __all Adds all domains as allowed sites
- * (asterisk character) Everything is allowed as siteHash (same as no siteHash check). This option should only be used when you need a search across multiple system and you know the impact of turning of the siteHash check.
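Example (www.example.com is a placeholder domain; the current site's documents stay included via the magic keyword):

plugin.tx_solr.search.query.allowedSites = __solr_current_site, www.example.com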
query.getParameter¶
The GET query parameter name used in URLs. Useful for cases f.e. when a website tracking tool does not support the default array GET parameters.
The option expects a string; you can also define an array in the form of arrayName|arrayKey.
Example:
plugin.tx_solr.search.query.getParameter = q
query.queryFields (query.fields)¶
Defines what fields to search in the index. Fields are defined as a comma-separated list. Each field can be given a boost by appending the boost value separated by the ^ character; that is Lucene query syntax. The boost value itself is a float value, so make sure to use a dot as the decimal separator. Use this option to add more fields to search.
The boost influences what score a document gets when searching and thus how documents are ranked and listed in the search results. A higher score will move documents up in the result listing. The boost is a multiplier for the original score value of a document for a search term.
By default, if a search term is found in the content field the document gets scored / ranked higher than if the term was found in the title or keywords field. Although the default should provide a good setting, you can play around with the boost values to find the best ranking for your content.
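Example (the field names exist in the default schema, the boost values are only illustrative):

plugin.tx_solr.search.query.queryFields = title^5.0, keywords^2.0, content^1.0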
query.returnFields¶
Limits the fields returned in the result documents, by default returns all field plus the virtual score field.
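Example (this reproduces the default behaviour of returning all fields plus the virtual score field):

plugin.tx_solr.search.query.returnFields = *, score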
query.minimumMatch¶
Sets the minimum match mm query parameter. By default the mm query parameter is set in solrconfig.xml as 2<-35%. This means that for queries with fewer than three words, all of them must match the searched fields of a document. For queries with three or more words, at least 65% of them (rounded up) must match.
Please consult the link to the Solr wiki for a more detailed description of the mm syntax.
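Example (this sets the same value as the solrconfig.xml default mentioned above):

plugin.tx_solr.search.query.minimumMatch = 2<-35%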
query.boostFunction¶
A boost function can be useful to influence the relevance calculation and boost some documents to appear more at the beginning of the result list. Technically the parameter will be mapped to the “bf” parameter in the solr query.
Use cases for example could be:
“Give newer documents a higher priority”:
This could be done with a recip function:
recip(ms(NOW,created),3.16e-11,1,1)
“Give documents with a certain field value a higher priority”:
This could be done with:
termfreq(type,'tx_solr_file')
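Example (using the recip function from above to give newer documents a higher priority):

plugin.tx_solr.search.query.boostFunction = recip(ms(NOW,created),3.16e-11,1,1)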
query.boostQuery¶
Sets the boost function bq query parameter.
Allows to further manipulate the score of a document by using Lucene syntax queries. A common use case for boost queries is to rank documents of a specific type higher than others.
Please consult the link to the Solr wiki for a more detailed description of boost functions.
Example (boosts tt_news documents by factor 10):
plugin.tx_solr.search.query.boostQuery = (type:tt_news)^10
query.tieParameter¶
This parameter ties the scores together. Setting it to "0" (default) uses the maximum score of all computed scores. A value of "1.0" adds all scores. The value is a number between "0.0" and "1.0".
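Example (the value is only illustrative; it adds a small fraction of the lower scores on top of the maximum score):

plugin.tx_solr.search.query.tieParameter = 0.1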
query.filter¶
Allows to predefine filters to apply to a search query. You can add multiple filters through a name to Lucene filter mapping. The filters support stdWrap.
Example:
plugin.tx_solr.search.query.filter {
  pagesOnly = type:pages
  johnsPages = author:John
  badKeywords = {foo}
  badKeywords.wrap = -keywords:|
  badKeywords.data = GP:q
}
Note: When you want to filter for something with whitespaces you might need to quote the filter term.
plugin.tx_solr.search.query.filter {
  johnsDoesPages = author:"John Doe"
}
query.filter.__pageSections¶
This is a magic/reserved filter (thus the double underscore). It limits the query and the results to certain branches/sections of the page tree. Multiple starting points can be provided as a comma-separated list of page IDs.
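Example (the page IDs are placeholders for the root pages of the sections the search should be limited to):

plugin.tx_solr.search.query.filter.__pageSections = 18,42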
query.sortBy¶
Allows to set a custom sorting for the query. By default Solr will sort by relevance, using this setting you can sort by any sortable field.
Needs a Solr field name followed by asc for ascending order or desc for descending.
Example:
plugin.tx_solr.search.query.sortBy = title asc
query.phrase¶
This parameter enables the phrase search feature from Apache Solr. Setting it to "0" (default) does not change the behaviour of Apache Solr if the user searches for two or more words. Enabling the phrase search feature influences the document set and/or the scores of documents.
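Example (enables the phrase search feature):

plugin.tx_solr.search.query.phrase = 1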
query.phrase.phrase.slop¶
This parameter defines the “phrase slop” value, which represents the number of positions one word needs to be moved in relation to another word in order to match a phrase specified in a query.
Note: The value of this setting has NO influence on explicit phrase search.
query.phrase.querySlop¶
This parameter defines the "phrase slop" value, which represents the number of positions one word needs to be moved in relation to another word in order to match a phrase specified in an explicit ("double quoted") phrase search query.
Note:
- The value of this setting has no influence on implicit phrase search.
- On explicit phrase search Solr searches in the fields defined in qf (plugin.tx_solr.search.query.queryFields).
query.bigramPhrase¶
This parameter enables the bigram phrase search feature from Apache Solr. Setting it to "0" (default) does not change the behaviour of Apache Solr if the user searches for three or more words. Enabling the bigram phrase search feature influences the scores of documents with phrase occurrences.
query.bigramPhrase.fields¶
This parameter defines what fields should be used to search in the given sentence (three+ words). Matched documents will be boosted according to the fields' boost values. Fields are defined as a comma-separated list, the same way as queryFields.
Note: The value of this setting has NO influence on explicit phrase search.
query.bigramPhrase.slop¶
This parameter defines the “bigram phrase slop” value, which represents the number of positions one word needs to be moved in relation to another word in order to match a phrase specified in a query.
Note: The value of this setting has NO influence on explicit phrase search.
query.trigramPhrase¶
This parameter enables the trigram phrase search feature from Apache Solr. Setting it to "0" (default) does not change the behaviour of Apache Solr if the user searches for two or more words. Enabling the trigram phrase search feature influences the scores of documents with phrase occurrences.
query.trigramPhrase.trigramPhrase.slop¶
This parameter defines the “trigram phrase slop” value, which represents the number of positions one word needs to be moved in relation to another word in order to match a phrase specified in a query.
Note: The value of this setting has NO influence on explicit phrase search.
results¶
results.resultsHighlighting¶
En-/disables search term highlighting on the results page.
Note: The FastVectorHighlighter is used by default (since version 4.0) if fragmentSize is set to at least 18 (this is required for the FastVectorHighlighter to work).
results.resultsHighlighting.highlightFields¶
A comma-separated list of fields to highlight.
Note: The highlighting in Solr (based on the FastVectorHighlighter) requires a field datatype with termVectors=on, termPositions=on and termOffsets=on, which is the case for the content field. If you add other fields here, make sure that you are using a datatype where this is configured.
results.resultsHighlighting.fragmentSize¶
The size, in characters, of fragments to consider for highlighting. “0” indicates that the whole field value should be used (no fragmenting).
results.resultsHighlighting.fragmentSeparator¶
When highlighting is activated Solr highlights the fields configured in highlightFields and can return multiple fragments of fragmentSize around the highlighted search word. These fragments are used as teasers in the results list. fragmentSeparator allows to configure the glue string between those fragments.
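Example (a sketch combining the highlighting options above; the fragment size and separator values are only illustrative):

plugin.tx_solr.search.results {
  resultsHighlighting = 1
  resultsHighlighting {
    highlightFields = content
    fragmentSize = 200
    fragmentSeparator = [...]
  }
}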
results.siteHighlighting¶
Activates TYPO3's highlighting of search words on the actual pages. The words a user searched for will be wrapped with a span and the CSS class csc-sword. Highlighting can be styled using the CSS class csc-sword; you need to add the style definition yourself for the complete site.
spellchecking¶
lastSearches¶
frequentSearches¶
frequentSearches¶
Set
plugin.tx_solr.search.frequentSearches = 1 to display a list of the frequent / common searches.
frequentSearches.useLowercaseKeywords¶
When enabled, keywords are written to the statistics table in lower case.
frequentSearches.minSize¶
The difference between frequentSearches.maxSize and frequentSearches.minSize is used for calculating the current step.
frequentSearches.maxSize¶
The difference between frequentSearches.maxSize and frequentSearches.minSize is used for calculating the current step.
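Example (the values are only illustrative; minSize and maxSize span the range used for the step calculation):

plugin.tx_solr.search.frequentSearches = 1
plugin.tx_solr.search.frequentSearches {
  minSize = 14
  maxSize = 32
}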
sorting¶
sorting.options¶
This is a list of sorting options. Each option has a field and label to be used. By default the options title, type, author, and created are configured, plus the virtual relevancy field which is used for sorting by default.
Example:
plugin.tx_solr.search { sorting { options { relevance { field = relevance label = Relevance } title { field = sortTitle label = Title } } } }
Note: As mentioned before relevance is a virtual field that is used to reset the sorting. Sorting by relevance means to have the order provided by the scoring from solr. That the reason why sorting descending on relevance is not possible.
faceting¶
faceting.minimumCount¶
This indicates the minimum counts for facet fields should be included in the response.
faceting.sortBy¶
Defines how facet options are sorted, by default they are sorted by count of results, highest on top. count, 1, true are aliases for each other.
Facet options can also be sorted alphabetically (lexicographic by indexed term) by setting the option to index. index, 0, false, alpha (from version 1.2 and 2.0), and lex (from version 1.2 and 2.0) are aliases for index.
faceting.limit¶
Number of options to display per facet. If more options are returned by Solr, they are hidden and can be expanded by clicking a “show more” link. This feature uses a small javascript function to collapse/expand the additional options.
faceting.keepAllFacetsOnSelection¶
When enabled selecting an option from a facet will not reduce the number of options available in other facets.
faceting.countAllFacetsForSelection¶
When
`keepAllFacetsOnSelection` is active the count of a facet do not get reduced. You can use
`countAllFacetsForSelection` to achieve that.
The following example shows how to keep all options of all facets by keeping the real document count, even when it has zero options:
`
plugin.tx_solr.search.faceting.keepAllFacetsOnSelection = 1
plugin.tx_solr.search.faceting.countAllFacetsForSelection = 1
plugin.tx_solr.search.faceting.minimumCount = 0
`
faceting.showAllLink.wrap¶
Defines the output of the “Show more” link, that is rendered if there are more facets given than set by faceting.limit.
faceting.showEmptyFacets¶
By setting this option to 1, you will allow rendering of empty facets. Usually, if a facet does not offer any options to filter a resultset of documents, the facet header will not be shown. Using this option allows the header still to be rendered when no filter options are provided.
faceting.facetLinkUrlParameters.useForFacetResetLinkUrl¶
Allows to prevent adding the URL parameters to the facets reset link by setting the option to 0.
faceting.facets¶
Defines which fields you want to use for faceting. It’s a list of facet configurations.
plugin.tx_solr.search.faceting.facets { type { field = type label = Content Type } category { field = category_stringM label = Category } }
faceting.facets.[facetName] - single facet configuration¶
You can add new facets by simply adding a new facet configuration in TypoScript. [facetName] represents the facet’s name and acts as a configuration “container” for a single facet. All configuration options for a facet are defined within that “container”.
A facet will use the values of a configured index field to offer these values as filter options to your site’s visitors. You need to make sure that the facet field’s type allows to sort the field’s value; like string, int, and other primitive types.
To configure a facet you only need to provide the label and field configuration options, all other configuration options are optional.
faceting.facets.[facetName].addFieldAsTag¶
When you want to add fields as
`additionalExcludeTags` for a facet a tag for this facet needs to exist. You can use this setting to force the creation of a tag for this facet in the solr query.
faceting.facets.[facetName].label¶
Used as a headline or title to describe the options of a facet. Used in flex forms of plugin for filter labels. Can be translated with LLL: and consumed and translated in Partial/Facets/* with f:translate ViewHelper.
faceting.facets.[facetName].excludeValues¶
Defines a comma separated list of options that are excluded (The value needs to match the value in solr)
Important: This setting only makes sence for option based facets (option, query, hierarchy)
faceting.facets.[facetName].facetLimit¶
Hard limit of options returned by solr.
Note: This is only available for options facets.
faceting.facets.[facetName].metrics¶
Metrics can be use to collect and enhance facet options with statistical data of the faceted documents. They can be used to render useful information in the context of an facet option.
Example:
plugin.tx_solr.search.faceting.facets { category { field = field label = Category metrics { downloads = sum(downloads_intS) } } }
The example above will make the metric “downloads” available for all category options. In this case it will be the sum of all downloads of this category item. In the frontend you can render this metric with “<facetoptions.>.metrics.downloads” and use it for example to show it instead of the normal option count.
faceting.facets.[facetName].partialName¶
By convention a facet is rendered by it’s default partial that is located in “Resources/Private/Partials/Facets/<Type>.html”.
If you want to render a single facet with another, none conventional partial, your can configure it with “partialName = MyFacetPartial”.
faceting.facets.[facetName].keepAllOptionsOnSelection¶
Normally, when clicking any option link of a facet this would result in only that one option being displayed afterwards. By setting this option to one, you can prevent this. All options will still be displayed.
This is useful if you want to allow the user to select more than one option from a single facet.
faceting.facets.[facetName].operator¶
When configuring a facet to allow selection of multiple options, you can use this option to decide whether multiple selected options should be combined using AND or OR.
faceting.facets.[facetName].sortBy¶
Sets how a single facet’s options are sorted, by default they are sorted by number of results, highest on top. Facet options can also be sorted alphabetically by setting the option to alpha.
Note: Since 9.0.0 it is possible to sort a facet by a function. This can be done be defining a metric and use that metric in the sortBy configuration. As sorting name you then need to use by convention “metrics_<metricName>”
Example:
pid { label = Content Type field = pid metrics { newest = max(created) } sortBy = metrics_newest desc }
faceting.facets.[facetName].manualSortOrder¶
By default facet options are sorted by the amount of results they will return when applied. This option allows to manually adjust the order of the facet’s options. The sorting is defined as a comma-separated list of options to re-order. Options listed will be moved to the top in the order defined, all other options will remain in their original order.
Example - We have a category facet like this:
News Category + Politics (256) + Sports (212) + Economy (185) + Culture (179) + Health (132) + Automobile (99) + Travel (51)
Using
faceting.facets.[facetName].manualSortOrder = Travel, Health will result in the following order of options:
News Category + Travel (51) + Health (132) + Politics (256) + Sports (212) + Economy (185) + Culture (179) + Automobile (99)
faceting.facets.[facetName].minimumCount¶
Set’s the minimumCount for a single facet. This can be usefull e.g. to set the minimumCount of a single facet to 0, to have the options available even when there is result available.
Note: This setting is only available for facets that are using the json faceting API of solr. By now this is only available for the options facets.
faceting.facets.[facetName].showEvenWhenEmpty¶
Allows you to display a facet even if it does not offer any options (is empty) and although you have set
plugin.tx_solr.search.faceting.showEmptyFacets = 0.
faceting.facets.[facetName].includeInUsedFacets¶
By setting this option to 0, you can prevent rendering of a given facet within the list of used facets.
faceting.facets.[facetName].type¶
Defines the type of the facet. By default all facets will render their facet options as a list. PHP Classes can be registered to add new types. Using this setting will allow you to use such a type and then have the facet’s options rendered and processed by the registered PHP class.
faceting.facets.[facetName].[type]¶
When setting a special type for a facet you can set further options for this type using this array.
Example (numericRange facet displayed as a slider):
plugin.tx_solr.search.faceting.facets.size { field = size_intS label = Size type = numericRange numericRange { start = 0 end = 100 gap = 1 } }
faceting.facets.[facetName].requirements.[requirementName]¶
Allows to define requirements for a facet to be rendered. These requirements are dependencies on values of other facets being selected by the user. You can define multiple requirements for each facet. If multiple requirements are defined, all must be met before the facet is rendered.
Each requirement has a name so you can easily recognize what the requirement is about. The requirement is then defined by the name of another facet and a list of comma-separated values. At least one of the defined values must be selected by the user to meet the requirement.
There are two magic values for the requirement’s values definition:
- __any: will mark the requirement as met if the user selects any of the required facet’s options
- __none: marks the requirement as met if none of the required facet’s options is selected. As soon as any of the required facet’s options is selected the requirement will not be met and thus the facet will not be rendered
Example of a category facet showing only when the user selects the news type facet option:
plugin.tx_solr { search { faceting { facets { type { label = Content Type field = type } category { label = Category field = category_stringS requirements { typeIsNews { # typeIsNews is the name of the requirement, c # choose any so you can easily recognize what it does facet = type # The name of the facet as defined above values = news # The value of the type facet option as # it is stored in the Solr index } } } } } } }
faceting.facets.[facetName].renderingInstruction¶
Overwrites how single facet options are rendered using TypoScript cObjects.
Example: (taken from issue #5920)
plugin.tx_solr { search { faceting { facets { type { renderingInstruction = CASE renderingInstruction { key.field = optionValue pages = TEXT pages.value = Pages pages.lang.de = Seiten tx_solr_file = TEXT tx_solr_file.value = Files tx_solr_file.lang.de = Dateien tt_news = TEXT tt_news.value = News tt_news.lang.de = Nachrichten } } language { renderingInstruction = CASE renderingInstruction { key.field = optionValue 0 = TEXT 0.value = English 0.lang.de = Englisch 1 = TEXT 1.value = German 1.lang.de = Deutsch } } } } } }
EXT:solr provides the following renderingInstructions that you can use in your project:
FormatDate:
This rendering instruction can be used in combination with a date field or an integer field that hold a timestamp. You can use this rendering instruction to format the facet value on rendering. A common usecase for this is, when the datatype in solr needs to be sortable (date or int) but you need to render the date as readable date option in the frontend:
plugin.tx_solr.search.faceting.facets { created { field = created label = Created sortBy = alpha reverseOrder = 1 renderingInstruction = TEXT renderingInstruction { field = optionValue postUserFunc = ApacheSolrForTypo3\Solr\Domain\Search\ResultSet\Facets\RenderingInstructions\FormatDate->format } } }
elevation¶
variants¶
By using variants you can shrink down multiple documents with the same value in one field into one document and make similar documents available in the variants property. By default the field variantId is used as Solr collapsing criteria. This can be used e.g. as one approach of deduplication to group similar documents into on “root” SearchResult.
To use the different variants of the documents you can access “document.variants” to access the expanded documents.
This can be used for example for de-duplication to list variants of the same document below a certain document.
Note: Internally this is implemented with Solr field collapsing
Set plugin.tx_solr.search.variants = 1 to enable the variants in search results.
variants.variantField¶
Used to expand the document variants to the document.variants property.
Note:: The field must be a numeric field or a string field! Not a text field! | https://docs.typo3.org/p/apache-solr-for-typo3/solr/master/en-us/Configuration/Reference/TxSolrSearch.html | 2020-10-23T22:31:56 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.typo3.org |
You can create log processing rules to mask logs. Masking lets you hide fields completely or partially in log messages, for example, fields such as password. Mask Logs tab, click New Configuration.
-. | https://docs.vmware.com/en/VMware-vRealize-Log-Insight-Cloud/services/User-Guide/GUID-26026E4A-4A18-4B85-952D-463419317510.html | 2020-10-23T22:39:05 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
TOPICS×
Apply creative selection rules
TVSDK applies creative selection rules in the following ways:
- TVSDK applies all default rules first, followed by the zone-specific rules.
- TVSDK ignores any rules that are not defined for the current zone ID.
- Once TVSDK applies the default rules, the zone-specific rules can further change the creative priorities based on the host (domain) matches on the creative selected by the default rules.
- In the included sample rules file with additional zone rules, once TVSDK applies the default rules, if the M3U8 creative domain does not contain my.domain.com or a.bcd.com and the ad zone is 1234 , the creatives are re-ordered and the Flash VPAID creative is played first if available. Otherwise an MP4 ad is played, and so on down to JavaScript.
- If an ad creative is selected that TVSDK cannot play natively ( .mp4, .flv, etc.), TVSDK issues a repackaging request.
Note that the ad types that can be handled by TVSDK are still defined through the validMimeTypes setting in AuditudeSettings . | https://docs.adobe.com/content/help/en/primetime/programming/tvsdk-3x-android-prog/advertising/update-ad/android-3x-how-tvsdk-applies-csr.html | 2020-10-23T23:01:39 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.adobe.com |
fiProcessArrays
fiProcessArrays concatenates array values into strings so that they can be used in an email report.
Some (later?) versions of FormIt handle array values already, but fiProcessArrays can be useful if you need extra customization or if have any trouble with multiple checkboxes not showing up in fiGenerateReport.
To use, call the fiGenerateReport hook before the hooks that would use the converted values (such as "fiGenerateReport" and/or "email").
[[!FormIt? &hooks=`math,spam,fiProcessArrays,fiGenerateReport,email,redirect` ... &figrExcludedFields=`op1,op2,operator,math` ]]
Available Properties
The following parameters can be passed to the FormIt call: | https://docs.modx.com/3.x/en/extras/formitfastpack/fiprocessarrays | 2020-10-23T21:41:05 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.modx.com |
With global namespaces in Tanzu Service Mesh, you can easily connect and secure the services in your application across clusters. You can learn how to add the services in your application to a global namespace to have them automatically discovered and connected across the clusters.
Where appropriate, an example of a sample e-commerce application is used to show you how to connect services across clusters by adding them to a global namespace. The sample application is made up of 12 services and is configured to have most of the services deployed on one cluster and the catalog service deployed on the other cluster.
Prerequisites
Verify the following prerequisites:
You have onboarded the clusters where your services are deployed. For a global namespace, you must onboard at least two clusters. For more information about onboarding a cluster, see Onboard a Cluster to Tanzu Service Mesh.
You know the Kubernetes namespaces in your clusters that hold the services of your application.
You are in the Tanzu Service Mesh Console. For information about accessing the Tanzu Service Mesh Console, see Access the Tanzu Service Mesh Console.
Procedure
- In the Tanzu Service Mesh Console, create a global namespace for your application services:
- In the navigation panel on the left, click Add New and then click New Global Namespace.
- On the General Details page of the New Global Namepace wizard, enter a unique name and a domain name for the global namespace.
The name of a global namespace and its domain name together forms a fully qualified domain name (FQDN) that uniquely identifies that global namespace and makes it possible for the services in the global namespace to communicate with each other across clusters.
In the example, you must enter a name of sample-gns and a domain name of sample.app.com.
- On the Service Mapping page, to add the services in your application to the global namespace, specify their Kubernetes namespace-cluster pairs. Under Map services in Kubernetes Namespace(s), in the left drop-down menu, select the namespace on one of your clusters that holds some of the services and in the right drop-down menu, select the name of the cluster. Click Add Service Mapping to select namespace-cluster pairs for the other services deployed on the other clusters.
The namespace-cluster pairs you specify here define service mapping rules that are used to select services for a global namespace. Click Service Preview under each service-mapping rule to see the names of the selected services from each cluster.
The sample application has services running on two clusters. For most of the services running in one cluster, select the default namespace in the left drop-down menu and the prod-cluster1 cluster in the right drop-down menu. Then click Add Service Mapping and select the default namespace and the prod-cluster2 cluster for the catalog service on the other cluster.
- On the next pages of the wizard, select service and security options.
- Review the configuration of the global namespace and click Finish.
- To enable the cross-cluster communication between the services, edit the Kubernetes deployment manifest for the appropriate service on one cluster to specify the domain name of the global namespace, prefixing the domain name with the name of the service on the other cluster.Important:
Make sure that you prefix the domain name with the name of the service that you want the service being edited to communicate with. See the following example.
In the sample application, the shopping service on one cluster must communicate with the catalog service on the other cluster. To edit the deployment manifest of the shopping service, run the following kubectl command.
kubectl --context=prod-cluster1.local edit deployment shopping
In the deployment manifest, set the appropriate variable to catalog.sample.app.com. The
catalogprefix is required for the shopping service to communicate with the catalog service.Important:
If you are using your custom application instead of the sample application, make sure to add the
nameof the port in the latest version protocol resolver in the Kubernetes deployment manifest for your application. This is needed for the services running on one cluster to communicate with the services running on the other clusters.
For HTTP, prefix
namewith "http-", for example,
http-server. The following example of a deployment manifest shows
namefor HTTP under
ports.
apiVersion: apps/v1 kind: Deployment metadata: name: order-app-deployment spec: selector: matchLabels: app: order-service-app replicas: 1 # tells deployment to run N pods matching the template template: # create pods using pod definition in this template metadata: labels: app: order-service-app spec: containers: - env: - name: CATALOG_HOST value: catalog-service.onlinestore - name: CATALOG_PORT value: "8010" - name: CUSTOMER_HOST value: customer-management-service.onlinestore - name: CUSTOMER_PORT value: "8011" image: my-images/order-service imagePullPolicy: Always name: order-service-app ports: - containerPort: 8012 name: http-server protocol: TCP
- Verify the cross-cluster communication between the services in Tanzu Service Mesh.
- On the navigation panel on the left, click Inventory and then Global Namespaces.
- On the Global Namespaces page, click the name of the global namespace that you created (sample-gns in the example).
- Click the GNS Topology tab.
The service topology graph shows the connections between the services in the different clusters. The line between the services indicates that traffic flows between them. The number of requests per seconds (RPS) or other specified service metrics are shown.
What to do next
For information about how to specify metrics to show in the service topology graph and other details about using the topology graph, see View the Summary Infrastructure and Service Information. | https://docs.vmware.com/en/VMware-Tanzu-Service-Mesh/services/getting-started-guide/GUID-8D483355-6F58-4AAD-9EAF-3B8E0A87B474.html | 2020-10-23T22:19:38 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
The VMware Security Development Lifecycle (SDL) program identifies and mitigates security risk during the development phase of VMware software products. VMware also operates the VMware Security Response Center (VSRC) to conduct the analysis and remediation of software security issues in VMware products.
The SDL is the software development methodology that the VMware Security Engineering, Communication, and Response (vSECR) group, and VMware product development groups, use to help identify and mitigate security issues. For more information about the VMware Security Development Lifecycle, see the webpage at.
The VSCR works with customers and the security research community to achieve the goals of addressing security issues and providing customers with actionable security information in a timely manner. For more information about the VMware Security Response Center, see the webpage at. | https://docs.vmware.com/en/VMware-vSphere/7.0/com.vmware.vsphere.security.doc/GUID-B525D5B5-6666-40EA-9AB9-3255D14C140E.html | 2020-10-23T22:43:14 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.vmware.com |
Wrong coordinates?
Duplicate spot?
Inappropriate names?
No problem!
If you want a spot to be renamed, deleted, moved, merged with another spot, or if there is any other issue related to our spot database, please send us a clear and detailed description with what needs to be done to [email protected] Allow us a few days to respond.
Updated 6 months ago | https://docs.woosports.com/docs/edit-a-spot | 2020-10-23T21:37:06 | CC-MAIN-2020-45 | 1603107865665.7 | [] | docs.woosports.com |
Sandbox
Internet Explorer 10 and Windows apps using JavaScript introduce support for the sandbox attribute. The sandbox attribute enables security restrictions for iframe elements that contain untrusted content. These restrictions enhance security by preventing untrusted content from performing actions that can lead to potentially malicious behavior.
The sandbox attribute is specified in Section 4.8.2 of the World Wide Web Consortium (W3C)’s HTML5 specification.
Enabling sandbox
To enable these restrictions, specify the sandbox attribute, as shown in the following code example.
<iframe sandbox</iframe>
When the sandbox attribute is specified for an iframe element, the content in the iframe element is said to be sandboxed.
Behavior restricted by sandbox
When iframe elements are sandboxed, the following actions are restricted:
-.
Customizing sandbox restrictions
Internet Explorer 10 and Windows apps using JavaScript enable you to customize selected sandbox restrictions. To do so, specify one or more of the following customization flags as the value of the sandbox attribute.
The following example shows a sandboxed iframe element that uses customization flags to customize the restrictions for the content in the element.
<iframe sandbox="allow-forms allow-same-origin" src="frame1.html"></iframe>
This example permits form submission and access to local data sources. Be aware that multiple customization flags are separated by spaces.
For a hands-on demonstration of HTML5 Sandbox in action, see Defense in Depth: HTML5 Sandbox on the IE Test Drive.
API Reference
Internet Explorer Test Drive demos
Defense in Depth: HTML5 Sandbox
IEBlog posts
Defense in Depth: Locking Down Mash-Ups with HTML5 Sandbox
Specification
HTML5: Sections 4.8.2, 5.4
Related topics
How to Safeguard your Site with HTML5 Sandbox | https://docs.microsoft.com/en-us/previous-versions/windows/internet-explorer/ie-developer/dev-guides/hh673561(v=vs.85) | 2018-02-18T00:29:05 | CC-MAIN-2018-09 | 1518891808539.63 | [] | docs.microsoft.com |
>>IMAGE.
What "acknowledge" means runbook URL in the condition thresholds for the alert policy. New Relic Alerts will include the runbook URL in Incidents.
How to acknowledge incidents.
To acknowledge an incident from the user interface:
- Go to alerts.newrelic.com > Incidents > Open incidents or All incidents.
- From the selected incident row, verify whether someone else's name or gravatar already appears in the Acknowledge column.
- Select the Acknowledge icon.
This automatically notifies everyone who has been set up to receive the alert notification. New Relic Alerts also automatically adds your name (or gravatar if applicable) to the incident's Acknowledge column.
To view the name associated with a gravatar in the Acknowledge column, mouse over the gravatar.
What happens to additional violations
If you acknowledge an incident and new violations occur, the selected channel(s) will trigger an alert notification only about the new violations. Your incident rollup preference determines how New Relic Alerts will group the additional violations in the Incidents page.
Where to view your acknowledged incidents
To view any incidents you have acknowledged:
- Go to alerts.newrelic.com > Incidents > Open incidents.
- Optional: Use any available search or sort options to refine the index results (not available for the Acknowledge column).
- Select any row that shows your name or gravatar to view the incident's history and details.
What other options to acknowledge are available
New Relic Alerts includes additional methods (channels) to acknowledge alert notifications. For example:
- If your alert notification channel includes email, you can acknowledge the alert at the time you receive it by selecting the email's Acknowledge link.
- If you have registered and linked your iOS or Android mobile app to your New Relic account, you can add mobile as a notification channel for your alert policies. For example, Android users can then acknowledge an alert by using the notification bar. You do not need to open the New Relic app.
The ability for New Relic Alerts to receive acknowledgements back (ack back) from other notification channels is a planned enhancement for a future release. | https://docs.newrelic.com/docs/alerts/new-relic-alerts/reviewing-alert-incidents/acknowledge-alert-incidents | 2018-02-17T23:12:06 | CC-MAIN-2018-09 | 1518891808539.63 | [array(['https://docs.newrelic.com/sites/default/files/thumbnails/image/040815incidents-ack.png',
'Alerts v3: Alert incidents acknowledge Alerts v3: Alert incidents acknowledge'],
dtype=object)
array(['https://docs.newrelic.com/sites/default/files/styles/inline_660px/public/thumbnails/image/040815incidents-no-ack.png?itok=2t1Hy6mQ',
'Alerts v3: Incidents acknowledge Alerts v3: Incidents acknowledge'],
dtype=object) ] | docs.newrelic.com |
Understanding what APIs and plug-ins are installed
BMC Remedy AR System includes plug-ins and corresponding application programming interfaces (APIs) that extend the BMC Remedy AR System functionality to external data sources. The plug-in service, a companion server to the BMC Remedy AR System server, loads the plug-ins and accesses them upon request from the AR System server.
When you install BMC Remedy AR System, you can choose what you want to install:
- The provided Lightweight Directory Access Protocol (LDAP) plug-ins (AREA LDAP and ARDBC LDAP)
- The features to create your own AREA and ARDBC plug-ins
- The API package
The BMC Remedy AR System API suite is composed of a C API, a Java API, a plug-in API, and plug-ins that use APIs:
- BMC Remedy AR System External Authentication (AREA) — Accesses network directory services, such as LDAP. You can create and configure your own custom authentication plug-in, or you can use the provided plug-in. The AREA plug-in also enables BMC Remedy AR System users to consolidate user authentication information for external applications or data sources.
- BMC Remedy AR System Database Connectivity (ARDBC) — Accesses external sources of data. The ARDBC plug-in, which you access through vendor forms, enables you to perform the following tasks on external data:
- Create, delete, modify, and get external data
- Retrieve lists for external data
- Populate search-style character menus
For example, if you need to access data in an external directory service, you can use the ARDBC LDAP plug-in.
Install the API if you will install the mid tier, or if you require functionality that is not included in the BMC Remedy AR System client tools.
Related topics
Developing an API program
Using the ARDBC LDAP plug-in | https://docs.bmc.com/docs/ars8000/understanding-what-apis-and-plug-ins-are-installed-149489112.html | 2019-04-18T13:25:24 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.bmc.com |
Initializes a new instance of the XRSummary class with the specified range and function.
Namespace: DevExpress.XtraReports.UI
Assembly: DevExpress.XtraReports.v18.2.dll
public XRSummary(
SummaryRunning running,
SummaryFunc func
)
Public Sub New(
running As SummaryRunning,
func As SummaryFunc
)
A SummaryRunning enumeration value, specifying the range for which the summary function should be calculated. This value is assigned to the XRSummary.Running property.
A SummaryFunc enumeration value, specifying the summary function to be calculated. This value is assigned to the XRSummary.Func property. | https://docs.devexpress.com/XtraReports/DevExpress.XtraReports.UI.XRSummary.-ctor(DevExpress.XtraReports.UI.SummaryRunning-DevExpress.XtraReports.UI.SummaryFunc) | 2019-04-18T13:04:55 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.devexpress.com |
Autosave indicator
Note
These release notes describe functionality that may not have been released yet. To see when this functionality is planned to release, please review What's new and planned for Dynamics 365 Business Central. Delivery timelines and projected functionality may change or may not ship (see Microsoft policy).
Current Business Central customers as well as customers of Dynamics NAV are very familiar with the concept of autosave common in our products. This is a very much loved and welcomed feature, but we heard from many of our customers moving from other ERP systems that they are not aware data is saved and secured in Business Central—even without explicitly using any save function. It is for those customers that we have built a smart autosave indicator showing when the data is being saved for them.
Business value
This new element indicates directly the state of card or document data being saved in the background and provides any user with a clear indication that the entered information is secure.
Autosave indicator appearance
The indicator is shown on the right side of the card on screen and changes values when the computer communicates with the server and saves the data. The indicator can display Saving or Saved depending on current state. In case a data validation error appears, it would also display Not saved. An example of the indicator in action can be seen below:
Tell us what you think
Help us improve Dynamics 365 Business Central by discussing ideas, providing suggestions, and giving feedback. Use the Business Central forum at. | https://docs.microsoft.com/en-us/business-applications-release-notes/April19/dynamics365-business-central/autosave | 2019-04-18T12:45:14 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['media/autosave.png',
'Autosave Indicator The new Autosave indicator in Business Central'],
dtype=object) ] | docs.microsoft.com |
Get-Federation
Information
Use the Get-FederationInformation cmdlet to get federation information, including federated domain names and target URLs, from an external Exchange organization.
For information about the parameter sets in the Syntax section below, see Exchange cmdlet syntax ().
Syntax
Get-FederationInformation -DomainName <SmtpDomain> [-BypassAdditionalDomainValidation] [-Force] [-TrustedHostnames <MultiValuedProperty>] [<CommonParameters>]
Description-FederationInformation -DomainName contoso.com
This example gets federation information from the domain contoso.com.
Parameters
The BypassAdditionalDomainValidation switch specifies that the command skip validation of domains from the external Exchange organization. You don't need to specify a value with this switch.
We recommend that you only use this switch to retrieve federation information in a hybrid deployment between on-premises and Exchange Online organizations. Don't use this switch to retrieve federation information for on-premises Exchange organizations in a cross-organization arrangement.
The DomainName parameter specifies the domain name for which federation information is to be retrieved.
The Force switch specifies whether to suppress warning or confirmation messages. You can use this switch to run tasks programmatically where prompting for administrative input is inappropriate. You don't need to specify a value with this switch.
A confirmation prompt warns you if the host name in the Autodiscover endpoint of the domain doesn't match the Secure Sockets Layer (SSL) certificate presented by the endpoint and the host name isn't specified in the TrustedHostnames parameter.
The TrustedHostnames parameter specifies the fully qualified domain name (FQDN) of federation endpoints. Federation endpoints are the client access (frontend) services on Mailbox servers in an organization with federation enabled. Explicitly specifying the TrustedHostnames parameter allows the cmdlet to bypass prompting if the certificate presented by the endpoint doesn't match the domain name specified in the DomainName: | https://docs.microsoft.com/en-us/powershell/module/exchange/federation-and-hybrid/get-federationinformation?view=exchange-ps | 2019-04-18T12:39:27 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.microsoft.com |
ExclusionWindow - createWindowWithWeeklySchedule
ExclusionWindow - createWindowWithWeeklySchedule
Description :
This command creates a Window with a weekly schedule.
The dateString argument defines a weekly schedule to be added to the window definition. It must be in the format YYYY-MM-DD HH:MM:SS.
Use the daysOfWeek argument to specify a sum representing the days when the job should execute. Each day of the week has a value, as shown below:
- SUNDAY=1
- MONDAY=2
- TUESDAY=4
- WEDNESDAY=8
- THURSDAY=16
- FRIDAY=32
- SATURDAY=64
Add values representing the days when you want the window to be active. For example, if you want the window to be active on Monday, Wednesday, and Friday, then daysOfWeek=42.
Use the frequency argument to specify an interval in weeks for the window to be active (for example, 2 means the job runs every other week).
Command Input :
Example
The following example shows how to create a new Window with a weekly schedule. In the example, the window is active every three weeks at 11:35 PM on Monday, Wednesday, and Friday.
Script
DATE_STRING="2016-01-01 23:35:00" EXCLUSION_WINDOW_NAME="WEEKLY_WINDOW" EXCLUSION_WINDOW_DESC="Weekly exclusion window" DAYS_OF_WEEK=42 FREQUENCY=3 WINDOW_KEY=`blcli ExclusionWindow createWindowWithWeeklySchedule "$EXCLUSION_WINDOW_NAME" "$EXCLUSION_WINDOW_DESC" "$DATE_STRING" $DAYS_OF_WEEK $FREQUENCY 0 5 35` | https://docs.bmc.com/docs/blcli/89/exclusionwindow-createwindowwithweeklyschedule-658500545.html | 2019-04-18T13:24:47 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.bmc.com |
A list of roles associated with the current user.
Namespace: DevExpress.Persistent.Base
Assembly: DevExpress.Persistent.Base.v18.2.dll
IEnumerable<IPermissionPolicyRole> Roles { get; }
ReadOnly Property Roles As IEnumerable(Of IPermissionPolicyRole)
In eXpressApp Framework applications, permissions are not assigned to a user directly. Users have roles, which in turn are characterized by a permission set. So, each user has one or more roles that determine what actions can be performed. | https://docs.devexpress.com/eXpressAppFramework/DevExpress.Persistent.Base.IPermissionPolicyUser.Roles | 2019-04-18T12:43:46 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.devexpress.com |
With cloud user provisioning you can manage Atlassian accounts and security policies in one place. You will save time, increase security accounts based on the user information sent from the identity provider. All though this approach makes sure that all users easily can log in, there are some disadvantages:
- Manual cleaning: Inactive and old users must still be deleted manually.
- Less control over user access and license costs: If users are created dynamically at login, you have less control of the set of users using the Atlassian products.
With cloud user provisioning, an auto synchronized and virtual user directory is setup. This takes responsibility of keeping the Atlassian products updated with user accounts, groups and group memberships..
Atlassian Crowd APIs is not used to make this work, so you do not need to have a license for the Atlassian Crowd products. synchronized, you can preview the users, groups and group memberships:
You are always welcome to reach out to our support team if you have any questions or would like a demo. | https://docs.kantega.no/pages/viewpage.action?pageId=50495601 | 2019-04-18T13:00:52 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.kantega.no |
.
Import or Edit API definition
To import an existing swagger definition from a file or a URL, click Import. Click Edit Source to manually edit the API swagger definition.
After the resource is added, expand its
GETmethod and update the Produces to
application/xmland Consumes to
application/json.
We define MIME types in the resource definition. in later tutorials.
You can define the following parameter types for resource parameters you are adding.
And you can use following as Data type categories supported by swagger.
primitive(input/output)
- containers (as arrays/sets) (input/output)
- complex (as
models) (input/output)
void(output)
File(input)
HTTP POST
By design HTTP POST method specifies that the web server accepts data enclosed within the body of the request, hence when adding a POST method API manager adds the payload parameter to the POST method by default.
Import or edit API definition
The “Import" button allows to define resources of the API using an existing swagger definition. Once the swagger is imported as file or from URL, the resources will be automatically added. And "Edit Source" button allows to. Give the information in the table below. refer Deploy and Test as a Prototype.
Click Next: Manage > and give.
See Working with Throttling for more information about maximum backend throughput and advanced throttling policies.
Click Save & Publish. This publishes the API that you just created to the API Store so that subscribers can use it.
You can save partially complete APIs or save completed APIs without publishing them. Select the API and click on the Lifecycle tab to manage the API Lifecycle. For more details about the states of the API see API lifecycle.
You have created an API. | https://docs.wso2.com/display/AM200/Create+and+Publish+an+API | 2019-04-18T12:35:09 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.wso2.com |
Although it is possible to remove an object from the UI, you should note that the object is actually retained in the datastore. Details can still be displayed for audit purposes, and you can choose to include destroyed objects in searches.
The following topics are covered in this section:
When would I destroy data?
In general, you should not need to destroy data in production. When testing, you might need to destroy data; for example, when developing patterns.
To destroy an object
- Display the View Object page of the relevant object by clicking a highlighted object on any List page or any Report page.
All of the object's current attributes and relationships are listed.
- From the Actions drop-down menu, select. | https://docs.bmc.com/docs/display/DISCO113/Destroying+data | 2019-04-18T13:26:33 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.bmc.com |
A network zone defines the security level of trust for the the network. The user should choose an appropriate zone value for their setup. Possible values include: drop, block, public, external, dmz, work, home, internal, trusted.
Windows Firewall is the default component of Microsoft Windows that provides firewalling and packet filtering. There are many 3rd party firewalls available for Windows, some of which use rules from the Windows Firewall. If you are experiencing problems see the vendor's specific documentation for opening the required ports.
The Windows Firewall can be configured using the Windows Interface or from the command line.
Windows Firewall (interface):
wf.mscat the command prompt or in a run dialog (Windows Key + R)
Windows Firewall (command line):
The Windows Firewall rule can be created by issuing a single command. Run the following command from the command line or a run prompt:
netsh advfirewall firewall add rule name="Salt" dir=in action=allow protocol=TCP localport=4505-4506 below line
to allow traffic on
tcp/4505 and
tcp/4506:
-A INPUT -m state --state new -m tcp -p tcp --dport 4505:4506 -j ACCEPT
Ubuntu
Salt installs firewall rules in /etc/ufw/applications.d/salt.ufw. Enable with:
ufw allow salt
The BSD-family of operating systems uses packet filter (pf). The following
example describes the addition to
pf.conf needed to access the Salt
master.
pass in on $int_if proto tcp from any to $int_if port 4505:4506
Once this addition --dports 4505:4506 -j ACCEPT -I INPUT -s 10.1.3.0/24 -p tcp --dports 4505:4506 -j ACCEPT # Allow Salt to communicate with Master on the loopback interface -A INPUT -i lo -p tcp --dports 4505:4506 -j ACCEPT # Reject everything else -A INPUT -p tcp --dports 4505:4506 -j REJECT
Note
The important thing to note here is that the
salt command
needs to communicate with the listening network socket of
salt-master on the loopback interface. Without this you will
see no outgoing Salt traffic from the master, even for a simple
salt '*' test.version, because the
salt client never reached
the
salt-master to tell it to carry out the execution. | https://docs.saltstack.com/en/latest/topics/tutorials/firewall.html | 2019-04-18T12:51:50 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.saltstack.com |
How to Use the Sign-up Shortcode
Below are some examples of the sign-up shortcode . One thing to note is lists need to be public for someone to subscribe to them. To check that a list is public you can hit the edit link below the list name on the subscribers page.
Full ShortCode with all options:
[sp-signup firstname_label='First Name' lastname_label='Last Name' email_label='EMail' list_label='List Selection' listids='' redirect_page='' lists_checked='1' display_firstname='' display_lastname='' label_display='' desc='' label_width='100' thank_you='Thank you for subscribing!' button_text='Submit' no_list_error='-- NO LIST HAS BEEN SET! --' postnotification='' pnlistid='0' ]<br>
Basic setup:
[sendpress-signup listids='1']<br>
Advanced Setup:
[sp-signup-signup firstname_label='First Name' lastname_label='Last Name' email_label='E-Mail' listids='121,122,123,124' lists_checked='1' display_firstname='true' display_lastname='true' label_display='true' desc='' thank_you='Thank you for subscribing CREC newsletter!' button_text='Submit']<br>
To use multiple lists in the short code, separate each list id with a comma and the shortcode will display each list with a checkbox next to it so the user can choose to subscribe to as many lists as they want to subscribe to. Here's an example of your shortcode with more lists added for comparison:
| https://docs.sendpress.com/article/30-how-to-use-the-sign-up-shortcode | 2019-04-18T13:20:58 | CC-MAIN-2019-18 | 1555578517639.17 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54fced86e4b034c37ea94588/images/55060180e4b034c37ea95ff8/file-N17m0TrH0y.png',
None], dtype=object) ] | docs.sendpress.com |
-- mode: markdown; mode: visual-line; fill-column: 80 --
UL HPC Tutorial: Create and reproduce work environments using Vagrant
/!\ IMPORTANT Up-to-date instructions for Vagrant can be found in the "Reproducible Research at the Cloud Era" Tutorial. Below instructions are probably outdated but kept for archive purposes.
Vagrant is a tool that allows to easily and rapidly create and configure reproducible and portable work environments using Virtual Machines. This is especially useful if you want to test your work in a stable and controlled environment and minimize the various unintended or untrackable changes that may occur on a physical machine.
In this tutorial, we are going to explain the steps to install Vagrant and create your first basic Linux Virtual Machine with it.
Vagrant installation
Prerequisite:
Vagrant can use many Virtual Machine providers such as VirtualBox, VMware and Docker with VirtualBox being the easiest to use, and the default option in Vagrant.
Our first step is to install VirtualBox, you can download and install the correct version for your operating system from the official website. In many Linux distributions it is provided as a package from the standard repositories, thus you can use your usual package manager to install it.
Once this prerequisite is met, we can install Vagrant. Download the correct version for your operating system on the official website and install it.
Using Vagrant to create a Virtual Machine
The main advantage of Vagrant is that it lets you import and use pre-configured Virtual Machines (called
boxes in this context) which can become bases for your own customizations (installed applications, libraries, etc). With Vagrant it becomes really fast and effortless to create and run a new Virtual Machine.
The Vagrant boxes contain the disk image of a VM without the virtual hardware details of the VM, which are initialized by Vagrant and can be edited by the user.
The first step is to choose a pre-configured box to use. It is possible to create your own from scratch yet this is not in the scope of the current tutorial. Freely available boxes can be found at the following two main sources:
The first catalog is the default box download location for Vagrant. This means that you can directly use the name of the boxes you find here with Vagrant (e.g.
ubuntu/trusty64).
To use the second catalog you would additionaly need to provide the source box URL, yet this catalog provides a much richer variety of boxes.
Adding a new box
To add a box and make it usable in Vagrant, we are going to use the
vagrant box add command. In the example below we will add one box from each of the catalogs in order to present the different possibilities.
We are going to add the
ubuntu/trusty64 box from the Atlas catalog and the
Ubuntu 14.04 box (by its url) from the vagrantbox.es catalog.
To add the first box, we use the following command (which may take some time due to the time needed to download the box):
$> vagrant box add ubuntu/trusty64 ==> box: Loading metadata for box 'ubuntu/trusty64' box: URL: ==> box: Adding box 'ubuntu/trusty64' (v14.04) for provider: virtualbox box: Downloading: ==> box: Successfully added box 'ubuntu/trusty64' (v14.04) for 'virtualbox'!
In this case, you just had to give the name of the box and Vagrant found the box by itself and added the box under the
ubuntu/trusty64 name.
To list the local boxes available to Vagrant for initialization of new VMs, we use the
vagrant box list command:
$> vagrant box list ubuntu/trusty64 (virtualbox, 14.04)
To add the second box, you need to use a slightly different syntax since you need to precise the name you want to give to the box as well as its source URL:
$> vagrant box add ubuntu14.04 ==> box: Adding box 'ubuntu14.04' (v0) for provider: box: Downloading: ==> box: Successfully added box 'ubuntu14.04' (v0) for 'virtualbox'!
Now a second box will be available to Vagrant under the name
ubuntu14.04:
$> vagrant box list ubuntu/trusty64 (virtualbox, 14.04) ubuntu14.04 (virtualbox, 0)
In the rest of the tutorial we are only going to use the first box. To remove a box we use the
vagrant box remove command as follows:
$> vagrant box remove ubuntu14.04 Removing box 'ubuntu14.04' (v0) with provider 'virtualbox'...
Checking that it has been removed:
$> vagran box list ubuntu/trusty64 (virtualbox, 14.04)
Creating a new Virtual Machine
Now we are going to create a new Virtual Machine using the
ubuntu/trusty64 box.
We will initialize it in an empty directory (which is not absolutely mandatory):
$> mkdir vagrant && cd vagrant
Next, we make Vagrant prepare the configuration file describing the VM:
$>.
You should now see a file named
Vagrantfile in your directory. This file contains the minimal information for Vagrant to launch the VM. We could modify it to set up specific parameters of the VM (number of virtual cores, memory size, etc), but this constitutes advanced usage for which full documentation that can be found on the official site. However, it may be interesting to understand what is actually needed in this file, since it contains a lot of commented information.
The minimal content of a
Vagrantfile is as follows:
VAGRANTFILE_API_VERSION = "2" Vagrant.configure("VAGRANTFILE_API_VERSION") do |config| config.vm.box = "hashicorp/trusty64" end
This basically defines which version of the Vagrant API will be used to build the VM using the box given as a base.
Now, to launch the VM you only need to use the single
vagrant up command in the same directory where the
Vagrantfile exists (this may take some time since Vagrant is going to boot the VM and set its basic configuration):
$> vagrant up Bringing machine 'default' up with 'virtualbox' provider... ==> default: Importing base box 'ubuntu/trusty64'... ==> default: Matching MAC address for NAT networking... ==> default: Checking if box 'ubuntu/trusty64' is up to date... ==> default: Setting the name of the VM: vagrant_default_1425476252413_67101 ==> default: Clearing any previously set forwarded ports... ==>: Machine booted and ready! ==> default: Checking for guest additions in VM... ==> default: Mounting shared folders... default: /vagrant => /tmp/vagrant
Your VM is now up and running at this point. To access it, use the
vagrant ssh command within the same directory :
$> vagrant ssh
You should now be connected to your VM and ready to work.
An interesting feature of Vagrant is that your computer (the "host") shares the directory that contains the
Vagrantfile with your VM (the "guest"), where it is seen as
/vagrant.
Assuming you have a script or data files you want to access from within the VM, you simply put them in the same directory as the
Vagrantfile and then use them in the VM under
/vagrant. The reverse is also true.
To learn more than the basics covered in this tutorial, we encourage you to refer to the official documentation. | https://ulhpc-tutorials.readthedocs.io/en/latest/virtualization/vagrant/ | 2019-04-18T12:21:23 | CC-MAIN-2019-18 | 1555578517639.17 | [] | ulhpc-tutorials.readthedocs.io |
ANNarchy supports the dynamic addition/suppression of synapses during the simulation (i.e. after compilation).
Warning
Structural plasticity is not available with the CUDA backend and will likely never be…
Because structural plasticity adds some complexity to the generated code, it has to be enabled before compilation by setting the
structural_plasticity flag to
True in the call to
setup():
setup(structural_plasticity=True)
If the flag is not set, the following methods will do nothing.
There are two possibilities to dynamically create or delete synapses:
Two methods of the
Dendrite class are available for creating/deleting synapses:
create_synapse()
prune_synapse()
Let’s suppose that we want to add regularly new synapses between strongly active but not yet connected neurons with a low probability. One could for example define a neuron type with an additional variable averaging the firing rate over a long period of time.
LeakyIntegratorNeuron = Neuron( parameters=""" tau = 10.0 baseline = -0.2 tau_mean = 100000.0 """, equations = """ tau * dmp/dt + mp = baseline + sum(exc) r = pos(mp) tau_mean * dmean_r/dt = (r - mean_r) : init = 0.0 """ )
Two populations are created and connected using a sparse connectivity:
pop1 = Population(1000, LeakyIntegratorNeuron) pop2 = Population(1000, LeakyIntegratorNeuron) proj = Projection(pop1, pop2, 'exc', Oja).connect_fixed_probability(weights = 1.0, probability=0.1)
After an initial period of simulation, one could add new synapses between strongly active pair of neurons:
# For all post-synaptic neurons for post in xrange(pop2.size): # For all pre-synaptic neurons for pre in xrange(pop1.size): # If the neurons are not connected yet if not pre in proj[post].ranks: # If they are both sufficientely active if pop1[pre].mean_r * pop2[post].mean_r > 0.7: # Add a synapse with weight 1.0 and the default delay proj[post].create_synapse(pre, 1.0)
create_synapse only allows to specify the value of the weight and the delay. Other syanptic variables will take the value they would have had before compile(). If another value is desired, it should be explicitely set afterwards.
Removing useless synapses (pruning) is also possible. Let’s consider a synapse type whose “age” is incremented as long as both pre- and post-synaptic neurons are inactive at the same time:
AgingSynapse = Synapse( equations=""" age = if pre.r * post.r > 0.0 : 0 else : age + 1 : init = 0, int """ )
One could periodically track the too “old” synapses and remove them:
# Threshold on the age: T = 100000 # For all post-synaptic neurons receiving synapses for post in proj.post_ranks: # For all existing synapses for pre in proj[post].ranks: # If the synapse is too old if proj[post][pre].age > T : # Remove it proj[post].prune_synapse(pre)
Warning
This form of structural plasticity is rather slow because:
forloops are in Python, not C++. Implementing this structural plasticity in Cython should already help.
It is of course the user’s responsability to balance synapse creation/destruction, otherwise projections could become either empty or fully connected on the long-term.
Conditions for creating or deleting synapses can also be specified in the synapse description, through the
creating or
pruning arguments. Thise arguments accept string descriptions of the boolean conditions at which a synapse should be created/deleted, using the same notation as other arguments.
The creation of a synapse must be described by a boolean expression:
CreatingSynapse = Synapse( parameters = " ... ", equations = " ... ", creating = "pre.mean_r * post.mean_r > 0.7 : proba = 0.5, w = 1.0" )
The condition can make use of any pre- or post-synaptic variable, but NOT synaptic variables, as they obviously do not exist yet. Global parameters (defined with the
postsynaptic or
projection flags) can nevertheless be used.
Several flags can be passed to the expression:
probaspecifies the probability according to which a synapse will be created, if the condition is met. The default is 1.0 (i.e. a synapse will be created whenever the condition is fulfilled).
wspecifies the value for the weight which will be created (default: 0.0).
dspecifies the delay (default: the same as all other synapses if the delay is constant in the projection,
dtotherwise).
Warning
Note that the new value for the delay can not exceed the maximal delay in the projection, nor be different from the others if they were all equal.
Other synaptic variables will take the default value after creation.
Synapse creation is not automatically enabled at the start of the simulation: the Projectiom method
start_creating() must be called:
proj.start_creating(period=100.0)
This method accepts a
period parameter specifying how often the conditions for creating synapses will be checked (in ms). By default they would be checked at each time step (
dt), what would be too costly.
Similarly, the
stop_creating() method can be called to stop the creation conditions from being checked.
Synaptic pruning also rely on a boolean expression:
PruningSynapse = Synapse( parameters = " T = 100000 : int, projection ", equations = """ age = if pre.r * post.r > 0.0 : 0 else : age + 1 : init = 0, int""", pruning = "age > T : proba = 0.5" )
creatingand
pruningarguments.
pruningargument can rely on synaptic variables (here
age), as the synapse already exist.
probaflag can be passed to specify the probability at which the synapse will be deleted if the condition is met.
start_pruning()and
stop_pruning()methods.
start_pruning()accepts a
periodargument. | https://annarchy.readthedocs.io/en/stable/manual/StructuralPlasticity.html | 2019-04-18T13:12:57 | CC-MAIN-2019-18 | 1555578517639.17 | [] | annarchy.readthedocs.io |
Welcome to the BookBrainz Developer Docs!¶
This documentation is intended to act as a guide for developers working on or with the BookBrainz project, describing the system, its modules and functions. The BookBrainz webservice is also fully documented here.
For a description of the BookBrainz and end-user oriented documentation, please see the BookBrainz User Guide. | https://bbdocs-dev.readthedocs.io/en/latest/ | 2019-04-18T12:58:05 | CC-MAIN-2019-18 | 1555578517639.17 | [] | bbdocs-dev.readthedocs.io |
The following steps need to be performed when using a Windows server as a host to LANSA Client. If you wish to use an IBM i as a host to LANSA Client, refer to Prepare an IBM i as the Host for LANSA Client.
These steps are based on the demonstration material provided by the Visual LANSA Partition Initialization process. Using the demonstration partition will allow you to quickly test LANSA Client. To execute LANSA Client against your own data, you will need to load your application files into Visual LANSA.
Before you Start
You will need the LANSA Windows CD-ROM to perform these steps.
LANSA Client Server Setup
When using LANSA Client with a Windows Server, you must complete the following tasks on the server:
1. Install Visual LANSA on the Windows Server.
2. Set up communications using the LANSA Communications Administrator.
Select Settings and Administration from the LANSA system's desktop folder and select LANSA Communications Administrator from the sub menu.
3. In the LANSA Communications Administrator's dialog:
Select the Advanced option on the Menu bar and choose Listener.
Change the Number of Threads to 2.
Press the Start Listener button.
4. Install the LANSA Client definitions. This is an automatic process carried out as part of the Partition Initialization process when you log on to Visual LANSA. To do this:
a. Logon on to Visual LANSA.
b. Enter your Password and, before you press OK, select the Partition Initialization button.
c. Press OK and the Partition Initialization dialog box will open.
In the Partition Initialization dialog box, select the software that you wish to install.
d. For LANSA Client, select (i.e. tick) the LANSA Client field and file definitions. Press OK. There will be a short delay while the fields and files are installed.
5. Perform SYSEXPORT: From your LANSA System's desktop folder, select Settings and Administration, Utilities and then SYSEXPORT.
6. Run the *CLTEXPORT: From your LANSA System's desktop folder, select Execute Applications and then Exec Process.
7. Add the LANSA Client Licenses by selecting the Licensing - Server Licenses in the Settings and Administration folder. You should have been provided with an xml file. From the New tab, select the xml file then apply the license. If successful, license details should appear in the Applied tab.
8. Confirm that the x_lansa.pro file exists in the directory which corresponds to the LANSA Client partition. For example, the DEM partition has a directory named <drive>:\Program Files (x86)\lansa\x_win95\x_lansa\x_dem.
If x_lansa.pro does not exist, create a text file named x_lansa.pro. For example:
DBUT=MSSQLS
DBII=LX_LANSA
GUSR=QOTHPRDOWN
9. Save the new x_lansa.pro file in <drive>:\Program Files (x86)\lansa\x_win95\x_lansa\x_dem directory (Assuming "x_dem" is the directory that you are using). | https://docs.lansa.com/14/en/lansa038/content/lansa/lclengbf_0055.htm | 2019-04-18T12:17:56 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.lansa.com |
What Is AWS Batch?
AWS, similar to traditional batch computing software. This service can efficiently provision resources in response to jobs submitted in order to eliminate capacity constraints, reduce compute costs, and deliver results quickly.
As a fully managed service, AWS Batch enables you.
Components of AWS Batch
AWS Batch is a regional service that simplifies running batch jobs across multiple Availability Zones within a region. You can create AWS Batch compute environments within a new or existing VPC. After a compute environment is up and associated with a job queue, you can define job definitions that specify which Docker container images to run your jobs. Container images are stored in and pulled from container registries, which may exist within or outside of your AWS infrastructure.
Jobs
A unit of work (such as a shell script, a Linux executable, or a Docker container image) that you submit to AWS Batch. It has a name, and runs as a containerized application on an Amazon EC2 instance in your compute environment, using parameters that you specify in a job definition. Jobs can reference other jobs by name or by ID, and can be dependent on the successful completion of other jobs. For more information, see Jobs.
Job Definitions
A job definition specifies how jobs are to be run; you can think of it as a blueprint for the resources in your job. You can supply your job with an IAM role to provide programmatic access to other AWS resources, and you specify both memory and CPU requirements. The job definition can also control container properties, environment variables, and mount points for persistent storage. Many of the specifications in a job definition can be overridden by specifying new values when submitting individual Jobs. For more information, see Job Definitions
Job Queues
When you submit an AWS Batch job, you submit it to a particular job queue, where it resides until it is scheduled onto a compute environment. You associate one or more compute environments with a job queue, and you can assign priority values for these compute environments and even across job queues themselves. For example, you could have a high priority queue that you submit time-sensitive jobs to, and a low priority queue for jobs that can run anytime when compute resources are cheaper.
Compute Environment
A compute environment is a set of managed or unmanaged compute resources that are
used to run jobs. Managed compute. AWS Batch will efficiently launch, manage, and
terminate EC2 instances as needed. You can also manage your own compute
environments. In this case you are responsible for setting up and scaling the
instances in an Amazon ECS cluster that AWS Batch creates for you. For more information,
see Compute Environments.
Getting Started
Get started with AWS Batch by creating a job definition, compute environment, and a job queue in the AWS Batch console.
The AWS Batch first-run wizard gives you the option of creating a compute environment and a job queue and submitting a sample hello world job. If you already have a Docker image you would like to launch in AWS Batch, you can create a job definition with that image and submit that to your queue instead. For more information, see Getting Started with AWS Batch. | https://docs.aws.amazon.com/batch/latest/userguide/what-is-batch.html | 2019-04-18T12:53:17 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.aws.amazon.com |
You are viewing documentation for version 3 of the AWS SDK for Ruby. Version 2 documentation can be found here.
Class: Aws::Lightsail::Types::DeleteDomainRequest
- Defined in:
- gems/aws-sdk-lightsail/lib/aws-sdk-lightsail/types.rb
Overview
Note:
When making an API call, you may pass DeleteDomainRequest data as a hash:
{ domain_name: "DomainName", # required }
Instance Attribute Summary collapse
- #domain_name ⇒ String
The specific domain name to delete.
Instance Attribute Details
#domain_name ⇒ String
The specific domain name to delete. | https://docs.aws.amazon.com/sdk-for-ruby/v3/api/Aws/Lightsail/Types/DeleteDomainRequest.html | 2019-04-18T13:18:18 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.aws.amazon.com |
Bitnami ownCloud Stack for Microsoft Azure
ownCloud is a file storage and sharing server that is hosted in your own cloud account. Access, update, and sync your photos, files, calendars, and contacts on any device, on a platform that you own.
Need more help? Find below detailed instructions for solving complex issues. | https://docs.bitnami.com/azure/apps/owncloud/ | 2019-04-18T13:38:10 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.bitnami.com |
Summary
This guide covers single culvert crossings which are the most common structure used to cross small to medium sized rivers. Culverts are relatively easy to install and low cost compared to other crossing structures. Designed, constructed and maintained correctly, they will endure, but careful planning and installation is required to prevent failure and ensure fish passage.
File type:
File size:
2.87 MB
Pages:
10
cloud_download Download document | https://docs.nzfoa.org.nz/forest-practice-guides/crossings/3.4-crossings-single-culvert-river-crossings/ | 2019-04-18T12:21:51 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.nzfoa.org.nz |
Welcome to WebExtensions Experiments¶
WebExtensions are a cross-browser system for developing browser add-ons.
WebExtensions Experiments allow developers to write experimental WebExtensions APIs for Firefox. They can be used to prototype APIs for landing in Firefox, or for use on Nightly or Developer Edition.
Information about policies for WebExtensions APIs and how to request a new API are available on the Mozilla wiki.
Contents:
This documentation lives on github, pull requests and issues are welcome. | https://webextensions-experiments.readthedocs.io/en/latest/ | 2019-04-18T12:57:14 | CC-MAIN-2019-18 | 1555578517639.17 | [] | webextensions-experiments.readthedocs.io |
You just need to download the latest version of ReMository and the latest Quickdown. After you install Remoistory and Quickdown and make sure you enable Quickdown mambots then you do the following:
Here is the code to show the video:
{mosmodule video=}
Here is the code to show your download item from ReMository:
{quickdown:[id]}
You can see a working sample here:... | http://docs.ongetc.com/?q=content/mosmodule-how-embed-video-w-remository | 2019-04-18T12:48:29 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.ongetc.com |
OpenID Connect
HelloID now supports OpenID as an Authentication mechanism. New OpenID applications will be added to the HelloID application catalog and, as an administrator, you can now easily configure OpenID applications via the predefined wizard.
Group OU
In the configuration, a secondary filter is provided to add scoping to groups that are synchronized from the Active Directory, including a deletion threshold.
Self-service administration
The administrator now has the power to assign, request and return products on behalf of users.
The information and notification email templates, which are used in HelloID to send emails, are now updated and have an improved design. | https://docs.helloid.com/hc/en-us/articles/360003822434-HelloID-version-4-4-4-0 | 2019-04-18T12:44:13 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.helloid.com |
Integrations
Google Analytics provides comprehensive analytics solutions, including event, demographic, ecommerce, funnel, crash, and exception reporting.
mParticle supports Google Analytics Mobile App Analytics through our mobile SDKs and platform forwarding functionality. Data collection is enabled through SDK instrumentation. Once your app is properly configured, it is ingested into the mParticle platform, which maps inbound data to Google Analytics features and their required formats, and then forwards the data to Google Analytics.
If you are new to setting up Google’s Mobile App Analytics, start with Google’s Mobile App Analytics docs
When mParticle sends data server-to-server to Google Analytics, we utilize Google’s Measurement Protocol. This allows mParticle to implement server side data forwarding and supports our value proposition to customers of not requiring that additional app SDK components be continually added and updated for integrations. A Measurement Protocol overview can be found on Google’s site here:
You will need a Google Analytics account and a new app property for every app that you want to track. A Google Analytics tracking id is automatically generated for each property and you will need this when configuring Google Analytics integration in the mParticle console. We are using the term “logical app” here because as a Google Analytics best practice you will want to track different physical platforms of the same app in the same property. For example, if you have an iOS app and an Android app with the same functionality that represents one logical app, but two physical apps, and as a result you would want to use the same tracking id for both. You can then setup new Google Analytics views within the same property for each app platform to have reporting by platform/physical app. If your iOS and Android apps differ significantly in terms of usage and data capture you will want to track in different properties and tracking ids.
While mParticle forwards all data in real time, Google Analytics has a processing latency of 24-48 hours. See their documentation for more information on latency and hit limits.
Google Analytics has limits around the number of custom dimensions and custom metrics as noted here:
If AppName is not available, then mParticle will not forward events to Google Analytics -.
For each supported feature below, a detailed description is provided in the Supported Feature Reference Section that follows.
One of the most important decisions to make when setting up your Google Analytics implementation is how to identify your users. Google Analytics does not allow Personally Identifiable Information to be uploaded, but you still need a unique identifier, so that you can track how many unique users you have and to keep each user’s activity separate. Google’s Measurement Protocol allows for two types of identifier:
cid) must be a UUIDv4.
uid) can be any string but must not contain personally identifiable information.
There are two basic options for generating Client ID. The default is to have mParticle generate a
cid for you. If you select this option, mParticle will generate a UUIDv4 for each device based on device and application metadata. This option is recommended if your app is not already being tracked in Google Analytics.
Alternatively, you can choose to use one of your
Other identity types as the
cid, by selecting it in the Configuration Settings. If you choose this option, you must ensure that the value you set for this identity type is a valid UUIDv4.
mParticle uses the following rules to set
cid:
Default, mParticle will generate a default
cidbased on device and app metadata.
If your Client ID Type is one of your
Other types, mParticle will do one of the following depending on the identity value for the user.
cid.
cidbased on device and app metadata.
mParticle gives you the option to send a hash of your Customer ID as the
uid by setting Hash Customer ID in your Connection Settings.
If you are intending to send feed data from a feed which is not bound to one of your native platforms, you will need to make sure mParticle has enough information to generate at least one unique ID for each user. Without Device information, mParticle may generate the same
cid value for all event data received via an unbound feed. In Google Analytics, this will look like a lot of activity from a single user. To prevent this, make sure your incoming data contains Customer ID values and set Hash Customer ID to
true. When mParticle processes event data from an unbound feed with a Customer ID value, mParticle will set only a
uid to prevent issues in Google Analytics that arise from multiple users having the same
cid.
Google does not allow any data to be uploaded to Google Analytics that allows for an individual to be personally identifiable. For example, certain names, social security numbers, email addresses, or any similar data is expressly not allowed per Google Policy. Likewise, any data that permanently identifies a particular device is not allowed to be uploaded to Google (such as a mobile phone’s unique device identifier if such an identifier cannot be reset - even in hashed form).
This section provides detailed implementation guidance for each of the supported features.
mParticle forwards events with MessageType = CrashReport to Google Analytics with the following logic:
logErrorEventWithExceptionmethod is implemented in the app to log handled exceptions, they will be forwarded to Google Analytics accordingly.
beginUncaughtExceptionLogging/
endUncaughtExceptionLoggingmethods are implemented, app crashes will be captured and forwarded to Google Analytics.
Additional Crash Handling setup can be configured for your app.
mParticle supports 200 Custom Dimensions. You can use them to collect and analyze data that Google Analytics doesn’t automatically track. Click here for instructions on how to create custom dimensions in Google Analytics.
Once you have created the custom metrics/dimensions in Google Analytics, you can map the information in mParticle Connection settings by specifying an event attribute, a user attribute, or a product attribute.
mParticle supports both Google Analytics eCommerce and Advanced eCommerce features. In order to use the Advanced eCommerce tracking, you must enable Enhanced ECommerce Settings from the Admin section of the Google Analytics Web Interface. You must also enable the mParticle “Enable Enhanced Ecommerce” setting.
You can send in-app purchase data or any kind of transaction data to Google Analytics via eCommerce tracking. To make sure Google Analytics integration functions properly, app developers need to pass necessary information to mParticle so that mParticle can format the transaction data properly and forward it to Google Analytics.
An incoming event can have the following attributes:
Additional eCommerce Tracking Guidance
logEcommerceTransactionWithProduct.If your eCommerce transaction has multiple SKUs, you will need to call the method once for each SKU.
You can associate Google Analytics custom flags with an AppEvent via the Custom Flags APIs provided by the mParticle SDKs. See the table below to determine the correct Custom Flag to append to an AppEvent for your desired Google Analytics category, label, and value. The name of the event is passed as the Event Action (Google Analytics ea parameter).
For HitType, by default on web, pageviews are logged as HitType
pageview, and all other events including commerce events are logged as HitType
event. While these are the default and most common HitTypes, you can customize these using Custom Flags to be any types that Google allows (
pageview,
screenview,
event,
transaction,
item,
social,
exception,
timing).
See the code samples below and the SDK docs for help setting custom flags with the mParticle iOS and Android SDKs.
MPEvent *event = [[MPEvent alloc] initWithName:@"Set Category" type:MPEventTypeUserPreference; [event addCustomFlag:@"Music" withKey:@"Google.Category"]; [[MParticle sharedInstance] logEvent:event];
MPEvent event = new MPEvent.Builder("Set Category", MParticle.EventType.UserPreference) .addCustomFlag("Google.Category", "Music") .build(); MParticle.getInstance().logEvent(event);
mParticle maps logged events to Google Analytic’s event structure as follows:
Screens in Google Analytics represent content users are viewing within an app.
mParticle’s session management scheme will be used, which is different from Google Analytics. mParticle will forward session start and end messages to Google Analytics as follows:
mParticle will forward any events with EventType = Social to Google Analytics as social events. Below is a list of attributes for social interactions that Google Analytics require and how they are mapped.
App developers can measure how long an event takes and log it by the
logEvent method and passing in “eventLength” parameter.
On a logged event, if eventLength is > 0, mParticle will forward the event to Google Analytics as both an event (event hit type) and a user timing measurement (timing hit type). When forwarding as a timing hit type, the data is mapped as follows.
Since mParticle sends the data as two different hit types, two URLs are sent. For example, an event called “Update Profile” with eventLength = 1914 ms will trigger the following two URLs being sent to Google Analytics.
Event hit:
Timing hit:
The ‘ec’ for the event hit types matches the ‘utc’ in timing hit type, ‘ea’ will match ‘utv’, and ‘el’ will match ‘utl’.
To handle Campaign Parameters, mParticle will forward user attributes to Google Analytics as noted below.
Was this page helpful? | https://docs.mparticle.com/integrations/google-analytics/ | 2019-04-18T13:09:30 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.mparticle.com |
SimplyRETS Wordpress Style Guide
The SimplyRETS Wordpress plugin generates HTML for the listings shown on your site. It uses a standard set of classes and id's that is compatible with most themes out there. It will even use most of your theme's styling like colors, font-sizes, etc. In the case that you would like to customize the plugin even more for your theme, we've made it simple for you. Use the style guide below to easily find which element for which you need to add style. These classes and id's are versioned and safe to use, so you don't need to worry about your custom styles breaking when you upgrade. Like always, you should still use a child theme or a plugin like Add Custom CSS to add styles.
simply-rets-client.css - Copyright (C) Reichert Brothers 2014 Licensed under the GNU GPL version 3 or later. | https://docs.simplyrets.com/simply-rets-client.html | 2019-04-18T13:16:01 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.simplyrets.com |
By creating a custom provider you'll get an unique URL to open positions by opening on your browser or use it with TradingView or any other app.
Click on “Create” on the “Create Your Own Provider” box.
Set a name and description and click on “Create”.
Once done, you’ll see an URL to open positions. The only thing you need to change there is the Exchange (only Binance for now) and Pair.
Example URL:
In this case, by opening this URL with your browser, we'll execute an XRP buy with the amount you set by default in your settings.
Do not share this URL. Take into consideration that if you open it multiple times in your browser, multiple positions will be created. Be careful using this URL. | https://docs.zignaly.com/providers/custom-provider | 2019-04-18T12:26:58 | CC-MAIN-2019-18 | 1555578517639.17 | [] | docs.zignaly.com |
You are viewing documentation for version 2 of the AWS SDK for Ruby. Version 3 documentation can be found here.
Class: Aws::Route53Domains::Types::DomainSuggestion
- Defined in:
- (unknown)
Overview
Information about one suggested domain name.. Amazon. | http://docs.aws.amazon.com/sdkforruby/api/Aws/Route53Domains/Types/DomainSuggestion.html | 2017-10-17T02:08:14 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.aws.amazon.com |
Scale and zoom¶
It is common for different rules to be applied at different zoom levels on a web map.
For example, on a roads layer, you would not not want to display every single road when viewing the whole world. Or perhaps you may wish to styles the same features differently depending on the zoom level. For example: a cities layer styled using points at low zoom levels (when “zoomed out”) and with polygon borders at higher zoom levels (“zoomed in”).
YSLD allows rules to be applied depending on the the scale or zoom level. You can specify by scale, or you can define zoom levels in terms of scales and specify by zoom level.
Warning
Be aware that scales for a layer (where a style is applied) may interact differently when the layer is contained in a map, if the map has a different coordinate reference system from the layer.
Scale syntax¶
The syntax for using a scale conditional parameter in a rule is:
rules: - ... scale: [<min>,<max>] ...
where:
Note
It is not possible to use an expression for any of these values.
Use the literal strings min and max to denote where there are no lower or upper scale boundaries. For example, to denote that the scale is anything less than some <max> value:
scale: [min,<max>]
To denote that the scale is anything greater than or equal to some <min> value:
scale: [<min>,max]
Note
In the above examples, min and max are always literals, entered exactly like that, while <min> and <max> would be replaced by actual scalar values.
If the scale parameter is omitted entirely, then the rule will apply at all scales.
Scale examples¶
Three rules, all applicable at different scales:
rule: - name: large_scale scale: [min,100000] symbolizers: - line: stroke-width: 3 stroke-color: '#0165CD' - name: medium_scale scale: [100000,200000] symbolizers: - line: stroke-width: 2 stroke-color: '#0165CD' - name: small_scale scale: [200000,max] symbolizers: - line: stroke-width: 1 stroke-color: '#0165CD'
This example will display lines with:
- A stroke width of 3 at scales less than 100,000 (large_scale)
- A stroke width of 2 at scales between 100,000 and 200,000 (medium_scale)
- A stroke width of 1 at scales greater than 200,000 (small_scale)
Given the rules above, the following arbitrary sample scales would map to the rules as follows:
Note the edge cases, since the min value is inclusive and the max value is exclusive.
Scientific notation for scales¶
To make comprehension easier and to lessen the chance of errors, scale values can be expressed in scientific notation.
So a scale of 500000000, which is equal to 5 × 10^8 (a 5 with eight zeros), can be replaced by 5e8.
Relationship between scale and zoom¶
When working with web maps, often it is more convenient to talk about zoom levels instead of scales. The relationship between zoom and scale is context dependent.
For example, for EPSG:4326 with world boundaries, zoom level 0 (completely zoomed out) corresponds to a scale of approximately 279,541,000 with each subsequent zoom level having half the scale value. For EPSG:3857 (Web Mercator) with world boundaries, zoom level 0 corresponds to a scale of approximately 559,082,000, again with each subsequent zoom level having half the scale value.
But since zoom levels are discrete (0, 1, 2, etc.) and scale levels are continuous, it’s actually a range of scale levels that corresponds to a given zoom level.
For example, if you have a situation where a zoom level 0 corresponds to a scale of 1,000,000 (and each subsequent zoom level is half that scale, as is common), you can set the scale values of your rules to be:
- scale: [750000,1500000] (includes 1,000,000)
- scale: [340000,750000] (includes 500,000)
- scale: [160000,340000] (includes 250,000)
- scale: [80000,160000] (includes 125,000)
- etc.
Also be aware of the inverse relationship between scale and zoom; as the zoom level increases, the scale decreases.
Zoom syntax¶
In certain limited cases, it can be more useful to specify scales by way of zoom levels for predefined gridsets. These can be any predefined gridsets in GeoServer.
Inside a rule, the syntax for using zoom levels is:
rules: - ... zoom: [<min>, <max>] ...
where:
Note
It is not possible to use an expression for any of these values.
As with scales, use the literal strings min and max to denote where there are no lower or upper scale boundaries. For example, to denote that the zoom level is anything less than some <max> value:
zoom: [min,<max>]
To denote that the zoom level is anything greater than or equal to some <min> value:
zoom: [<min>,max]
Note
In the above examples, min and max are always literals, entered exactly like that, while <min> and <max> would be replaced by actual scalar values.
The scale and zoom parameters should not be used together in a rule (but if used, scale takes priority over zoom).
Specifying a grid¶
While every web map can have zoom levels, the specific relationship between a zoom level and its scale is dependent on the gridset (spatial reference system, extent, etc.) used.
So when specifying zoom levels in YSLD, you should also specify the grid.
The grid parameter should remain at the top of the YSLD content, above any Feature Styles or Rules. The syntax is:
grid: name: <string>
where:
Note
As many web maps use “web mercator” (also known as EPSG:3857 or EPSG:900913), this is assumed to be the default if no grid is specified.
Warning
As multiple gridsets can contain the same SRS, we recommend naming custom gridsets by something other than the EPSG code.
Zoom examples¶
Default gridset¶
Given the default of web mercator (also known as EPSG:3857 or EPSG:900913), which requires no grid designation, this defines zoom levels as the following scale levels (rounded to the nearest whole number below):
Named gridsets¶
For the existing gridset of WGS84 (often known as EPSG:4326):
grid: name: WGS84
This defines zoom levels as the following scale levels (rounded to the nearest whole number below):
Given a custom named gridset called NYLongIslandFtUS, defined by a CRS of EPSG:2263 and using its full extent:
grid: name: NYLongIslandFtUS
This defines zoom levels as the following (rounded to the nearest whole number below):
Note
These scale values can be verified in GeoServer on the Gridsets page under the definition for the gridset:
Gridset defined in GeoServer
Specifically, note the Scale values under Tile Matrix Set. | http://docs.geoserver.org/latest/en/user/styling/ysld/reference/scalezoom.html | 2017-10-17T02:07:50 | CC-MAIN-2017-43 | 1508187820556.7 | [array(['../../../_images/scalezoom_customgridset.png',
'../../../_images/scalezoom_customgridset.png'], dtype=object)] | docs.geoserver.org |
Creating an Amazon SNS Topic for Budget Notifications
When you create a budget that sends notifications to an Amazon Simple Notification Service (Amazon SNS) topic, you need to either have a pre-existing Amazon SNS topic or create an Amazon SNS topic. Amazon SNS topics allow you to send notifications over SMS in addition to email. Your budget must have permissions to send a notification to your topic.
To create an Amazon SNS topic and grant permissions to your budget, use the Amazon SNS console.
To create an Amazon SNS notification topic and grant permissions
Sign in to the AWS Management Console and open the Amazon SNS console at.
On the navigation pane, choose Topics.
On the Topics page, choose Create new topic.
In the dialog box, for Topic name, type the name for your notification topic.
In the dialog box, for Display name, type the name that you want displayed when you receive a notification.
Choose Create topic. Your topic appears in the list of topics on the Topics page.
Select your topic, and copy the ARN next to your topic name.
For Actions, choose Edit topic policy.
In the dialog box, choose Advanced view.
In the policy text field, after "Statement": [, add the following text:Copy
{ "Sid":
"ExampleSid123456789012", "Effect": "Allow", "Principal": { "Service": "budgets.amazonaws.com" }, "Action": "SNS:Publish", "Resource":
"your topic ARN"}
Replace ExampleSid123456789012 with a string. The Sid must be unique within the policy.
Replace
your topic ARNwith the Amazon SNS topic ARN from step seven.
Choose Update policy. This grants your budget permissions to publish to your topic.
You can now use the Amazon SNS topic ARN to set up Amazon SNS notifications for a budget.
Checking or Resending Notification Confirmation Emails
When you create a budget with notifications, you also create Amazon Simple Notification Service (Amazon SNS) notifications. In order for notifications to be sent, you must accept the subscription to the Amazon SNS notification topic.
To confirm that your notification subscriptions have been accepted or to resend a subscription confirmation email, use the Amazon SNS console.
To check your notification status or to resend a notification confirmation email
Sign in to the AWS Management Console and open the Amazon SNS console at.
On the navigation pane, choose Subscriptions.
On the Subscriptions page, for Filter, enter
budget. A list of your budget notifications appears.
Under Subscription ARN, you will see
PendingConfirmationif a subscription has not been accepted. If you do not see a
PendingConfirmation, all of your budget notifications have been activated.
(Optional) To resend a confirmation request, select the subscription with a pending confirmation, and choose Request confirmations. Amazon SNS will send a confirmation request to the email addresses that are subscribed to the notification.
When each owner of an email address receives the email, they must choose the Confirm subscription link to activate the notification. | http://docs.aws.amazon.com/awsaccountbilling/latest/aboutv2/budgets-sns-policy.html | 2017-10-17T01:54:48 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.aws.amazon.com |
This chapter introduces the Evaluated Configuration of MarkLogic Server, which is currently under evaluation for the Common Criteria. This chapter includes the following sections:
The Common Criteria for Information Technology Security Evaluation (the Common Criteria, or CC) and the companion Common Methodology for Information Technology Security Evaluation (CEM) are the technical basis for the Common Criteria Recognition Arrangement (CCRA), which ensures:
MarkLogic Server 9 is currently under evaluation for Common Criteria Evaluation Assurance Level 2 (EAL2+).
For the documentation describing the Common Criteria evaluation process and methodology, see the documents at.
The evaluated configuration of MarkLogic Server is the configuration in which the Common Criteria evaluation was performed. This is a specific version of MarkLogic Server set up in a specific way. That configuration is outlined in this guide. This guide does not explain the various features of MarkLogic Server. For information on the MarkLogic Server features, see the MarkLogic Server documentation.
This guide includes the list of features that cannot be used in an evaluated configuration, along with any needed guidelines for how to exclude these features from your configuration. The evaluated configuration assumes that the configuration is set up according to these guidelines; configurations that do not follow these guidelines are not considered evaluated configurations.
An Authorized Administrator is any user that has the
admin role or any user that has the privilege(s) needed to run the Admin API (
admin-module-read and
admin-module-write), the Security API (any of the privileges in the
security role), or the PKI API (
pki-read and
pki-write). These privileges exist in roles that are installed in the TOE, such as the
security role, or can be added to any role by an Authorized Administrator. Any role that provides access to administering security functional requirements, whether the role is predefined at installation time or user-created (by an Authorized Administrator), must be granted by an Authorized Administrator; it is the responsibility of Authorized Administrators to be aware of these privileges when granting privileges or roles to users. Furthermore, any user who has any such privileges is considered an Authorized Administrator.
Additionally, there are other administrative XQuery built-in functions () that perform functions such as starting and stopping the server, and these functions each have privileges associated with them. Any user that is granted any of the privileges associated with these functions (for example,
xdmp-shutdown) is also considered an Authorized Administrator.
Administrators with the
admin role have full privileges to the system. Administrators who have any of the privileges to run functions in the security-related APIs (Admin API, Security API, PKI API, and XQuery Admin built-in functions) only have those privileges that have been granted to them (via roles) by an Authorized Administrator. Those privileges each protect specific functions or sets of functions; the functions are primitives and must be used in a program with the proper logic in order to perform Security Functional Requirements. It is up to the Authorized Administrator who grants these privileges to determine which privileges a user is granted.
If administration is performed using the Admin API, Security API, PKI API, and/or the built-in Admin functions, those APIs must run against an HTTP or XDBC App Server that is set up to use TLS. Actions against the Admin Interface, HTTP interfaces, and XDBC interfaces are auditable, based on the configuration for the App Server. You should audit actions based on your own security policies.
Only Authorized Administrators can manage the target of evaluation (TOE) using the Admin Interface or using the various XQuery administrative functions included with MarkLogic (the Admin API, the Security API, the PKI API, or the built-in Admin functions). Additionally, all code must be evaluated through an interface that is set up to use TLS. Authorized administrators are assumed to be non-hostile, appropriately trained, and follow proper administrative procedures. For more details about the Authorized Administrator and about performing administrative tasks in MarkLogic Server, see the Administrator's Guide and Security Guide. For more details about the TOE, see Target of Evaluation (TOE).
This section lists the requirements for the target of evaluation (TOE). This is a subset of the platforms in which MarkLogic Server runs (see the Installation Guide for those details), and includes the following parts:
In its evaluated configuration, MarkLogic Server is supported on Red Hat Enterprise Linux 7 (x64). This platform provides the following capabilities that fulfil certain security objectives for the operational environment: its system clock provides a reliable time source that is used by MarkLogic Server to timestamp audit records (OE.TIME); it is a multi-processing platform that provides applications with dedicated processes for their exclusive use, isolating applications from one another in the operational environment (OE.PROCESS). For further details about this platform, see the Installation Guide.
The TOE requires the 9.0 Essential Enterprise Edition of MarkLogic Server, which is enabled by a license key. Contact your sales representative or MarkLogic Support for information about obtaining a license key.
The App Server in which the Admin Interface runs must be configured to use HTTPS. To configure HTTPS on the Admin App Server, follow the procedure described in Configure the Admin App Server to Use HTTPS. Additionally, any App Server where Admin API or Security API functions are run must also be set up to use HTTPS.
Any application that runs in the TOE should have its App Server(s) configured to use HTTPS. To configure HTTPS on an App Server, follow the procedure in Configuring SSL on App Servers in the Security Guide. Additionally, all App Servers must be configured to use digest authentication, which is the default.
MarkLogic Server must be configured so it does not use any features that are not part of the TOE. For details, see Not Allowed in the TOE.
The evaluated configuration requires MarkLogic Server Essential Enterprise 9.0. | http://docs.marklogic.com/guide/common-criteria/intro | 2017-10-17T02:07:43 | CC-MAIN-2017-43 | 1508187820556.7 | [] | docs.marklogic.com |
Installation
Genymotion operation relies on the use of Oracle VM VirtualBox in the background. This enables virtualizing Android operating systems. If you do not already have Oracle VM VirtualBox installed, you will be asked to do so prior to installing Genymotion.
If you already have Oracle VM VirtualBox, make sure you have the version recommended for your operating system, as detailed in section Software.
To install Genymotion, follow the steps corresponding to your operating system.
To download Genymotion for Windows:
- Go to the Genymotion download page.
From this page, you can:
- download the ready-to-run Genymotion installer for Windows (recommended).
This package includes Oracle VM VirtualBox installer.
- download the Windows 32/64-bit package.
In this case, you must first download and install VirtualBox for Windows hosts from the Download VirtualBox page.
When installing VirtualBox, in the Custom setup window, make sure VirtualBox Networking is enabled.
- Save and run the
.exefile.
- Select the setup language and click OK. By default, the Genymotion <![CDATA[ ]].
To download Genymotion for macOS:
- Download and install VirtualBox for OS X hosts from the Download VirtualBox page.
When installing VirtualBox, in the Custom setup window, make sure VirtualBox Networking is enabled.
- When finished, reboot.
- Go to the
Genymotion downloadpage and download the Mac OS X 64-bit package.
- <![CDATA[ ]]>Open the
.dmgfile.
- Drag and drop Genymotion and Genymotion Shell to the Applications directory..
When installing VirtualBox, in the Custom setup window, make sure VirtualBox Networking is enabled.
- Go to the
Genymotion downloadpage and download the Linux package corresponding to your system.
- Run the following commands:
chmod +x <Genymotion installer path>/genymotion-<version>_<arch>.bin
cd <Genymotion installer path>
./genymotion-<version>_<arch>.bin -d <Genymotion installer path>
- Run Genymotion using the following command:
cd <Genymotion installer path>
./genymotion
Make sure that the
dkms package is installed and that it compiles VirtualBox kernel modules each time a new kernel update is available.
To do so, run
sudo /etc/init.d/vboxdrv status. You should get the message "VirtualBox kernel modules (vboxdrv, vboxnetflt, vboxnetadp, vboxpci) are loaded". If not, force VirtualBox kernel modules compilation by running
sudo /etc/init.d/vboxdrv setup.
Make also sure that you are part of the
vboxusers group. If not, run
sudo usermod -a -G vboxusers <login>. | https://docs.genymotion.com/latest/Content/01_Get_Started/Installation.htm | 2018-07-16T04:25:30 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object) ] | docs.genymotion.com |
ClusPropertyValues.Count property
[The Count property is available for use in the operating systems specified in the Requirements section. It may be altered or unavailable in subsequent versions.]
Returns the number of property values in the ClusPropertyValues collection.
This property is read-only.
Syntax
ClusPropertyValues.Count
Property value
Long indicating the number of objects in the collection. | https://docs.microsoft.com/en-us/previous-versions/windows/desktop/mscs/cluspropertyvalues-count | 2018-07-16T05:18:42 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
RCDRKD Extensions
This section describes the RCDRKD debugger extension commands. These commands display WPP trace messages created by drivers. Starting with Windows 8, you no longer need a separate trace message format (TMF) file to parse WPP messages. The TMF information is stored in the regular symbol file (PDB file).
Starting in Windows 10, kernel-mode and user-mode drivers can use Inflight Trace Recorder (IFR) for logging traces. Your kernel-mode driver can use the RCDRKD commands to read messages from the circular buffers, format the messages, and display the messages in the debugger.
Note You cannot use the RCDRKD commands to view UMDF driver logs, UMDF framework logs, and KMDF framework logs. To view those logs, use Windows Driver Framework Extensions (Wdfkd.dll) commands.
The RCDRKD debugger extension commands are implemented in Rcdrkd.dll. To load the RCDRKD commands, enter .load rcdrkd.dll in the debugger.
The following two commands are the primary commands for displaying trace messages.
The following auxiliary commands provide services related to displaying and saving trace messages.
- !rcdrkd.rcdrloglist
- !rcdrkd.rcdrlogsave
- !rcdrkd.rcdrsearchpath
- !rcdrkd.rcdrsettraceprefix
- !rcdrkd.rcdrtmffile
- !rcdrkd.rcdrtraceprtdebug
The !rcdrkd.rcdrhelp displays help for the RCDRKD commands in the debugger.
Related topics
Using the Framework's Event Logger | https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/rcdrkd-extensions | 2018-07-16T05:06:36 | CC-MAIN-2018-30 | 1531676589179.32 | [] | docs.microsoft.com |
Nudging Layers
T-HFND-005-009.
-.
| https://docs.toonboom.com/help/harmony-15/essentials/rigging/nudge-layer.html | 2018-07-16T04:49:59 | CC-MAIN-2018-30 | 1531676589179.32 | [array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'],
dtype=object)
array(['../Resources/Images/HAR/Stage/Breakdown/an_order_2.png', None],
dtype=object) ] | docs.toonboom.com |
The Manage Restrictions function for Child Rate Plans is an optional setting and is enabled in SETUP | SETTINGS | GENERAL SETTINGS. The feature must be set to "Yes" in order to set Restrictions for Child Rate Plans and for the Manage Restrictions function to be visible under Manage Rates. For instructions on how to enable Manage Restrictions, see General Settings
A Child Rate is a rate plan that is setup under a Parent Rate Plan. The rate is "linked" to the Rate of the Parent Rate Plan and calculated by the percentage increase or decrease entered in the Child Rate Plan.
Therefore, the daily rate of a Child Rate Plan will always be calculated from the Parent Rate and can not be changed in Manage Rates. If you want to edit the percentage or description of Child Rate, go to SETUP | RATES | DEFAULT RATES. See Edit Child Rate
However, Child Rate Plans can have unique restrictions, which means the restrictions can be different from the Parent Rate that it is linked to. This is useful if you want to offer a discounted rate only on specific days or with specific MINLOS, etc.
For example, you want to offer a Weekday Special (15% Discount Monday-Thursday with a 2 night minimum). A Child Rate Plan is set up with -15% entered in the Adjustment. Then, in Manage Restrictions, "Arrival" is checked for Friday, Saturday, Sunday to close the rate to arrival and 2 is entered in MINLOS.
These Rate Plan Restrictions are updated to all of the channels you have the Child Rate allocated to. For example, when "Closed" is checked for a specific date, then it will be unavailable for arrival on your website, GDS/OTA Channels.
Managing Restrictions for Child Rates
All changes to Child Rate Plan Restrictions are made in SETUP | RATE | MANAGE RATES after a Child Rate Plan is created., there are two options for changing Child Rate Plan restrictions. See instructions on how to use each option below.
- Single Rate Restrictions Change restrictions for a Single Rate Plan, unless selected for a multiple rate change.
- Multiple Rate Restrictions: Change restrictions for multiple Rate Plans at the same time.
There are four settings for Child Rate Restrictions:
-.
How to change Restrictions for Child Rates:
Step 1: Regardless of which option you choose to change restrictions, Single or Multiple Rate Plans, the first step is to put a check mark in the box called "Manage Restrictions". If this box is not checked, then the Child Rates will not appear in the drop-down list. After you have checked the "Manage Restrictions" box, then choose one or more Child Rate Plans. See below for details on both options.
To display Child Rates in the drop down list, put a check mark in the "Manage Restrictions" box.
Click image to enlarge
To change the Restrictions for Child Rate Plan(s):
- Go to SETUP | RATES | MANAGE RATES.
- "Manage Restrictions": Put a check mark in the box called "Manage Restrictions"
- Select the Child Rate Plan: In the drop down menu, select one or more Child Rate Plans by putting a check mark in the box next to it. See below for examples of each option. Remember that the restriction changes you make will apply to ALL rates in the selected date range.
- Select the date range: Select the beginning end date that you want to change. restriction changes you make will apply to ALL rates in the selected date range. If you are changing rates for a long date range like two years, then remember that it will over ride any short term changes made to specific time periods within the date range.
- Click Display Rates: The Restrictions for the Child Rate Plan will display.
- Enter Restrictions: See above for details on restrictions. Use optional "Save" to save only the specific items you are changing..
Option 1: Single.
Option 2: Multiple. | https://docs.bookingcenter.com/display/MYPMS/Child+Rate+Restrictions | 2018-08-14T17:47:24 | CC-MAIN-2018-34 | 1534221209216.31 | [array(['/download/attachments/7374428/Manage%20Restrictions%20Show%20Child%20Rates.png?version=2&modificationDate=1497636915000&api=v2',
None], dtype=object) ] | docs.bookingcenter.com |
How to adjust the design of the JustOn Lightning app?
JustOn 2.51 switches to Lightning Experience as the primary user interface.
If upgrading from earlier versions to JustOn 2.51 (or later), your org may not reflect the original JustOn app design. You can adjust the logo and the primary color manually:
- Open the Lightning Experience App Manager.
Click to open the Setup menu and select Setup, then open Apps > App Manager.
- Click Edit in the row of JustOn Lightning (Managed).
- Upload the app logo.
- Set the primary color to
#2A4B9B.
- Click Save.
Related information:
JustOn 2.51 Release Notes | https://docs.juston.com/en/jo_faq_lightning_appdesign/ | 2018-08-14T17:26:01 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.juston.com |
Export a responsive dashboard to PDF Export a dashboard as a PDF so you can archive or print it.. About this task Interactive filters that are applied to the dashboard are also applied to the PDF. However, applied breakdowns are not included in the export.Note: To generate the PDF locally, set the interactive filters, click the settings icon, and choose Printer Friendly Version to open the dashboard in a new window or tab. Export the dashboard using the browser's print settings. Limitations: Custom content may not generate as expected when exported to PDF. For more information, see Custom content PDF export limitations. Dashboards that are exported to PDF do not include the dashboard layout. Widgets are stacked on top of each other and take up the full page width. Widgets are exported to a fixed height. Large widgets, such as workbench or list widgets, are truncated. Breakdowns applied to a dashboard are not included in the PDF. Widgets may appear in a different order than on the dashboard. Widget legends may not appear. Coloring on the delta text for single score report widgets is not preserved. The selected time frame at the widget level (for example, three minutes) is not reflected in the PDF file when the Show date range selector is selected at the widget level. Note: PDFs that are sent as emails may not be generated immediately. Procedure Navigate to Self-Service > Dashboards. From the dashboard picker in the upper left, select the dashboard that you want to export. Click the context menu () and select Export to PDF. Configure your print and delivery options. Click Export. ResultThe content is exported to PDF according to the print and delivery options. If the PDF does not generate, identify and resolve any JavaScript errors. | https://docs.servicenow.com/bundle/istanbul-performance-analytics-and-reporting/page/use/performance-analytics/task/t_ExportAHomePageOrDashboardToPDF.html | 2018-08-14T17:15:29 | CC-MAIN-2018-34 | 1534221209216.31 | [] | docs.servicenow.com |
term¶
Specify a termination condition for a PB-(S)AM Brownian dynamics trajectory. The syntax is:
term {type} {options}
where the
options are determined by the
type as follows:
contact {file}
Termination based on molecular contact conditions.
fileis a string for the contact file filename. The contact file has a list formatted as follows:
moltype1 at1 moltype2 at2 distwhere
moltype1and
moltype2are indices of the molecular types,
at1is the index of an atom from the first molecular type,
at2is the index of an atom from the second molecular type, and
distis the maximum distance between the two atoms that defines the contact.
padis distance criterion that will be checked in the case that the true atom contact distance may not be fulfilled.
Note
Sometimes these distances cannot be reached due to the assumption in this model that the molecule is spherical. If this is the case, the atom positions are transformed to the molecule surface and surface points are compared to the pad distance.
{pos} {val} {molecule}
Specify a position termination condition for a given molecule. where
posis one of the following options:
x<=, x>=, y<=, y>=, z<=, z>=, r<=, r>=.
valis the value along the given axis to check against.
moleculeis the molecule index (1 based) according to the order of molecules listed in the
READsection that this condition applies to. This command can be understood as: “Terminate the simulation when molecule
moleculefulfills the condition
pos
val”.
Todo
Add a constant keyword (e.g., like
position) before the
{pos}argument of
term. Documented in
time {val}
- Specify a time termination condition where
valis a floating point number for the trajectory time limit (in picoseconds). | http://apbs-pdb2pqr.readthedocs.io/en/latest/apbs/input/elec/term.html | 2018-08-14T18:16:08 | CC-MAIN-2018-34 | 1534221209216.31 | [] | apbs-pdb2pqr.readthedocs.io |
-3-
at best bring some profit to Germany but none at
all to England.
The Fuhrer declares that German-Polish
problem must be solved and will be solved. He is
however prepared power of
German Reich at its disposal if his colonial demands
which are limited and can be negotiated by peaceable
methods are fulfilled and in this case he is prepared
to fix the longest time limit.
His obligations towards Italy are not
touched; in other words he does not demand, that
England give up her obligations, towards France and
similarly for his own part he cannot withdraw from
his obligations towards Italy.
He also desires to stress irrevocable
determination of Germany never again to enter into
conflict with Russia. The Fuhrer is ready to conclude
agreements with England which as has already been
emphasised would not only guarantee existence of
British Empire in all circumstances as far as
Germany is concerned but also if necessary assure
British Empire of German assistance regardless of
where such assistance should be necessary. The
Fuhrer then would also be ready to accept reasonable
limitation/ | http://docs.fdrlibrary.marist.edu/psf/box31/t295s04.html | 2013-05-18T14:13:50 | CC-MAIN-2013-20 | 1368696382450 | [] | docs.fdrlibrary.marist.edu |
Med 8.04 Educational program approval. The board shall approve only educational programs accredited and approved by the committee on allied health education and accreditation of the American medical association, the commission for accreditation of allied health education programs, or its successor agency.

History: Cr. Register, July, 1984, No. 343, eff. 8-1-84; am. Register, October, 1994, No. 466, eff. 11-1-94; am. Register, December, 1999, No. 528, eff. 1-1-00.
Med 8.05 Panel review of applications; examinations required. The board may use a written examination prepared, administered and scored by the national commission on certification of physician assistants or its successor agency, or a written examination from other professional testing services as approved by the board.
(1) Application. An applicant for examination for licensure as a physician assistant shall submit to the board:
(a) An application on a form prescribed by the board.
Note: An application form may be obtained upon request to the Medical Examining Board office located at 1400 East Washington Avenue, P.O. Box 8935, Madison, Wisconsin 53708.
(b) After July 1, 1993, proof of successful completion of an educational program, as defined in ss. Med 8.02 (4) and 8.04.
(c) Proof of successful completion of the national certifying examination.
(cm) Proof that the applicant is currently certified by the national commission on certification of physician assistants or its successor agency.
(d) The fee specified in s. 440.05 (1), Stats.
(e) An unmounted photograph, approximately 8 by 12 cm., of the applicant taken no more than 60 days prior to the date of application which has on the reverse side a statement of a notary public that the photograph is a true likeness of the applicant.
Med 8.05(2)
(2)
Examinations, panel review of applications.
Med 8.05(2)(a)
(a)
All applicants shall complete the written examination under this section, and an open book examination on statutes and rules governing the practice of physician assistants in Wisconsin.
Med 8.05(2)(b)
(b)
An applicant may be required to complete an oral examination if the applicant:
Med 8.05(2)(b)1.
1.
Has a medical condition which in any way impairs or limits the applicant's ability to practice as a physician assistant with reasonable skill and safety.
Med 8.05(2)(b)2.
2.
Uses chemical substances so as to impair in any way the applicant's ability to practice as a physician assistant with reasonable skill and safety.
Med 8.05(2)(b)3.
3.
Has been disciplined or had certification denied by a licensing or regulatory authority in Wisconsin or another jurisdiction.
Med 8.05(2)(b)4.
4.
Has been convicted of a crime, the circumstances of which substantially relate to the practice of physician assistants.
Med 8.05(2)(b)5.
5.
Has not practiced as a physician assistant for a period of 3 years prior to application, unless the applicant has been graduated from an approved educational program for physician assistants within that period.
Med 8.05(2)(b)6.
6.
Has been found to have been negligent in the practice as a physician assistant or has been a party in a lawsuit in which it was alleged that the applicant has been negligent in the practice of medicine.
Med 8.05(2)(b)7.
7.
Has been diagnosed as suffering from pedophilia, exhibitionism or voyeurism.
Med 8.05(2)(b)8.
8.
Has within the past 2 years engaged in the illegal use of controlled substances.
Med 8.05(2)(b)9.
9.
Has been subject to adverse formal action during the course of physician assistant education, postgraduate training, hospital practice, or other physician assistant employment.
Med 8.05(2)(c)
(c)
An application filed under this chapter shall be reviewed by an application review panel of at least 2 council members designated by the chairperson of the board to determine whether an applicant is required to complete an oral examination under
par.
(a)
. If the application review panel is not able to reach unanimous agreement on whether an applicant is eligible for licensure without completing an oral examination, the application shall be referred to the board for a final determination.
Med 8.05(2)(d)
(d)
Where both written and oral examinations are required they shall be scored separately and the applicant shall achieve a passing grade on both examinations to qualify for a license.
Med 8.05(3)
(3)
Examination failure.
An applicant who fails to receive a passing score on an examination may reapply by payment of the fee specified in
sub.
(1) (d)
. An applicant may reapply twice at not less than 4-month intervals. If an applicant fails the examination 3 times, he or she may not be admitted to an examination unless the applicant submits proof of having completed further professional training or education as the board may prescribe.
Med 8.05 Note
Note:
There is no provision for waiver of examination nor reciprocity under rules in s.
Med 8.05
.
Med 8.05(4)
(4)
Licensure; renewal.
At the time of licensure and each biennial registration of licensure thereafter, a physician assistant shall list with the board the name and address of the supervising physician and shall notify the board within 20 days of any change of a supervising physician.
Med 8.05 History
History:
Cr.
Register, July, 1984, No. 343
, eff. 8-1-84; am. (intro.), r. and recr. (2),
Register, October, 1989, No. 406
, eff. 11-1-89; am. (1) (b), cr. (1) (cm),
Register, July, 1993, No. 451
, eff. 8-1-93; am. (intro.), (1) (intro), (cm), (2) (b) 4., 5., 6., (c) and (4),
Register, October, 1996, No. 490
, eff. 11-1-96; am. (2) (a), (b) (intro.) and 3. to 5., r. and recr. (2) (b) 1. and 2., cr. (2) (b) 7. to 11.,
Register, February, 1997, No. 494
, eff. 3-1-97; am. (intro.), (1) (intro.) and (cm), (2) (b) 5., (c), (d) and (4), r. (2) (b) 10. and 11.,
Register, December, 1999, No. 528
, eff. 1-1-00.
Med 8.053
Med 8.053
Examination review by applicant.
Med 8.053(1)
(1)
An applicant who fails the oral or statutes and rules examination may request a review of that examination by filing a written request and required fee with the board within 30 days of the date on which examination results were mailed.
Med 8.053(2)
(2)
Examination reviews are by appointment only.
Med 8.053(3)
(3)
An applicant may review the statutes and rules examination for not more than one hour.
Med 8.053(4)
(4)
An applicant may review the oral examination for not more than 2 hours.
Med 8.053(5)
(5)
The applicant may not be accompanied during the review by any person other than the proctor.
Med 8.053(6)
(6)
At the beginning of the review, the applicant shall be provided with a copy of the questions, a copy of the applicant's answer sheet or oral tape and a copy of the master answer sheet.
Med 8.053.
Med 8.053(8)
(8)
An applicant may not review the examination more than once.
Med 8.053 History
History:
Cr.
Register, February, 1997, No. 494
, eff. 3-1-97.
Med 8.056
Med 8.056
Board review of examination error claim.
Med 8.056(1)
(1)
An applicant claiming examination error shall file a written request for board review in the board office within 30 days of the date the examination was reviewed. The request shall include all of the following:
Med 8.056(1)(a)
(a)
The applicant's name and address.
Med 8.056(1)(b)
(b)
The type of license for which the applicant applied.
Med 8.056(1)(c)
(c)
A description of the mistakes the applicant believes were made in the examination content, procedures, or scoring, including the specific questions or procedures claimed to be in error.
Med 8.056(1)(d)
(d)
The facts which the applicant intends to prove, including reference text citations or other supporting evidence for the applicant's claim.
Med 8.056(2)
(2)
The board shall review the claim, make a determination of the validity of the objections and notify the applicant in writing of the board's decision and any resulting grade changes.
Med 8.056(3)
(3)
If the decision does not result in the applicant passing the examination, a notice of denial of license shall be issued. If the board issues a notice of denial following its review, the applicant may request a hearing under
s.
SPS 1.05
.
Med 8.056 Note
Note:
The board office is located at 1400 East Washington Avenue, P.O. Box 8935, Madison, Wisconsin 53708.
Med 8.056 History
History:
Cr.
Register, February, 1997, No. 494
, eff. 3-1-97; correction in (3) made under s.
13.92 (4) (b) 7.
, Stats.,
Register November 2011 No. 671
.
Med 8.06
Med 8.06
Temporary license.
Med 8.06(1)
(1)
An applicant for licensure may apply to the board for a temporary license to practice as a physician assistant if the applicant:
Med 8.06(1)(a)
(a)
Remits the fee specified in s.
440.05 (6)
, Stats.
Med 8.06(1)(b)
(b)
Is a graduate of an approved school and is scheduled to take the examination for physician assistants required by
s.
Med 8.05 (1)
or has taken the examination and is awaiting the results; or
Med 8.06(1)(c)
(c)
Submits proof of successful completion of the examination required by
s.
Med 8.05 (1)
and applies for a temporary license no later than 30 days prior to the date scheduled for the next oral examination.
Med 8.06(2)
(2)
Med 8.06(2)(a)
(a)
Except as specified in
par.
(b)
, a temporary license expires on the date the board grants or denies an applicant permanent licensure. Permanent licensure to practice as a physician assistant is deemed denied by the board on the date the applicant is sent notice from the board that he or she has failed the examination required by
s.
Med 8.05 (1) (c)
.
Med 8.06(2)(b)
(b)
A temporary license expires on the first day of the next regularly scheduled oral examination for permanent licensure if the applicant is required to take, but failed to apply for, the examination.
Med 8.06(3)
(3)
A temporary license may not be renewed.
Med 8.06(4)
(4)
An applicant holding a temporary license may apply for one transfer of supervising physician and location during the term of the temporary license.
Med 8.06 History
History:
Cr.
Register, July, 1984, No. 343
, eff. 8-1-84; am. (1) (b) and (c),
Register, October, 1989, No. 406
, eff. 11-1-89; am. (2) (a),
Register, January, 1994, No. 457
, eff. 2-1-94; am. (1) (intro.) and (2) (a),
Register, October, 1996, No. 490
, eff. 11-1-96; am. (1) (intro.) and (b) to (3), cr. (4),
Register, December, 1999, No. 528
, eff. 1-1-00.
Med 8.07
Med 8.07
Practice.
Med 8.07(1)
(1)
Scope and limitations.
In providing medical care, the entire practice of any physician assistant shall be under the supervision of a licensed physician. The scope of practice is limited to providing medical care specified in
sub.
(2)
. A physician assistant's practice may not exceed his or her educational training or experience and may not exceed the scope of practice of the supervising physician. A medical care task assigned by the supervising physician to a physician assistant may not be delegated by the physician assistant to another person.
Med 8.07(2)
(2)
Medical care.
Medical care a physician assistant may provide include:
Med 8.07(2)(a)
(a)
Attending initially a patient of any age in any setting to obtain a personal medical history, perform an appropriate physical examination, and record and present pertinent data concerning the patient in a manner meaningful to the supervising physician.
Med 8.07(2)(b)
(b)
Performing, or assisting in performing, routine diagnostic studies as appropriate for a specific practice setting.
Med 8.07(2)(c)
(c)
Performing routine therapeutic procedures, including, but not limited to, injections, immunizations, and the suturing and care of wounds.
Med 8.07(2)(d)
(d)
Instructing and counseling a patient on physical and mental health, including diet, disease, treatment and normal growth and development.
Med 8.07(2)(e)
(e)
Assisting the supervising physician in a hospital or facility, as defined in s.
50.01 (1m)
, Stats., by assisting in surgery, making patient rounds, recording patient progress notes, compiling and recording detailed narrative case summaries and accurately writing or executing orders under the supervision of a licensed physician.
Med 8.07(2)(f)
(f)
Assisting in the delivery of medical care to a patient by reviewing and monitoring treatment and therapy plans.
Med 8.07(2)(g)
(g)
Performing independently evaluative and treatment procedures necessary to provide an appropriate response to life-threatening emergency situations.
Med 8.07(2)(h)
(h)
Facilitating referral of patients to other appropriate community health-care facilities, agencies and resources.
Med 8.07(2)(i)
(i)
Issuing written prescription orders for drugs under the supervision of a licensed physician and in accordance with procedures specified in
s.
Med 8.08 (2)
.
Med 8.07 History
History:
Cr.
Register, July, 1984, No. 343
, eff. 8-1-84; am. (2) (i),
Register, July, 1994, No. 463
, eff. 8-1-94; am. (1) and (2) (intro.),
Register, October, 1996, No. 490
, eff. 11-1-96; am. (1), (2) (intro.), (c), (e), (f) and (i),
Register, December, 1999, No. 528
, eff. 1-1-00.
Med 8.08
Med 8.08
Prescribing limitations.
Med 8.08(1)
(1)
A physician assistant may not prescribe or dispense any drug independently. A physician assistant may only prescribe or dispense a drug pursuant to written guidelines for supervised prescriptive practice. The guidelines shall be kept on file at the practice site and made available to the board upon request.
Med 8.08(2)
(2)
A physician assistant may issue a prescription order only if all the following conditions apply:
Med 8.08(2)(a)
(a)
The physician assistant issues the prescription order only in patient situations specified and described in established written guidelines, including the categories of drugs for which prescribing authority has been authorized. The guidelines shall be reviewed at least annually by the physician assistant and his or her supervising physician.
Med 8.08(2)(b)
(b)
The supervising physician and physician assistant determine by mutual agreement that the physician assistant is qualified through training and experience to issue a prescription order as specified in the established written guidelines.
Med 8.08(2)(c)
(c)
The supervising physician is available for consultation as specified in
s.
Med 8.10 (3)
.
Med 8.08(2)(d)
(d)
The prescription orders prepared under procedures in this section contain all information required under s.
450.11 (1)
, Stats.
Med 8.08(3)
(3)
Med 8.08(3)(a)
(a)
A physician who supervises the prescribing practice of a physician assistant shall conduct a periodic review of the prescription orders prepared by the physician assistant to ensure quality of care. In conducting the periodic review of the prescriptive practice of a physician assistant, the supervising physician shall do at least one of the following:
Med 8.08(3)(a)1.
1.
Review a selection of the prescription orders prepared by the physician assistant.
Med 8.08(3)(a)2.
2.
Review a selection of the patient records prepared by the physician assistant practicing in the office of the supervising physician or at a facility or a hospital in which the supervising physician has staff privileges.
Med 8.08(3)(a)3.
3.
Review by telecommunications or other electronic means the patient record or prescription orders prepared by the physician assistant who practices in an office facility other than the supervising physician's main office of a facility or hospital in which the supervising physician has staff privileges.
Med 8.08(3)(b)
(b)
The supervising physician shall determine the method and frequency of the periodic review based upon the nature of the prescriptive practice, the experience of the physician assistant, and the welfare of the patients. The process and schedule for review shall indicate the minimum frequency of review and identify the selection of prescriptive orders or patient records to be reviewed.
Med 8.08 History
History:
Cr.
Register, July, 1984, No. 343
, eff. 8-1-84; r. (3),
Register, July, 1994, No. 463
, eff. 8-1-94; am. (1), (2) (intro.), (a), (b), (c), (d), (e) 1., 2. and 3.,
Register, October, 1996, No. 490
, eff. 11-1-96; am. (1) to (2) (d), (e) 2. and 3.,
Register, December, 1999, No. 528
, eff. 1-1-00;
CR 09-006
: am. (1) and (2) (a), r. (2) (e), cr. (3)
Register August 2009 No. 644
, eff. 9-1-09.
Med 8.09
Med 8.09
Employee status.
No physician assistant may be self-employed. If the employer of a physician assistant is other than a licensed physician, the employer shall provide for, and may not interfere with, the supervisory responsibilities of the physician, as defined in
s.
Med 8.02 (6)
and required in
ss.
Med 8.07 (1)
and
8.10
.
Med 8.09 History
History:
Cr.
Register, July, 1984, No. 343
, eff. 8-1-84; am.
Register, October, 1996, No. 490
, eff. 11-1-96.
The given end point is never part of the generated list; range(10) generates a list of 10 values, exactly the legal indices for items of a sequence of length 10.
The break statement, like in C, breaks out of the smallest enclosing for or while loop.
The continue statement, also borrowed from C, continues with the next iteration of the loop.
The pass statement does nothing. It can be used when a statement is required syntactically but the program requires no action. For example:
>>> while 1:
...     pass # Busy-wait for keyboard interrupt
...
We can create a function that writes the Fibonacci series to an arbitrary boundary:
>>> def fib(n):    # write Fibonacci series up to n
...     """Print a Fibonacci series up to n."""
...     a, b = 0, 1
...     while b < n:
...         print b,
...         a, b = b, a+b
...
>>> # Now call the function we just defined:
... fib(2000)
1 1 2 3 5 8 13 21 34 55 89 144 233 377 610 987 1597
The keyword def introduces a function definition. The execution of a function introduces a new symbol table used for its local variables: variable references first look in the local symbol table, then in the global symbol table, and then in the table of built-in names. A function definition also introduces the function name in the current symbol table; that value can be assigned to another name, which can then be used as a function as well. This serves as a general renaming mechanism:
>>> fib
<function object at 10042ed0>
>>> f = fib
>>> f(100)
1 1 2 3 5 8 13 21 34 55 89
The most useful form is to specify a default value for one or more arguments. This creates a function that can be called with fewer arguments than it is defined to allow; for example, a function defined as ask_ok(prompt, retries=4, complaint='Yes or no, please!') can be called either like this:
ask_ok('Do you really want to quit?') or like this:
ask_ok('OK to overwrite the file?', 2). Important warning: the default value is evaluated only once. This makes a difference when the default is a mutable object such as a list or dictionary.
Functions can also be called using keyword arguments of the form keyword = value. No argument may receive a value more than once -- formal parameter names corresponding to positional arguments cannot be used as keywords in the same call. Here's an example that fails due to this restriction:
>>> def function(a):
...     pass
...
>>> function(0, a=0)
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
TypeError: keyword parameter redefined
When a final formal parameter of the form **name is present, it receives a dictionary containing all keyword arguments whose keyword doesn't correspond to a formal parameter. This may be combined with a formal parameter of the form *name, which receives a tuple containing the positional arguments beyond the formal parameter list. For example, if we define a function like this:

def cheeseshop(kind, *arguments, **keywords):
    print "-- Do you have any", kind, '?'
    print "-- I'm sorry, we're all out of", kind
    for arg in arguments: print arg
    print '-'*40
    keys = keywords.keys()
    keys.sort()
    for kw in keys: print kw, ':', keywords[kw]

It could be called like this:

cheeseshop('Limburger', "It's very runny, sir.",
           "It's really very, VERY runny, sir.",
           client='John Cleese',
           shopkeeper='Michael Palin',
           sketch='Cheese Shop Sketch')
and of course it would print:
-- Do you have any Limburger ?
-- I'm sorry, we're all out of Limburger
It's very runny, sir.
It's really very, VERY runny, sir.
----------------------------------------
client : John Cleese
shopkeeper : Michael Palin
sketch : Cheese Shop Sketch
Note that the sort() method of the list of keyword argument names is called before printing the contents of the keywords dictionary; if this is not done, the order in which the arguments are printed is undefined.
By popular demand, a few features commonly found in functional programming languages and Lisp have been added to Python.
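With the lambda keyword, for instance, small anonymous functions can be created; a typical interpreter session, following the tutorial's own incrementor example, looks like this:

>>> def make_incrementor(n):
...     return lambda x: x + n
...
>>> f = make_incrementor(42)
>>> f(0)
42
>>> f(1)
43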
Most Zope components live in the Zope Object DataBase (ZODB). Components that are stored in ZODB are said to be persistent. Creating persistent components is, for the most part, a trivial exercise, but ZODB does impose a few rules that persistent components must obey in order to work properly. This chapter describes the persistence model and the interfaces that persistent objects can use to live inside the ZODB.
Persistent objects are Python objects that live for a long time. Most objects are created when a program is run and die when the program finishes. Persistent objects are not destroyed when the program ends, they are saved in a database.
A great benefit of persistent objects is their transparency. As a developer, you do not need to think about loading and unloading the state of the object from memory. Zope’s persistent machinery handles all of that for you.
This is also a great benefit for application designers; you do not need to create your own kind of “data format” that gets saved to a file and reloaded again when your program stops and starts. Zope’s persistence machinery works with any kind of Python objects (within the bounds of a few simple rules) and as your types of objects grow, your database simply grows transparently with it.
Here is a simple example of using ZODB outside of Zope. If all you plan on doing is using persistent objects with Zope, you can skip this section if you wish.
The first thing you need to do to start working with ZODB is to create a “root object”. This process involves first opening a “storage” , which is the actual backend storage location for your data.
ZODB supports many pluggable storage back-ends, but for the purposes of this article we’re going to show you how to use the ‘FileStorage’ back-end storage, which stores your object data in a file. Other storages include storing objects in relational databases, Berkeley databases, and a client to server storage that just ZODB, see the instructions for downloading ZODB from the ZODB web page.
After installing ZODB, you can start to experiment with it right from the Python command line interpreter. If you’ve installed Zope, before running this set of commands, shut down your Zope server, and “cd” to the “lib/python” directory of your Zope instance. If you’re using a “standalone” version of ZODB, you likely don’t need to do this, and you’ll be able to use ZODB by importing it from a standard Python package directory. In either case, try the following set of commands:
chrism@saints:/opt/zope/lib/python$ python
Python 2.1.1 (#1, Aug 8 2001, 21:17:50)
[GCC 2.95.2 20000220 (Debian GNU/Linux)] on linux2
Type "copyright", "credits" or "license" for more information.
>>> from ZODB import FileStorage, DB
>>> storage = FileStorage.FileStorage('mydatabase.fs')
>>> db = DB( storage )
>>> connection = db.open()
>>> root = connection.root()
Here, you create a storage and use the 'mydatabase.fs' file to store the object information. Then, you create a database that uses that storage.
Next, the database needs to be “opened” by calling the ‘open()’ method. This will return a connection object to the database. The connection object then gives you access to the ‘root’ of the database with the ‘root()’ method.
The ‘root’ object is the dictionary that holds all of your persistent objects. For example, you can store a simple list of strings in the root object:
root['employees'] = ['Bob', 'Mary', 'Jo']
Now, you have changed the persistent database by adding a new object, but this change is so far only temporary. In order to make the change permanent, you must commit the current transaction:
get_transaction().commit()
Transactions are ways to make a lot of changes in one atomic operation. In a later article, we’ll show you how this is a very powerful feature. For now, you can think of committing transactions as “checkpoints” where you save the changes you’ve made to your objects so far. Later on, we’ll show you how to abort those changes, and how to undo them after they are committed.
If you had used a relational database, you would have had to issue a SQL query to save even a simple python list like the above!
Working with simple Python types is useful, but the real power of ZODB comes out when you store your own kinds of objects in the database. For example, consider a class that represents an employee:
from Persistence import Persistent

class Employee(Persistent):

    def setName(self, name):
        self.name = name
Calling ‘setName’ will set a name for the employee. Now, you can put Employee objects in your database:
for name in ['Bob', 'Mary', 'Joe']:
    employee = Employee()
    employee.setName(name)
    root['employees'].append(employee)
get_transaction().commit()
Don’t forget to call ‘commit()’, so that the changes you have made so far are committed to the database, and a new transaction is begun.
There are a few rules that must be followed when your objects are persistent.
In this section, we’ll look at each of these special rules one by one.
The first rules says that your objects must be pickleable. This means that they can be serialized into a data format with the “pickle” module. Most python data types (numbers, lists, dictionaries) can be pickled. Code objects (method, functions, classes) and file objects (files, sockets) cannot be pickled. Instances can be persistent objects if:
The second rule is that none of your objects attributes can begin with ‘_p_’. For example, ‘_p_b_and_j’ would be an illegal object attribute. This is because the persistence machinery reserves all of these names for its own purposes.
The third rule is that all object attributes that begin with ‘_v_’ are “volatile” and are not saved to the database. This means that as long as the persistent object is in Zope memory cache, volatile attributes can be used. When the object is deactivated (removed from memory) volatile attributes are thrown away.
Volatile attributes are useful for data that is good to cache for a while but can often be thrown away and easily recreated. File connections, cached calculations, rendered templates, all of these kinds of things are useful applications of volatile attributes. You must exercise care when using volatile attributes. Since you have little control over when your objects are moved in and out of memory, you never know when your volatile attributes may disappear.
The fourth rule is that you must signal changes to mutable types. This is because persistent objects can’t detect when mutable types change, and therefore, doesn’t know whether or not to save the persistent object or not.
For example, say you had a list of names as an attribute of your object called ‘departments’ that you changed in a method called ‘addDepartment’:
class DepartmentManager(Persistent):

    def __init__(self):
        self.departments = []

    def addDepartment(self, department):
        self.departments.append(department)
When you call the ‘addDepartment’ method you change a mutable type, ‘departments’ but your persistent object will not save that change.
There are two solutions to this problem. First, you can assign a special flag, ‘_p_changed’:
def addDepartment(self, department):
    self.departments.append(department)
    self._p_changed = 1
Remember, ‘_p_’ attributes do something special to the persistence machinery and are reserved names. Assigning 1 to ‘_p_changed’ tells the persistence machinery that you changed the object, and that it should be saved.
Another technique is to use the mutable attribute as though it were immutable. In other words, after you make changes to a mutable object, reassign it:
def addDepartment(self, department):
    departments = self.departments
    departments.append(department)
    self.departments = departments
Here, the ‘self.departments’ attribute was re-assigned at the end of the function to the “working copy” object ‘departments’. This technique is cleaner because it doesn’t have any explicit ‘_p_changed’ settings in it, but this implicit triggering of the persistence machinery should always be understood, otherwise use the explicit syntax.
A final option is to use persistence-aware mutable attributes such as ‘PersistentMapping’, and ‘IOBTree’. ‘PersistentMapping’ is a mapping class that notifies ZODB when you change the mapping. You can use instances of ‘PersistentMapping’ in place of standard Python dictionaries and not worry about signaling change by reassigning the attribute or using ‘_p_changed’. Zope’s Btree classes are also persistent-aware mutable containers. This solution can be cleaner than using mutable objects immutably, or signaling change manually assuming that there is a persistence-aware class available that meets your needs.
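As a sketch of that last option, the departments attribute could be stored in a 'PersistentMapping' so that changes are noticed automatically; the import location shown here is the one used by Zope 2 and may differ in other ZODB releases:

from Persistence import Persistent, PersistentMapping

class DepartmentManager(Persistent):

    def __init__(self):
        # changes to a PersistentMapping are tracked by ZODB automatically
        self.departments = PersistentMapping()

    def addDepartment(self, department_id, department):
        self.departments[department_id] = department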
When changes are saved to ZODB, they are saved in a transaction. This means that either all changes are saved, or none are saved. The reason for this is data consistency. Imagine the following scenario:
Now imagine that an error happens during the last step of this process, sending the payment to sandwich.com. Without transactions, this means that the account was debited, but the payment never went to sandwich.com! Obviously this is a bad situation. A better solution is to make all changes in a transaction:
Now, if an error is raised anywhere between steps 2 and 5, all changes made are thrown away, so if the payment fails to go to sandwich.com, the account won’t be debited, and if debiting the account raises an error, the payment won’t be made to sandwich.com, so your data is always consistent.
When using your persistent objects with Zope, Zope will automatically begin a transaction when a web request is made, and commit the transaction when the request is finished. If an error occurs at any time during that request, then the transaction is aborted, meaning all the changes made are thrown away.
If you want to intentionally abort a transaction in the middle of a request, then just raise an error at any time. For example, this snippet of Python will raise an error and cause the transaction to abort:
raise SandwichError('Not enough peanut butter.')
A more likely scenario is that your code will raise an exception when a problem arises. The great thing about transactions is that you don’t have to include cleanup code to catch exceptions and undo everything you’ve done up to that point. Since the transaction is aborted the changes made in the transaction will not be saved.
Because Zope does transaction management for you, most of the time you do not need to explicitly begin, commit or abort your own transactions. For more information on doing transaction management manually, see the links at the end of this chapter that lead to more detailed tutorials of doing your own ZODB programming.
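If you do manage transactions yourself, for example in a standalone script rather than under Zope's request/response cycle, the usual pattern is a sketch like the following; the two application helpers are placeholders:

try:
    debit_account(account, 5)          # placeholder application code
    send_payment('sandwich.com', 5)    # placeholder application code
    get_transaction().commit()
except:
    # throw away every change made since the last commit
    get_transaction().abort()
    raise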
Zope waits until the transaction is committed to save all the changes to your objects. This means that the changes are saved in memory. If you try to change more objects than you have memory in your computer, your computer will begin to swap and thrash, and maybe even run you out of memory completely. This is bad. The easiest solution to this problem is to not change huge quantities of data in one transaction.
If you need to spread a transaction out over lots of data, however, you can use subtransactions. Subtransactions allow you to manage Zope's memory usage yourself, so as to avoid swapping during large transactions.
Subtransactions allow you to make huge transactions. Rather than being limited by available memory, you are limited by available disk space. Each subtransaction commit writes the current changes out to disk and frees memory to make room for more changes.
To commit a subtransaction, you first need to get hold of a transaction object. Zope adds a function, 'get_transaction', to your global namespace for this; you then call 'commit(1)' on the transaction:
get_transaction().commit(1)
You must balance speed, memory, and temporary storage concerns when deciding how frequently to commit subtransactions. The more subtransactions, the less memory used, the slower the operation, and the more temporary space used. Here's an example of how you might use subtransactions in your Zope code:
tasks_per_subtransaction = 10
i = 0
for task in tasks:
    process(task)
    i = i + 1
    if i % tasks_per_subtransaction == 0:
        get_transaction().commit(1)
This example shows how to commit a subtransaction at regular intervals while processing a number of tasks.
The first case involves threads making lots of changes to objects and writing to the database. The way ZODB and threading works is that each thread that uses the database gets its own connection to the database. Each connection gets its own copy of your object. All of the threads can read and change any of the objects. ZODB keeps all of these objects synchronized between the threads. The upshot is that you don’t have to do any locking or thread synchronization yourself. Your code can act as though it is single threaded.
However, synchronization problems can occur when objects are changed by two different threads at the same time.
Imagine that thread 1 gets its own copy of object A, as does thread 2. If thread 1 changes its copy of A, then thread 2 will not see those changes until thread 1 commits them. In cases where lots of objects are changing, this can cause thread 1 and 2 to try and commit changes to object 1 at the same time.
When this happens, ZODB lets one transaction do the commit (it "wins") and raises a 'ConflictError' in the other thread (which "loses"). The loser can elect to try again, but this may raise yet another 'ConflictError' if many threads are trying to change object A. Zope does all of its own transaction management and will retry a losing transaction three times before giving up and raising the 'ConflictError' all the way up to the user. ZODB also lets a persistent class resolve some conflicts itself by defining a '_p_resolveConflict' method; if that method can merge the competing changes, no 'ConflictError' is raised. For example, a simple counter class could merge conflicting increments like this:

from Persistence import Persistent

class Counter(Persistent):

    def __init__(self):
        self.count = 0

    def hit(self):
        self.count = self.count + 1

    def _p_resolveConflict(self, oldState, savedState, newState):
        savedDiff = savedState['count'] - oldState['count']
        newDiff = newState['count'] - oldState['count']
        oldState['count'] = oldState['count'] + savedDiff + newDiff
        return oldState
In the above example, ‘_p_resolveConflict’ resolves the difference between the two conflicting transactions.
ZODB takes care of thread safety for persistent objects. However, you must handle thread safety yourself for non-persistent objects which are shared between threads.
One tricky type of non-persistent, shared object is the mutable default argument to functions and methods. Default arguments are useful because they are cached for speed and do not need to be recreated every time the method is called. But if these cached default arguments are mutable, one thread may change (mutate) the object while another thread is using it, and that can be bad. So, code like:
def foo(bar=[]):
    bar.append('something')
could get in trouble if two threads execute this code, because lists are mutable. There are two solutions to this problem: avoid using mutable default arguments altogether, or protect every use of the shared default object with a lock.
We recommend the first solution because mutable default arguments are confusing, generally a bad idea in the first place.
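A sketch of the first solution is to use an immutable placeholder such as None and create the list inside the function, so no mutable object is ever shared between threads:

def foo(bar=None):
    if bar is None:
        bar = []          # a fresh list per call, never shared
    bar.append('something')
    return bar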
This chapter has only covered the most important features of ZODB from a Zope developer's perspective. Check out the ZODB documentation and tutorials for more in-depth information.
Efx Library Documentation
- Version: 1.7.99
- Date: 2012
Efx is the effects library.
For a better reference, check the following groups:
- General types and functions.
- Efx Effects Queue
- Efx Follow
- Efx Fade Effect
- Efx Rotation Effects
- Efx Movement Effects
- Efx Resize Effect
Please see the Authors page for contact details. | http://docs.enlightenment.org/auto/efx/ | 2013-05-18T15:06:04 | CC-MAIN-2013-20 | 1368696382450 | [] | docs.enlightenment.org |
Quickly navigate with the Office 365 app launcher and the Dynamics 365 home page
Applies to Dynamics 365 for Customer Engagement apps version 9.x
Dynamics 365 (online) introduces a new app model for Dynamics 365 apps and makes accessing these and Office 365 apps fast and easy.
If you're a Dynamics 365 (online) user with an Office 365 subscription, you're just two clicks away from accessing the family of online apps that are available to you, like Word and Excel Online.
Watch a short video (3:35) about the Dynamics 365 business apps.
For admins and end users: Quickly move between apps with the new Office 365 app launcher
The Office 365 app launcher is built in to all Dynamics and Office 365 apps. Use the app launcher to quickly navigate to your Dynamics application of choice.
If you have an Office 365 subscription, click the app launcher to go to the Office 365 apps and services available to you.
Check your email. Create a Word doc. Get files from your OneDrive. All while staying just two clicks away from getting back to Dynamics 365 (online).
Note
TIP: If you've just started a trial or upgraded to Dynamics 365, you might need to refresh or open a new browser session to see your apps. There might be a delay for your instance to fully provision.
For Microsoft Dynamics 365 Government subscriptions, the Office 365 app launcher will take users to either Dynamics 365 (online) or the Dynamics 365 admin center. Admins will go to the Dynamics 365 admin center.
For admins: Get to the admin center through the Office 365 app launcher
If you're a Dynamics 365 system administrator or an Office 365 global administrator, click the app launcher to see the Admin tile.
Click the Admin tile to go to the Office 365 Admin Center, where you can add users and change passwords.
For admins and end users: Introducing the Dynamics 365 home page
If you've transitioned to December 2016 Update for Dynamics 365 (online), we have a new page for you to manage and open Dynamics 365 apps. Click Dynamics 365 from the app launcher to go to the Dynamics 365 home page (home.dynamics.com).
The new Dynamics 365 home page.
Note
The Dynamics 365 home page is not part of the Microsoft Dynamics 365 Government subscription. Clicking Dynamics 365 takes Microsoft Dynamics 365 Government users to your instance of Dynamics 365 (online) or to the Dynamics 365 admin center.
See the next section to see what you can do with the home page.
View your apps
Any Dynamics 365 app for which you have a license appears as an app module tile on this page. If you have multiple instances of an app, select the tile for the instance you want to open.
In this example, there are two instances of Dynamics 365 (online) displayed.
Tip
If you've just started a trial or upgraded to Dynamics 365, you might need to refresh or open a new browser session to see your apps. There might be a delay for your instance to fully provision.
Note
What is "Dynamics 365 - custom"?
"Dynamics 365 - custom" is the app name for all online organizations with a version 8.1 and lower as well as the default app on 8.2. The name for the 8.2 default app can be changed by the administrator.
What are the tiles on the home page?
Dynamics 365 is introducing a new app model and what you're seeing are Dynamics 365 (online) apps for which you're licensed once you've upgraded to December 2016 Update for Dynamics 365 (online).
Admins: You have options for displaying and naming Dynamics 365 - custom.
Once you update to December 2016 Update for Dynamics 365 (online), you have options. Go to Settings > Administration > System Setting > General tab. Scroll down to Set options for the default app: Dynamics 365 - custom.
Where do I get more information about upgrading to Dynamics 365?
Pin your frequently-used apps
For companies with lots of Dynamics 365 apps, you can do a variety of things to make the home page more manageable. For example, pin your frequently-used apps to the top of your page.
Select the app on the home page.
Click the ellipses (...), and then click Pin this app.
The app will appear at the top of the home page and in the task pane.
Pinned in the home page.
Pinned in the task pane.
Search your apps
If you have a lot of apps, you can search for specific ones.
For admins and end users: Select a Dynamics 365 app from the new app switcher
For customers who have upgraded to December 2016 Update for Dynamics 365 (online) or later, you can use the app switcher in Dynamics 365 (online) to quickly select other Dynamics 365 apps for which you're licensed.
You can pin apps using the ellipses on this menu, which will pin to the menu and to the home page.
See also
Blog: Meet the all new Dynamics 365 Home page
Sign in to Dynamics and Office 365 apps
My Apps on Home.Dynamics.com
Important information for CRM Online customers
Switch from Dynamics CRM Online to Dynamics 365 (online)
Meet the Office 365 app launcher | https://docs.microsoft.com/en-us/dynamics365/customer-engagement/admin/quickly-navigate-office-365-app-launcher | 2018-12-10T03:22:14 | CC-MAIN-2018-51 | 1544376823236.2 | [array(['media/new-office-365-app-launcher.png',
'Office 365 app launcher Office 365 app launcher'], dtype=object)
array(['media/select-admin-from-app-launcher.png',
'Admin tile on the Office 365 app launcher Admin tile on the Office 365 app launcher'],
dtype=object)
array(['media/office-365-admin-center.png',
'Office 365 admin center in Dynamics 365 Office 365 admin center in Dynamics 365'],
dtype=object)
array(['media/select-dynamics-365-app-launcher.png',
'Dynamics 365 tile on the Office 365 app launcher Dynamics 365 tile on the Office 365 app launcher'],
dtype=object)
array(['media/dynamics-365-home-page.png',
'Dynamics 365 home page Dynamics 365 home page'], dtype=object)
array(['media/two-instances-of-dynamics-365-online-in-the-home-page.png',
'Two instances of Dynamics 365 (online) on the home page Two instances of Dynamics 365 (online) on the home page'],
dtype=object)
array(['media/search-dynamics-365-apps.png',
'Search for Dynamics 365 apps Search for Dynamics 365 apps'],
dtype=object)
array(['media/useapp-switchergoother-dynamics-365-apps.png',
'Dynamics 365 app switcher Dynamics 365 app switcher'],
dtype=object) ] | docs.microsoft.com |
For backend devices that offer replication features, Cinder provides a common mechanism for exposing that functionality on a per volume basis while still trying to allow flexibility for the varying implementation and requirements of all the different backend devices.
There are 2 sides to Cinder’s replication feature, the core mechanism and the driver specific functionality, and in this document we’ll only be covering the driver side of things aimed at helping vendors implement this functionality in their drivers in a way consistent with all other drivers.
Although we’ll be focusing on the driver implementation there will also be some mentions on deployment configurations to provide a clear picture to developers and help them avoid implementing custom solutions to solve things that were meant to be done via the cloud configuration.
As a general rule replication is enabled and configured via the cinder.conf file under the driver’s section, and volume replication is requested through the use of volume types.
NOTE: Current replication implementation is v2.1 and it’s meant to solve a very specific use case, the “smoking hole” scenario. It’s critical that you read the Use Cases section of the spec here:
From a user’s perspective volumes will be created using specific volume types,
even if it is the default volume type, and they will either be replicated or
not, which will be reflected on the
replication_status field of the volume.
So in order to know if a snapshot is replicated we’ll have to check its volume.
After the loss of the primary storage site all operations on the resources will
fail and VMs will no longer have access to the data. It is then when the Cloud
Administrator will issue the
failover-host command to make the
cinder-volume service perform the failover.
After the failover is completed, the Cinder volume service will start using the failed-over secondary storage site for all operations and the user will once again be able to perform actions on all resources that were replicated, while all other resources will be in error status since they are no longer available.
Most storage devices will require configuration changes to enable the replication functionality, and this configuration process is vendor and storage device specific so it is not contemplated by the Cinder core replication functionality.
It is up to the vendors whether they want to handle this device configuration in the Cinder driver or as a manual process, but the most common approach is to avoid including this configuration logic into Cinder and having the Cloud Administrators do a manual process following a specific guide to enable replication on the storage device before configuring the cinder volume service.
The way to enable and configure replication is common to all drivers and it is
done via the
replication_device configuration option that goes in the
driver’s specific section in the
cinder.conf configuration file.
replication_device is a multi dictionary option, that should be specified
for each replication target device the admin wants to configure.
While it is true that all drivers use the same
replication_device
configuration option this doesn’t mean that they will all have the same data,
as there is only one standardized and REQUIRED key in the configuration
entry, all others are vendor specific:
Values of
backend_id keys are used to uniquely identify within the driver
each of the secondary sites, although they can be reused on different driver
sections.
These unique identifiers will be used by the failover mechanism as well as in the driver initialization process, and the only requirement is that it must never have the value "default".
An example driver configuration for a device with multiple replication targets is shown below:
.....
[driver-biz]
volume_driver=xxxx
volume_backend_name=biz

[driver-baz]
volume_driver=xxxx
volume_backend_name=baz

[driver-foo]
volume_driver=xxxx
volume_backend_name=foo
replication_device = backend_id:vendor-id-1,unique_key:val....
replication_device = backend_id:vendor-id-2,unique_key:val....
In this example the result of calling
self.configuration.safe_get('replication_device') within the driver is the
following list:
[{backend_id: vendor-id-1, unique_key: val1}, {backend_id: vendor-id-2, unique_key: val2}]
It is expected that if a driver is configured with multiple replication targets, that replicated volumes are actually replicated on all targets.
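For illustration only, a method on the driver class could load and validate these entries during setup roughly like this; the helper name, error handling, and returned layout are assumptions, not part of any required interface:

from cinder import exception


def _load_replication_targets(self):
    """Parse replication_device entries from the driver's config section."""
    devices = self.configuration.safe_get('replication_device') or []
    targets = {}
    for entry in devices:
        backend_id = entry.get('backend_id')
        if not backend_id or backend_id == 'default':
            raise exception.InvalidConfigurationValue(
                option='replication_device', value=entry)
        # keep the vendor specific keys (addresses, pools, ...) for later use
        targets[backend_id] = dict(entry)
    return targets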
Besides specific replication device keys defined in the
replication_device,
a driver may also have additional normal configuration options in the driver
section related with the replication to allow Cloud Administrators to configure
things like timeouts.
There are 2 new replication stats/capability keys that drivers supporting
replication v2.1 should be reporting:
replication_enabled and
replication_targets:
stats["replication_enabled"] = True|False
stats["replication_targets"] = [<backend-id_1>, <backend-id_2>...]
If a driver is behaving correctly we can expect the
replication_targets
field to be populated whenever
replication_enabled is set to
True, and
it is expected to either be set to
[] or be missing altogether when
replication_enabled is set to
False.
The purpose of the
replication_enabled field is to be used by the scheduler
in volume types for creation and migrations.
As for the
replication_targets field it is only provided for informational
purposes so it can be retrieved through the
get_capabilities using the
admin REST API, but it will not be used for validation at the API layer. That
way Cloud Administrators will be able to know available secondary sites where
they can failover.
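A sketch of how a backend might surface these keys from its stats-refresh method; everything other than the two replication keys (backend name, protocol, the _replication_targets attribute, the pool helper) is illustrative:

def _update_volume_stats(self):
    """Report replication capability along with the usual stats."""
    targets = list(self._replication_targets)        # illustrative attribute
    self._stats = {
        'volume_backend_name': self._backend_name,   # illustrative
        'vendor_name': 'Vendor',
        'driver_version': self.VERSION,
        'storage_protocol': 'iSCSI',
        'replication_enabled': bool(targets),
        'replication_targets': targets,
        'pools': self._get_pool_stats(),              # illustrative helper
    }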
The way to control the creation of volumes on a cloud with backends that have replication enabled is, like with many other features, through the use of volume types.
We won’t go into the details of volume type creation, but suffice to say that you will most likely want to use volume types to discriminate between replicated and non replicated volumes and be explicit about it so that non replicated volumes won’t end up in a replicated backend.
Since the driver is reporting the
replication_enabled key, we just need to
require it for replication volume types adding
replication_enabled='<is>
True' and also specifying it for all non replicated volume types
replication_enabled='<is> False'.
It’s up to the driver to parse the volume type info on create and set things up as requested. While the scoping key can be anything, it’s strongly recommended that all backends utilize the same key (replication) for consistency and to make things easier for the Cloud Administrator.
Additional replication parameters can be supplied to the driver using vendor specific properties through the volume type’s extra-specs so they can be used by the driver at volume creation time, or retype.
It is up to the driver to parse the volume type info on create and retype to set things up as requested. A good pattern to get a custom parameter from a given volume instance is this:
extra_specs = getattr(volume.volume_type, 'extra_specs', {})
custom_param = extra_specs.get('custom_param', 'default_value')
It may seem convoluted, but we must be careful when retrieving the
extra_specs from the
volume_type field as it could be
None.
Vendors should try to avoid obfuscating their custom properties and expose them
using the
_init_vendor_properties method so they can be checked by the
Cloud Administrator using the
get_capabilities REST API.
NOTE: For storage devices doing per backend/pool replication the use of volume types is also recommended.
Drivers are expected to honor the replication parameters set in the volume type during creation, retyping, or migration.
When implementing the replication feature there are some driver methods that will most likely need modifications -if they are implemented in the driver (since some are optional)- to make sure that the backend is replicating volumes that need to be replicated and not replicating those that don’t need to be:
create_volume
create_volume_from_snapshot
create_cloned_volume
retype
clone_image
migrate_volume
In these methods the driver will have to check the volume type to see if the volumes need to be replicated, we could use the same pattern described in the Volume Types / Extra Specs section:
def _is_replicated(self, volume):
    specs = getattr(volume.volume_type, 'extra_specs', {})
    return specs.get('replication_enabled') == '<is> True'
But it is not the recommended mechanism, and the
is_replicated method
available in volumes and volume types versioned objects instances should be
used instead.
Drivers are expected to keep the
replication_status field up to date and in
sync with reality, usually as specified in the volume type. To do so in above
mentioned methods’ implementation they should use the update model mechanism
provided for each one of those methods. One must be careful since the update
mechanism may be different from one method to another.
What this means is that most of these methods should be returning a
replication_status key with the value set to
enabled in the model
update dictionary if the volume type is enabling replication. There is no need
to return the key with the value of
disabled if it is not enabled since
that is the default value.
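Putting this together, a minimal create_volume honoring the volume type might look like the following sketch; _create_on_backend and _setup_replication stand in for vendor specific work, and fields comes from cinder.objects:

from cinder.objects import fields


def create_volume(self, volume):
    model_update = self._create_on_backend(volume) or {}   # vendor specific
    if volume.is_replicated():
        self._setup_replication(volume)                    # vendor specific
        model_update['replication_status'] = (
            fields.ReplicationStatus.ENABLED)
    return model_update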
In the case of the
create_volume, and
retype method there is no need to
return the
replication_status in the model update since it has already been
set by the scheduler on creation using the extra spec from the volume type. And
on
migrate_volume there is no need either since there is no change to the
replication_status.
NOTE: For storage devices doing per backend/pool replication it is not
necessary to check the volume type for the
replication_enabled key since
all created volumes will be replicated, but they are expected to return the
replication_status in all those methods, including the
create_volume
method since the driver may receive a volume creation request without the
replication enabled extra spec and therefore the driver will not have set the
right
replication_status and the driver needs to correct this.
Besides the
replication_status field that drivers need to update there are
other fields in the database related to the replication mechanism that the
drivers can use:
replication_extended_status
replication_driver_data
These fields are string type fields with a maximum size of 255 characters and they are available for drivers to use internally as they see fit for their normal replication operation. So they can be assigned in the model update and later on used by the driver, for example during the failover.
To avoid using magic strings drivers must use values defined by the
ReplicationStatus class in
cinder/objects/fields.py file and
these are:
ERROR: When setting the replication failed on creation, retype, or migrate. This should be accompanied by the volume status
error.
ENABLED: When the volume is being replicated.
DISABLED: When the volume is not being replicated.
FAILED_OVER: After a volume has been successfully failed over.
FAILOVER_ERROR: When there was an error during the failover of this volume.
NOT_CAPABLE: When we failed-over but the volume was not replicated.
The first 3 statuses revolve around the volume creation and the last 3 around the failover mechanism.
The only status that should not be used for the volume’s
replication_status
is the
FAILING_OVER status.
Whenever we are referring to values of the
replication_status in this
document we will be referring to the
ReplicationStatus attributes and not a
literal string, so
ERROR means
cinder.objects.field.ReplicationStatus.ERROR and not the string “ERROR”.
This is the mechanism used to instruct the cinder volume service to fail over to a secondary/target device.
Keep in mind the use case is that the primary backend has died a horrible death and is no longer valid, so any volumes that were on the primary and were not being replicated will no longer be available.
The method definition required from the driver to implement the failover mechanism is as follows:
def failover_host(self, context, volumes, secondary_id=None):
There are several things that are expected of this method:
If no secondary storage device is provided to the driver via the
backend_id
argument (it is equal to
None), then it is up to the driver to choose which
storage device to failover to. In this regard it is important that the driver
takes into consideration that it could be failing over from a secondary (there
was a prior failover request), so it should discard current target from the
selection.
If the
secondary_id is not a valid one the driver is expected to raise
InvalidReplicationTarget, for any other non recoverable errors during a
failover the driver should raise
UnableToFailOver or any child of
VolumeDriverException class and revert to a state where the previous
backend is in use.
The failover method in the driver will receive a list of replicated volumes
that need to be failed over. Replicated volumes passed to the driver may have
diverse
replication_status values, but they will always be one of:
ENABLED,
FAILED_OVER, or
FAILOVER_ERROR.
The driver must return a 2-tuple with the new storage device target id as the first element and a list of dictionaries with the model updates required for the volumes so that the driver can perform future actions on those volumes now that they need to be accessed on a different location.
It’s not a requirement for the driver to return model updates for all the
volumes, or for any for that matter as it can return
None or an empty list
if there’s no update necessary. But if elements are returned in the model
update list then it is a requirement that each of the dictionaries contains 2
key-value pairs,
volume_id and
updates like this:
[{'volume_id': volumes[0].id,
  'updates': {'provider_id': new_provider_id1,
              ...}},
 {'volume_id': volumes[1].id,
  'updates': {'provider_id': new_provider_id2,
              'replication_status': fields.ReplicationStatus.FAILOVER_ERROR,
              ...}}]
In these updates there is no need to set the
replication_status to
FAILED_OVER if the failover was successful, as this will be performed by
the manager by default, but it won’t create additional DB queries if it is
returned. It is however necessary to set it to
FAILOVER_ERROR for those
volumes that had errors during the failover.
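A skeletal failover_host showing the expected 2-tuple return value; _pick_target and _failover_volume are placeholders for vendor specific logic, and fields is assumed to come from cinder.objects:

def failover_host(self, context, volumes, secondary_id=None):
    target_id = self._pick_target(secondary_id)   # may raise InvalidReplicationTarget
    volume_updates = []
    for volume in volumes:
        try:
            provider_id = self._failover_volume(volume, target_id)
            updates = {'provider_id': provider_id}
        except Exception:
            updates = {'replication_status':
                       fields.ReplicationStatus.FAILOVER_ERROR}
        volume_updates.append({'volume_id': volume.id, 'updates': updates})
    self._active_backend_id = target_id
    return target_id, volume_updates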
Drivers don’t have to worry about snapshots or non replicated volumes, since the manager will take care of those in the following manner:
- Non replicated volumes will have their status field saved in the previous_status field, the status field changed to error, and their replication_status set to NOT_CAPABLE.
- Snapshots of those non replicated volumes will have their status changed to error.
- Replicated volumes that failed the failover, as reported by the driver, will have their status changed to error, their current status preserved in previous_status, and their replication_status set to FAILOVER_ERROR.
- Snapshots of the replicated volumes that failed the failover will have their status changed to error.
Any model update request from the driver that changes the
status field will
trigger a change in the
previous_status field to preserve the current
status.
Once the failover is completed the driver should be pointing to the secondary and should be able to create and destroy volumes and snapshots as usual, and it is left to the Cloud Administrator’s discretion whether resource modifying operations are allowed or not.
Drivers are not required to support failback, but they are required to raise an
InvalidReplicationTarget exception if the failback is requested but not
supported.
The way to request the failback is quite simple, the driver will receive the
argument
secondary_id with the value of
default. That is why it was
forbidden to use the
default on the target configuration in the cinder
configuration file.
Expected driver behavior is the same as the one explained in the Failover section:
If the failback of any of the volumes fail the driver must return
replication_status set to
ERROR in the volume updates for those
volumes. If they succeed it is not necessary to change the
replication_status since the default behavior will be to set them to
ENABLED, but it won’t create additional DB queries if it is set.
The manager will update resources in a slightly different way than in the failover case:
- Volumes that failed to fail back, as reported by the driver, will have their status changed to error, have their current status preserved in the previous_status field, and their replication_status set to FAILOVER_ERROR.
- Snapshots of the volumes that failed to fail back will have their status changed to error.
We can avoid using the “default” magic string by using the
FAILBACK_SENTINEL class attribute from the
VolumeManager class.
It stands to reason that a failed over Cinder volume service may be restarted, so there needs to be a way for a driver to know on start which storage device should be used to access the resources.
So, to let drivers know which storage device they should use the manager passes
drivers the
active_backend_id argument to their
__init__ method during
the initialization phase of the driver. Default value is
None when the
default (primary) storage device should be used.
Drivers should store this value if they will need it, as the base driver is not storing it, for example to determine the current storage device when a failover is requested and we are already in a failover state, as mentioned above.
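A driver that needs the value can simply record it, for example (MyReplicatedDriver and the base class reference are placeholder names):

from cinder.volume import driver


class MyReplicatedDriver(driver.VolumeDriver):

    def __init__(self, *args, **kwargs):
        # None means the default (primary) storage device is in use
        self._active_backend_id = kwargs.get('active_backend_id')
        super(MyReplicatedDriver, self).__init__(*args, **kwargs)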
In many cases, after a failover has been completed we’ll want to allow changes to the data in the volumes as well as some operations like attach and detach while other operations that modify the number of existing resources, like delete or create, are not allowed.
And that is where the freezing mechanism comes in; freezing a backend puts the control plane of the specific Cinder volume service into a read only state, or at least most of it, while allowing the data plane to proceed as usual.
While this will mostly be handled by the Cinder core code, drivers are informed when the freezing mechanism is enabled or disabled via these 2 calls:
freeze_backend(self, context)
thaw_backend(self, context)
In most cases the driver may not need to do anything, and then it doesn't need to define either of these methods as long as it's a child class of the BaseVD class, which already implements them as no-ops.
Raising a VolumeDriverException exception in any of these methods will result in a 500 status code response being returned to the caller and the manager will not log the exception, so it’s up to the driver to log the error if it is appropriate.
If the driver wants to give a more meaningful error response, then it can raise other exceptions that have different status codes.
When creating the freeze_backend and thaw_backend driver methods we must remember that this is a Cloud Administrator operation, so we can return errors that reveal internals of the cloud, for example the type of storage device. We must also use the appropriate internationalization translation methods when raising exceptions; for VolumeDriverException no translation is necessary, since the manager doesn't log it or return it to the user in any way, but any other exception should use the _() translation method since it will be returned to the REST API caller.
For example, if a storage device doesn’t support the thaw operation when failed over, then it should raise an Invalid exception:
def thaw_backend(self, context):
    if self.failed_over:
        msg = _('Thaw is not supported by driver XYZ.')
        raise exception.Invalid(msg)
Prerequisites
Browse to Datastores in the vSphere Web Client navigator. See Display Datastore Information in the vSphere Web Client.
Procedure
- Select the datastore to grow and click the Increase Datastore Capacity icon.
- Select a device from the list of storage devices.
Your selection depends on whether an expandable storage device is available.
- Complete the remaining wizard pages to specify the capacity increase and finish the task.
Usage Billing
Except for DynamoDB reservations, which are billed based on throughput, reservations are billed for every clock-hour during the term you select, regardless of whether an instance is running or not. A clock-hour is defined as the standard 24-hour clock that runs from midnight to midnight and is divided into 24 hours (for example, 1:00:00 to 1:59:59 is one clock-hour).
You can find out about the charges and fees to your account by viewing the AWS Billing and Cost Management console. You can also examine your utilization and coverage, and receive reservation purchase recommendations, via AWS Cost Explorer. You can dive deeper into your reservations and Reserved Instance discount allocation via the AWS Cost and Usage Report.
For more information on Reserved Instance usage billing, see Usage Billing. | https://docs.aws.amazon.com/aws-technical-content/latest/cost-optimization-reservation-models/usage-billing.html | 2018-12-10T02:24:46 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.aws.amazon.com |
If you have a Mac that supports connecting USB 3.0 devices, guest operating systems can connect to USB 3.0 devices as USB 3.0 and connect to USB 2.0 devices as USB 2.0. However, guests with virtual USB 2.0 hardware will have issues when connecting to USB 3.0 devices. An example of a guest operating system that does not have virtual USB 3.0 hardware is Windows XP. Depending on the specific device, performance might be slow or partial, or the device might fail to connect.
Guests on older Macs can have USB 3.0 virtual hardware, but both USB 2.0 and USB 3.0 devices will connect in USB 2.0 mode. Guests with virtual USB 2.0 hardware will also use USB 2.0 mode for USB 2.0 and USB 3.0 devices.
Fusion does not support USB adapters for connecting displays to your virtual machines. | https://docs.vmware.com/en/VMware-Fusion/8.0/com.vmware.fusion.using.doc/GUID-4819DA9E-3575-4C7F-9912-C2FFBA6CEAD6.html | 2018-12-10T02:17:12 | CC-MAIN-2018-51 | 1544376823236.2 | [] | docs.vmware.com |
Understanding PyInstaller Hooks¶
In summary, a “hook” file extends PyInstaller to adapt it to the special needs and methods used by a Python package. The word “hook” is used for two kinds of files. A runtime hook helps the bootloader to launch an app. For more on runtime hooks, see Changing Runtime Behavior. Other hooks run while an app is being analyzed; they help the Analysis phase find needed files.
A hook file is a Python script, and can use all Python features.
It can also import helper methods from
PyInstaller.utils.hooks
and useful variables from
PyInstaller.compat.
These helpers are documented below.
The name of a hook file is
hook-full-import-name
.py,
where full-import-name is
the fully-qualified name of an imported script or module.
You can browse through the existing hooks in the
hooks folder of the PyInstaller distribution folder
and see the names of the packages for which hooks have been written.
For example
hook-PyQt5.QtCore.py is a hook file telling
about hidden imports needed by the module
PyQt5.QtCore.
When your script contains
import PyQt5.QtCore
(or
from PyQt5 import QtCore),
Analysis notes that
hook-PyQt5.QtCore.py exists, and will call it.
Many hooks consist of only one statement, an assignment to
hiddenimports.
For example, the hook for the dnspython package, called
hook-dns.rdata.py, has only this statement:
hiddenimports = [ "dns.rdtypes.*", "dns.rdtypes.ANY.*" ]
When Analysis sees
import dns.rdata or
from dns import rdata
it calls
hook-dns.rdata.py and examines its value
of
hiddenimports.
As a result, it is as if your source script also contained:
import dns.rdtypes.*
import dns.rdtypes.ANY.*
A hook can also cause the addition of data files, and it can cause certain files to not be imported. Examples of these actions are shown below.
When the module that needs these hidden imports is useful only to your own project, store the hook file somewhere near your source files and name that folder with the --additional-hooks-dir option. However, if you write a hook for a module used by others, please send us the hook file so we can make it available.
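For example, a project-local hook for a hypothetical package myvendorlib could live in hooks/hook-myvendorlib.py next to your spec file and be named with --additional-hooks-dir=hooks; a minimal sketch might be:

# hooks/hook-myvendorlib.py -- illustrative sketch for a hypothetical package
from PyInstaller.utils.hooks import collect_data_files

# Plugins imported dynamically at run time, invisible to Analysis.
hiddenimports = ['myvendorlib.plugins.json', 'myvendorlib.plugins.yaml']

# Also bundle the package's non-Python data files.
datas = collect_data_files('myvendorlib')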
How a Hook Is Loaded¶
A hook is a module named
hook-full-import-name
.py
in a folder where the Analysis object looks for hooks.
Each time Analysis detects an import, it looks for a hook file with
a matching name.
When one is found, Analysis imports the hook’s code into a Python namespace.
This results in the execution of all top-level statements in the hook source,
for example import statements, assignments to global names, and
function definitions.
The names defined by these statements are visible to Analysis
as attributes of the namespace.
Thus a hook is a normal Python script and can use all normal Python facilities.
For example it could test
sys.version and adjust its
assignment to
hiddenimports based on that.
There are over 150 hooks in the PyInstaller installation.
You are welcome to browse through them for examples.
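For instance, a hook can adjust its assignments by platform or Python version; the following sketch for a hypothetical package somepkg illustrates the idea:

# hook-somepkg.py -- illustrative sketch, module names are hypothetical
from PyInstaller.compat import is_win, is_py3

hiddenimports = ['somepkg.core']
if is_win:
    hiddenimports.append('somepkg.win32support')
if is_py3:
    hiddenimports.append('somepkg.py3compat')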
Hook Global Variables¶
A majority of the existing hooks consist entirely of assignments of values to one or more of the following global variables. If any of these are defined by the hook, Analysis takes their values and applies them to the bundle being created.
hiddenimports
A list of module names (relative or absolute) that should be part of the bundled app. This has the same effect as the --hidden-import command line option, but it can contain a list of names and is applied automatically only when the hooked module is imported. Example:
hiddenimports = ['_proxy', 'utils', 'defs']
excludedimports
A list of absolute module names that should not be part of the bundled app. If an excluded module is imported only by the hooked module or one of its sub-modules, the excluded name and its sub-modules will not be part of the bundle. (If an excluded name is explicitly imported in the source file or some other module, it will be kept.) Several hooks use this to prevent automatic inclusion of the tkinter module. Example:
excludedimports = [modname_tkinter]
datas
A list of files to bundle with the app as data. Each entry in the list is a tuple containing two strings. The first string specifies a file (or file “glob”) in this system, and the second specifies the name(s) the file(s) are to have in the bundle. (This is the same format as used for the datas= argument, see Adding Data Files.) Example:
datas = [ ('/usr/share/icons/education_*.png', 'icons') ]
If you need to collect multiple directories or nested directories, you can use helper functions from the PyInstaller.utils.hooks module (see below) to create this list, for example:
datas = collect_data_files('submodule1') datas+= collect_data_files('submodule2')
In rare cases you may need to apply logic to locate particular files within the file system, for example because the files are in different places on different platforms or under different versions. Then you can write a hook() function as described below under The hook(hook_api) Function.
binaries
A list of files or directories to bundle as binaries. The format is the same as datas (tuples with strings that specify the source and the destination). Binaries is a special case of datas, in that PyInstaller will check each file to see if it depends on other dynamic libraries. Example:
binaries = [ ('C:\\Windows\\System32\\*.dll', 'dlls') ]
Many hooks use helpers from the PyInstaller.utils.hooks module to create this list (see below):
binaries = collect_dynamic_libs('zmq')
Useful Items in
PyInstaller.compat¶
A hook may import the following names from
PyInstaller.compat,
for example:
from PyInstaller.compat import modname_tkinter, is_win
is_py2:
- True when the active Python is version 2.7.
is_py3:
- True when the active Python is version 3.X.
is_py34,
is_py35,
is_py36:
- True when the current version of Python is at least 3.4, 3.5 or 3.6 respectively.
is_win:
- True in a Windows system.
is_cygwin:
- True when
sys.platform=='cygwin'.
is_darwin:
- True in Mac OS X.
is_linux:
- True in any Linux system (
sys.platform.startswith('linux')).
is_solar:
- True in Solaris.
is_aix:
- True in AIX.
is_freebsd:
- True in FreeBSD.
is_venv:
- True in any virtual environment (either virtualenv or venv).
base_prefix:
- String, the correct path to the base Python installation, whether the installation is native or a virtual environment.
modname_tkinter:
String, Tkinter in Python 2.7 but tkinter in Python 3. To prevent an unnecessary import of Tkinter, write:
from PyInstaller.compat import modname_tkinter
excludedimports = [ modname_tkinter ]
EXTENSION_SUFFIXES:
- List of Python C-extension file suffixes. Used for finding all binary dependencies in a folder; see hook-cryptography.py for an example.
Useful Items in
PyInstaller.utils.hooks¶
A hook may import useful functions from
PyInstaller.utils.hooks.
Use a fully-qualified import statement, for example:
from PyInstaller.utils.hooks import collect_data_files, eval_statement
The
PyInstaller.utils.hooks functions listed here are generally useful
and used in a number of existing hooks.
There are several more functions besides these that serve the needs
of specific hooks, such as hooks for PyQt4/5.
You are welcome to read the
PyInstaller.utils.hooks module
(and read the existing hooks that import from it) to get code and ideas.
exec_statement( 'statement' ):
Execute a single Python statement in an externally-spawned interpreter and return the standard output that results, as a string. Examples:
tk_version = exec_statement( "from _tkinter import TK_VERSION; print(TK_VERSION)" ) mpl_data_dir = exec_statement( "import matplotlib; print(matplotlib._get_data_path())" ) datas = [ (mpl_data_dir, "") ]
eval_statement( 'statement' ):
Execute a single Python statement in an externally-spawned interpreter. If the resulting standard output text is not empty, apply the eval() function to it; else return None. Example:
databases = eval_statement(''' import sqlalchemy.databases print(sqlalchemy.databases.__all__) ''') for db in databases: hiddenimports.append("sqlalchemy.databases." + db)
is_module_satisfies( requirements, version=None, version_attr='__version__' ):
Check that the named module (fully-qualified) exists and satisfies the given requirement. Example:
if is_module_satisfies('sqlalchemy >= 0.6'):
This function provides robust version checking based on the same low-level algorithm used by easy_install and pip, and should always be used in preference to writing your own comparison code. In particular, version strings should never be compared lexicographically (except for exact equality). For example '00.5' > '0.6' returns True, which is not the desired result.
The requirements argument uses the same syntax as supported by the pkg_resources module of setuptools (see its documentation for the supported syntax).
The optional version argument is a PEP 440-compliant, dot-delimited version specifier such as '3.14-rc5'.
When the package being queried has been installed by easy_install or pip, the existing setuptools machinery is used to perform the test and the version and version_attr arguments are ignored.
When that is not the case, the version argument is taken as the installed version of the package (perhaps obtained by interrogating the package in some other way). When version is None, the named package is imported into a subprocess and the __version__ value of that import is tested. If the package uses some name other than __version__ for its version global, that name can be passed as the version_attr argument.
For more details and examples refer to the function’s doc-string, found in
Pyinstaller/utils/hooks/__init__.py.
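Putting this together, a hook can select hidden imports based on the installed version; the chosen module names below are only an assumption for the sketch:

from PyInstaller.utils.hooks import is_module_satisfies

if is_module_satisfies('sqlalchemy >= 0.6'):
    hiddenimports = ['sqlalchemy.dialects']   # assumed layout for newer versions
else:
    hiddenimports = ['sqlalchemy.databases']  # assumed layout for older versions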
collect_submodules( 'package-name', pattern=None ):
Returns a list of strings that specify all the modules in a package, ready to be assigned to the hiddenimports global. Returns an empty list when package does not name a package (a package is defined as a module that contains a __path__ attribute).
The pattern, if given, is a function used to filter the submodules found, selecting which should be included in the returned list. It takes one argument, a string, which gives the name of a submodule. Only if the function returns true is the given submodule added to the list of returned modules. For example, filter=lambda name: 'test' not in name will return modules that don't contain the word test.
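Reusing the dnspython example from above, a hook could collect every submodule of dns.rdtypes while skipping test modules:

from PyInstaller.utils.hooks import collect_submodules

# Include all submodules of dns.rdtypes except any whose name contains 'test'.
hiddenimports = collect_submodules('dns.rdtypes',
                                   lambda name: 'test' not in name)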
is_module_or_submodule( name, mod_or_submod ):
- This helper function is designed for use in the filter argument of collect_submodules, returning True if the given name is a module or a submodule of mod_or_submod. For example:
collect_submodules('foo', lambda name: not is_module_or_submodule(name, 'foo.test'))
excludes foo.test and foo.test.one but not foo.testifier.
collect_data_files( 'module-name', subdir=None, include_py_files=False ):
Returns a list of (source, dest) tuples for all non-Python (i.e. data) files found in module-name, ready to be assigned to the datas global. module-name is the fully-qualified name of a module or package (but not a zipped “egg”). The function uses os.walk() to visit the module directory recursively. subdir, if given, restricts the search to a relative subdirectory.
Normally Python executable files (ending in .py, .pyc, etc.) are not collected. Pass include_py_files=True to collect those files as well. (This can be used with routines such as those in pkgutil that search a directory for Python executable files and load them as extensions or plugins.)
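A typical one-line use, sketched here for the certifi package (assuming it is part of your app), collects its certificate data files:

from PyInstaller.utils.hooks import collect_data_files

# Bundle certifi's data files (such as cacert.pem) under a 'certifi' folder.
datas = collect_data_files('certifi')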
collect_dynamic_libs( 'module-name' ):
Returns a list of (source, dest) tuples for all the dynamic libs present in a module directory. The list is ready to be assigned to the binaries global variable. The function uses os.walk() to examine all files in the module directory recursively. The name of each file found is tested against the likely patterns for a dynamic lib: *.dll, *.dylib, lib*.pyd, and lib*.so. Example:
binaries = collect_dynamic_libs( 'enchant' )
get_module_file_attribute( 'module-name' ):
Return the absolute path to module-name, a fully-qualified module name. Example:
nacl_dir = os.path.dirname(get_module_file_attribute('nacl'))
get_package_paths( 'package-name' ):
Given the name of a package, return a tuple. The first element is the absolute path to the folder where the package is stored. The second element is the absolute path to the named package. For example, if pkg.subpkg is stored in /abs/Python/lib the result of:
get_package_paths( 'pkg.subpkg' )
is the tuple,
( '/abs/Python/lib', '/abs/Python/lib/pkg/subpkg' )
copy_metadata( 'package-name' ):
Given the name of a package, return the name of its distribution metadata folder as a list of tuples ready to be assigned (or appended) to the datas global variable.
Some packages rely on metadata files accessed through the pkg_resources module. Normally PyInstaller does not include these metadata files. If a package fails without them, you can use this function in a hook file to easily add them to the bundle. The tuples in the returned list have two strings. The first is the full pathname to a folder in this system. The second is the folder name only. When these tuples are added to datas, the folder will be bundled at the top level. If package-name does not have metadata, an AssertionError exception is raised.
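For example, a hook for a hypothetical package mypkg that calls pkg_resources.get_distribution('mypkg') at run time could bundle the required metadata like this:

from PyInstaller.utils.hooks import copy_metadata

# Include mypkg's distribution metadata so pkg_resources can find it at run time.
datas = copy_metadata('mypkg')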
get_homebrew_path( formula='' ):
- Return the homebrew path to the named formula, or to the global prefix when formula is omitted. Returns None if not found.
django_find_root_dir():
- Return the path to the top-level Python package containing the Django files, or None if nothing can be found.
django_dottedstring_imports( 'django-root-dir' )
- Return a list of all necessary Django modules specified in the Django settings.py file, such as the Django.settings.INSTALLED_APPS list and many others.
The
hook(hook_api) Function¶
In addition to, or instead of, setting global values,
a hook may define a function
hook(hook_api).
A
hook() function should only be needed if the hook
needs to apply sophisticated logic or to make a complex
search of the source machine.
The Analysis object calls the function and passes it a
hook_api object
which has the following immutable properties:
__name__:
- The fully-qualified name of the module that caused the hook to be called, e.g.,
six.moves.tkinter.
__file__:
The absolute path of the module. If it is:
- A standard (rather than namespace) package, this is the absolute path of this package’s directory.
- A namespace (rather than standard) package, this is the abstract placeholder
-.
- A non-package module or C extension, this is the absolute path of the corresponding file.
__path__:
- A list of the absolute paths of all directories comprising the module if it is a package, or
None. Typically the list contains only the absolute path of the package’s directory.
The
hook_api object also offers the following methods:
add_imports( *names ):
- The names argument may be a single string or a list of strings giving the fully-qualified name(s) of modules to be imported. This has the same effect as adding the names to the hiddenimports global.
del_imports( *names ):
- The names argument may be a single string or a list of strings, giving the fully-qualified name(s) of modules that are not to be included if they are imported only by the hooked module. This has the same effect as adding names to the excludedimports global.
add_datas( tuple_list ):
- The tuple_list argument has the format used with the datas global variable. This call has the effect of adding items to that list.
add_binaries( tuple_list ):
- The tuple_list argument has the format used with the binaries global variable. This call has the effect of adding items to that list.
The
hook() function can add, remove or change included files using the
above methods of
hook_api.
Or, it can simply set values in the four global variables, because
these will be examined after
hook() returns.
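A minimal sketch of such a function, for a hypothetical package whose data directory differs between platforms, might look like this:

# hook-somepkg.py -- illustrative sketch; package layout and names are assumed
import os

from PyInstaller.compat import is_win
from PyInstaller.utils.hooks import get_module_file_attribute

def hook(hook_api):
    pkg_dir = os.path.dirname(get_module_file_attribute(hook_api.__name__))
    data_subdir = 'win_data' if is_win else 'unix_data'  # assumed layout
    hook_api.add_datas([(os.path.join(pkg_dir, data_subdir), data_subdir)])
    hook_api.add_imports('somepkg.plugin_default')        # assumed module name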
The
pre_find_module_path( pfmp_api ) Method¶
You may write a hook with the special function
pre_find_module_path( pfmp_api ).
This method is called when the hooked module name is first seen
by Analysis, before it has located the path to that module or package
(hence the name “pre-find-module-path”).
Hooks of this type are only recognized if they are stored in
a sub-folder named
pre_find_module_path in a hooks folder,
either in the distributed hooks folder or an
--additional-hooks-dir folder.
You may have normal hooks as well as hooks of this type for the same module.
For example PyInstaller includes both a
hooks/hook-distutils.py
and also a
hooks/pre_find_module_path/hook-distutils.py.
The
pfmp_api object that is passed has the following immutable attribute:
module_name:
- A string, the fully-qualified name of the hooked module.
The
pfmp_api object has one mutable attribute,
search_dirs.
This is a list of strings that specify the absolute path, or paths,
that will be searched for the hooked module.
The paths in the list will be searched in sequence.
The
pre_find_module_path() function may replace or change
the contents of
pfmp_api.search_dirs.
Immediately after return from
pre_find_module_path(), the contents
of
search_dirs will be used to find and analyze the module.
For an example of use,
see the file
hooks/pre_find_module_path/hook-distutils.py.
It uses this method to redirect a search for distutils when
PyInstaller is executing in a virtual environment.
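The bundled distutils hook is the authoritative example, but as an illustration, such a hook can simply point search_dirs somewhere else:

# pre_find_module_path/hook-somepkg.py -- illustrative sketch only
import os

def pre_find_module_path(pfmp_api):
    # Hypothetical: prefer a vendored copy of the package if one is present.
    vendored = os.path.join(os.getcwd(), 'vendor')
    if os.path.isdir(os.path.join(vendored, 'somepkg')):
        pfmp_api.search_dirs = [vendored]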
The
pre_safe_import_module( psim_api ) Method¶
You may write a hook with the special function
pre_safe_import_module( psim_api ).
This method is called after the hooked module has been found,
but before it and everything it recursively imports is added
to the “graph” of imported modules.
Use a pre-safe-import hook in the unusual case where:
- The script imports package.dynamic-name
- The package exists
- however, no module dynamic-name exists at compile time (it will be defined somehow at run time)
You use this type of hook to make dynamically-generated names known to PyInstaller. PyInstaller will not try to locate the dynamic names, fail, and report them as missing. However, if there are normal hooks for these names, they will be called.
Hooks of this type are only recognized if they are stored in a sub-folder
named
pre_safe_import_module in a hooks folder,
either in the distributed hooks folder or an
--additional-hooks-dir folder.
(See the distributed
hooks/pre_safe_import_module folder for examples.)
You may have normal hooks as well as hooks of this type for the same module.
For example the distributed system has both a
hooks/hook-gi.repository.GLib.py
and also a
hooks/pre_safe_import_module/hook-gi.repository.GLib.py.
The
psim_api object offers the following attributes,
all of which are immutable (an attempt to change one raises an exception):
module_basename:
- String, the unqualified name of the hooked module, for example
text.
module_name:
- String, the fully-qualified name of the hooked module, for example email.mime.text.
- The module graph representing all imports processed so far.
parent_package:
- If this module is a top-level module of its package,
None. Otherwise, the graph node that represents the import of the top-level module.
The last two items,
module_graph and
parent_package,
are related to the module-graph, the internal data structure used by
PyInstaller to document all imports.
Normally you do not need to know about the module-graph.
The
psim_api object also offers the following methods:
add_runtime_module( fully_qualified_name ):
Use this method to add an imported module whose name may not appear in the source because it is dynamically defined at run-time. This is useful to make the module known to PyInstaller and avoid misleading warnings. A typical use applies the name from the
psim_api:
psim_api.add_runtime_module( psim_api.module_name )
add_alias_module( real_module_name, alias_module_name ):
real_module_name is the fully-qualified name of an existing module, one that has been or could be imported by name (it will be added to the graph if it has not already been imported). alias_module_name is a name that might be referenced in the source file but should be treated as if it were real_module_name. This method ensures that if PyInstaller processes an import of alias_module_name it will use real_module_name.
append_package_path( directory ):
- The hook can use this method to add a package path to be searched by PyInstaller, typically an import path that the imported module would add dynamically to the path if the module was executed normally. directory is a string, a pathname to add to the __path__ attribute.
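Tying these together, a sketch of a pre-safe-import hook for a hypothetical package that generates somepkg.generated at run time could read:

# pre_safe_import_module/hook-somepkg.generated.py -- illustrative sketch
def pre_safe_import_module(psim_api):
    # The module does not exist on disk at build time; register it so that
    # PyInstaller does not report it as missing.
    psim_api.add_runtime_module(psim_api.module_name)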
Contents IT Business Management Previous Topic Next Topic Technology Portfolio Management ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Technology Portfolio Management The technologies that underlie the business applications used in your business enterprise have a shelf life that must be actively managed and diligently monitored to track their versions and lifecycle. Use the timeline view of the Technology Portfolio Management to track their dates and thereafter create an idea or a project to upgrade or retire them. The technology of a business application is also known as a software model. A software model is a specific version or configuration of a software. The software models or the technologies used in your business applications can be operating systems, database management systems, development tools, and middleware, each of which has a lifecycle. If these lifecycle stages are not tracked, there are risks where the vendor may not support them any longer and the business applications that run on these technologies are at stake. Creating an inventory of all technologies used in the enterprise helps to Track the versions of the software and manufacturer support dates for the software. Set an internal lifecycle guidance for the software. Assess risk in using outdated software. Plan to retire them just like the applications they support, at a definite date. Support upgrade processes. Internal and external lifecycle stages of the software model The business applications used in your organization are all linked to one or more business services. Each of the business services run on one or more technologies or software models. The software model has a sequence of lifecycle stages/phases from their installation to retirement. Internally, business organizations set a date based on the lifecycle phase of the software models. These phases can be Early Adopter, Mainstream, Declining use, and Retired. Similarly, the vendor of the software also sets a date for the software based on the vendor lifecycle phases such as Pre-release, General Availability, End of Life, and Obsolete. The support from the vendor may vary depending on the phase of the technology. When the software model reaches the stage of obsolescence, the vendor may stop supporting the technology. Note: The Publisher choice type of the Lifecycle type field in the Software Model Lifecycle form is the same as the External Lifecycle that is being used in APM. As a software asset management user or a software model manager you have the ability to add the software model lifecycle details to the software model. To use TPM ensure that the lifecycle data is populated in the software model table and the table name is present either in TPM or SAM. Integration with Service Mapping to use Technology Portfolio Management If the Service Mapping product is installed then business applications are related to the discovered business services. If the Service Mapping product is not installed then business applications are related to the business services. APM no longer integrates with Service Mapping through the Instances tab. The application Instances tab has been removed and the apm_app_instance table has been deprecated, which is replaced by the Business Services (cmdb_ci_service) table or the Discovered Service (cmdb_ci_discovered_service) table. Any data existing in the application instances table must be migrated to the business service table. 
If you are upgrading to the Kingston release, then contact the ServiceNow personnel for migrating the data. Service Mapping discovers all the business services in an organization. It builds a comprehensive map of all devices, applications, and configuration profiles used in the business services. It maps dependencies based on the connection between devices and applications. It lists all the underlying software models of a business application such as web servers, application servers, databases, middleware and network infrastructure supporting a business service. Figure 1. APM or TPM dependencies in mature ServiceNow implementation TPM depends on SAM to retrieve the technology information of the software product You can use Technology Portfolio Management even if you do not have Software Asset Management (SAM) installed. A preconfigured Software Product Model table is available to all TPM users. You can create a list of all software models that your organization uses either manually or import them from the Discovery application. Figure 2. Connecting software lifecycles to the business application Using TPM depends on SAM plugins and the dependency is as follows: With SAM Premium plugin To access the Product Classification (samp_sw_product) table you require the Software Asset Management Premium plugin. Reference of samp_sw_product_classification is present in samp_sw_product table. This content table is referenced in the Software Product Model (cmdb_software_product_model) table to retrieve the technology information. Subscribing to SAM Premium plugin enables you to view the applications By Business Applications as well as By Category in the TPM timeline view. Without SAM plugin Product classification is not available without this plugin and hence view By Category is not available in the TPM timeline view. Software model information is retrieved from SW Product Model (cmdb_software_product_model) table. You must populate this table manually or export the content from an excel sheet. View technology risks in a timelineThe timeline view of the Technology Portfolio Management displays the internal and external lifecycle phases of all technologies or the software models being used in your organization. The stages at which the technology is in terms of risk factor is color coded.Relate a business application to a business service through CI relationship editorBusiness applications can have multiple instances. Application instances are nothing but business services. Relate business applications to instances by relating business applications to business services. Business application and business service are two different configuration items which must be related through a CI relationship.Associate a business service to a software modelBusiness applications have multiple instances such as development, QA, and production. Instances are nothing but business services. Hence business services must be associated with software models to know the risk of the business service.Create a risk parameterThe risk on a software model is calculated based on four preconfigured parameters such as external aging risk, internal aging risk, external stage risk, and internal stage risk.Technology risk calculationAssess the technology risks of your business applications by calculating their risks first at the software model level and then at the business application level.Run scheduled job to generate risk valuesThe risks on the software model and business application is time dependent. 
Based on the external and internal lifecycles the risk changes every day, hence the risk must be calculated daily. A scheduled job is created that runs daily and calculates the risks of the software model and the business application.