Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars)
Software-based email filters that look for spam and block it from going to the inbox. If you discover spam that was sent by a SendGrid customer, please report it to our team. We appreciate your help in keeping our email stream clean. Spam filters are software-based email filters that block email based on a range of attributes, from words or phrases within the email to header information and other factors. The goal is to identify spam before it is delivered to the inbox. Spam filters typically move the messages they find to the spam folder within the user's email application, keeping that email out of the user's inbox entirely. For more information, please check out our Email Infrastructure Guide.
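Since the glossary entry above describes filtering on message content and headers, here is a rough, generic illustration of that idea in Python. It is not SendGrid's actual filtering logic; the phrase list, header checks, and threshold are invented for the example.

# Minimal, illustrative spam-scoring sketch -- not SendGrid's real filter logic.
SPAM_PHRASES = {"free money", "act now", "winner"}  # hypothetical phrase list

def spam_score(subject: str, body: str, headers: dict) -> int:
    """Return a naive spam score based on phrases and header attributes."""
    text = f"{subject} {body}".lower()
    score = sum(2 for phrase in SPAM_PHRASES if phrase in text)
    # Header-based checks, since the glossary also mentions header information.
    if headers.get("X-Mailer", "").lower().startswith("bulkmailer"):
        score += 1
    if not headers.get("List-Unsubscribe"):
        score += 1
    return score

def is_spam(subject, body, headers, threshold=3):
    # Messages at or above the threshold would be routed to the spam folder.
    return spam_score(subject, body, headers) >= threshold

if __name__ == "__main__":
    print(is_spam("You are a WINNER", "Free money, act now!", {"X-Mailer": "BulkMailer 1.0"}))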
https://docs.sendgrid.com/glossary/spam-filter
2022-09-25T05:17:56
CC-MAIN-2022-40
1664030334514.38
[]
docs.sendgrid.com
SL_STATUS_OK if successful. Error code otherwise.
uint8array data - Data related to the error; this field can be empty.
https://docs.silabs.com/bluetooth/latest/a00030
2022-09-25T04:56:27
CC-MAIN-2022-40
1664030334514.38
[]
docs.silabs.com
Monitor infrastructure.
https://test2.docs.influxdata.com/influxdb/v2.3/monitor-alert/templates/infrastructure/
2022-09-25T05:10:23
CC-MAIN-2022-40
1664030334514.38
[]
test2.docs.influxdata.com
May 31 We’re happy to announce the release of the Sprint 118 edition of Quamotion. The version number is 0.118.41. In this release, we’ve improved the stability and reliability of the WebDriver and xcuitrunner.
WebDriver improvements
- We’ve fixed an issue where the application under test would always be re-signed. In some scenarios, this could cause the New-Session command to slow down significantly.
- We’ve added a new Flick method, which allows you to specify the direction, speed and delta.
xcuitrunner improvements
- With this release, we’re introducing a new command line utility, ios-deploy, which runs on Linux and Mac. This utility is compatible with the ios-deploy which already exists on macOS, and allows you to run native Appium tests for iOS devices on Linux and Windows.
Last modified October 25, 2019: Move docs to docs/ (519bf39)
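For context, a flick like the one described in the release notes boils down to a start point, a direction, a distance (delta) and a speed. The Python sketch below only illustrates how such parameters could be turned into a concrete swipe vector that a WebDriver-style flick or swipe call might consume; it is not Quamotion's Flick API, and the unit and parameter names are assumptions.

# Hypothetical illustration: turning (direction, speed, delta) into a swipe vector.
# This is NOT Quamotion's Flick method; names and units are assumptions.
import math

DIRECTIONS = {"up": (0, -1), "down": (0, 1), "left": (-1, 0), "right": (1, 0)}

def flick_vector(start_x, start_y, direction, delta_px, speed_px_per_s):
    """Return (end_x, end_y, duration_ms) for a flick of delta_px pixels."""
    dx, dy = DIRECTIONS[direction]
    end_x = start_x + dx * delta_px
    end_y = start_y + dy * delta_px
    duration_ms = math.ceil(delta_px / speed_px_per_s * 1000)
    return end_x, end_y, duration_ms

if __name__ == "__main__":
    # e.g. flick up 300 px at 1500 px/s starting from (200, 600)
    print(flick_vector(200, 600, "up", 300, 1500))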
http://docs.quamotion.mobi/docs/release-notes/2019/2019-05-31/
2021-04-11T01:25:15
CC-MAIN-2021-17
1618038060603.10
[]
docs.quamotion.mobi
A Hardware Security Module (HSM) is a secure cryptographic processor that generates encrypted zone keys for secure DNS zone signing. Address Manager and DNS/DHCP Server support HSMs through DNSSEC. An HSM extends and improves DNSSEC functionality by localizing key generation and master zone signing on the HSM server instead of the BlueCat appliance/VM. BlueCat’s HSM implementation supports integration with Thales nShield Connect® HSM appliances. Note: Thales appliances with nCSS version 11.70 are compatible with BlueCat Enterprise DNS.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/HSM/8.3.2
2021-04-11T00:52:54
CC-MAIN-2021-17
1618038060603.10
[]
docs.bluecatnetworks.com
LOCK Synopsis
LOCK:pc
L:pc
LOCK:pc +lockname#locktype:timeout,...
L:pc +lockname#locktype:timeout,...
LOCK:pc +(lockname#locktype,...):timeout,...
L:pc +(lockname#locktype,...):timeout,...
Arguments Description
There are two basic forms of the LOCK command:
LOCK without Arguments
The argumentless LOCK releases (unlocks) all locks currently held by the process in all namespaces. This includes exclusive and shared locks, both local and global. It also includes all accumulated incremental locks. For example, if there are three incremental locks on a given lock name, InterSystems IRIS releases all three locks and removes the lock name entry from the lock table. If you issue an argumentless LOCK during a transaction, InterSystems IRIS places all locks currently held by the process in a Delock state until the end of the transaction. When the transaction ends, InterSystems IRIS releases the locks and removes the corresponding lock name entries from the lock table. The following example applies various locks during a transaction, then issues an argumentless LOCK to release all of these locks. The locks are placed in a Delock state until the end of the transaction. The HANG commands give you time to check the lock’s ModeCount in the Lock Table:
TSTART
LOCK +^a(1)     // ModeCount: Exclusive
HANG 2
LOCK +^a(1)#"E" // ModeCount: Exclusive/1+1e
HANG 2
LOCK +^a(1)#"S" // ModeCount: Exclusive/1+1e,Shared
HANG 2
LOCK            // ModeCount: Exclusive/1+1e->Delock,Shared->Delock
HANG 10
TCOMMIT         // ModeCount: locks removed from table
Argumentless LOCK releases all locks held by the process without applying any locks. Completion of a process also releases all locks held by that process.
LOCK with Arguments
LOCK with arguments specifies one or more lock names on which to perform locking and unlocking operations. What lock operation InterSystems IRIS performs depends on the lock operation indicator argument you use: LOCK lockname unlocks all locks previously held by the process in all namespaces, then applies a lock on the specified lock name(s). LOCK +lockname applies a lock on the specified lock name(s) without unlocking any previous locks. This allows you to accumulate different locks, and allows you to apply incremental locks to the same lock. LOCK -lockname performs an unlock operation on the specified lock name(s). Unlocking decrements the lock count for the specified lock name; when this lock count decrements to zero, the lock is released. A lock operation may immediately apply the lock, or it may place the lock request on a wait queue pending the release of a conflicting lock by another process. A waiting lock request may time out (if you specify a timeout) or may wait indefinitely (until the end of the process).
LOCK with Multiple Lock Names
You can specify multiple locks with a single LOCK command in either of two ways: Without Parentheses: By specifying multiple lock arguments without parentheses as a comma-separated list, you can specify multiple independent lock operations, each of which can have its own timeout. (This is functionally identical to specifying a separate LOCK command for each lock argument.) Lock operations are performed in strict left-to-right order. For example:
LOCK var1(1):10,+var2(1):15
Multiple lock arguments without parentheses each can have their own lock operation indicator and their own timeout argument.
However, if you use multiple lock arguments, be aware that a lock operation without a plus sign lock operation indicator unlocks all prior locks, including locks applied by an earlier part of the same LOCK command. For example, the command LOCK ^b(1,1), ^c(1,2,3), ^d(1) would be parsed as three separate lock commands: the first releasing the processes’ previously held locks (if any) and locking ^b(1,1), the second immediately releasing ^b(1,1) and locking ^c(1,2,3), the third immediately releasing ^c(1,2,3) locking ^d(1). As a result, only ^d(1) would be locked. With Parentheses: By enclosing a comma-separated list of lock names in parentheses, you can perform these locking operations on multiple locks as a single atomic operation. For example: LOCK +(var1(1),var2(1)):10Copy code to clipboard All lock operations in a parentheses-enclosed list are governed by a single lock operation indicator and a single timeout argument; either all of the locks are applied or none of them are applied. A parentheses-enclosed list without a plus sign lock operation indicator unlocks all prior locks then locks all of the listed lock names. The maximum number of lock names in a single LOCK command is limited by several factors. One of them is the number of argument stacks available to a process: 512. Each lock reference requires 4 argument stacks, plus 1 additional argument stack for each subscript level. Therefore, if the lock references have no subscripts, the maximum number of lock names is 127. If the locks have one subscript level, the maximum number of lock names is 101. This should be taken as a rough guide; other factors may further limit the number of locks names in a single LOCK. There is no separate limit on number of locks for a remote system. Arguments pc An optional postconditional expression that can make the command conditional. InterSystems IRIS executes the LOCK command if the postconditional expression is true (evaluates to a nonzero numeric value). InterSystems IRIS does not execute the command if the postconditional expression is false (evaluates to zero). You can specify a postconditional expression on an argumentless LOCK command or a LOCK command with arguments. For further details, refer to Command Postconditional Expressions in Using ObjectScript. lock operation indicator The lock operation indicator is used to apply (lock) or remove (unlock) a lock. It can be one of the following values: If your LOCK command contains multiple comma-separated lock arguments, each lock argument can have its own lock operation indicator. InterSystems IRIS parses this as multiple independent LOCK commands. lockname A lockname is the name of a lock for a data resource; it is not the data resource itself. That is, your program can specify a lock named ^a(1) and a variable named ^a(1) without conflict. The relationship between the lock and the data resource is a programming convention; by convention, processes must acquire the lock before modifying the corresponding data resource. Lock names are case-sensitive. Lock names follow the same naming conventions as the corresponding local variables and global variables. A lock name can be subscripted or unsubscripted. Lock subscripts have the same naming conventions and maximum length and number of levels as variable subscripts. In InterSystems IRIS, the following are all valid and unique lock names: a, a(1), A(1), ^a, ^a(1,2), ^A(1,1,1). For further details, see the Variables chapter of Using ObjectScript. 
For performance reasons, it is recommended you specify lock names with subscripts whenever possible. For example, ^a(1) rather than ^a. Subscripted lock names are used in documentation examples. Lock names can be local or global. A lock name such as A(1) is a local lock name. It applies only to that process, but does apply across namespaces. A lock name that begins with a caret (^) character is a global lock name; the mapping for this lock follows the same mapping as the corresponding global, and thus can apply across processes, controlling their access to the same resource. (See Global Structure in Using Globals.) Process-private global names can not be used as lock names. Attempting to use a process-private global name as a lock name performs no operation and completes without issuing an error. A lock name can represent a local or global variable, subscripted or unsubscripted. It can be an implicit global reference, or an extended reference to a global on another computer. (See Global Structure in Using Globals.) The data resource corresponding to a lock name does not need to exist. For example, you may lock the lock name ^a(1,2,3) whether or not a global variable with the same name exists. Because the relationship between locks and data resources is an agreed-upon convention, a lock may be used to protect a data resource with an entirely different name. locktype A letter code specifying the type of lock to apply or remove. locktype is an optional argument; if you omit locktype, the lock type defaults to an exclusive non-escalating lock. If you omit locktype, you must omit the pound sign (#) prefix. If you specify locktype, the syntax for lock type is a mandatory pound sign (#), followed by quotation marks enclosing one or more lock type letter codes. Lock type letter codes can be specified in any order and are not case-sensitive. The following are the lock type letter codes: S: Shared lock Allows multiple processes to simultaneously hold nonconflicting locks on the same resource. For example, two (or more) processes may simultaneously hold shared locks on the same resource, but an exclusive lock limits the resource to one process. An existing shared lock prevents all other processes from applying an exclusive lock, and an existing exclusive lock prevents all other processes from applying a shared lock on that resource. However, a process can first apply a shared lock on a resource and then the same process can apply an exclusive lock on the resource, upgrading the lock from shared to exclusive. Shared and Exclusive lock counts are independent. Therefore, to release such a resource it is necessary to release both the exclusive lock and the shared lock. All locking and unlocking operations that are not specified as shared (“S”) default to exclusive. A shared lock may be incremental; that is, a process may issue multiple shared locks on the same resource. You may specify a shared lock as escalating (“SE”) when locking and unlocking. When unlocking a shared lock, you may specify the unlock as immediate (“SI”) or deferred (“SD”). To view the current shared locks with their increment counts for escalating and non-escalating lock types, refer to the system-wide lock table, described in the “Lock Management” chapter of Using ObjectScript. E: Escalating lock Allows you to apply a large number of concurrent locks without overflowing the lock table. By default, locks are non-escalating. When applying a lock, you can use locktype “E” to designate that lock as escalating. 
When releasing an escalating lock, you must specify locktype “E” in the unlock statement. You can designate both exclusive locks and shared (“S”) locks as escalating. Commonly, you would use escalating locks when applying a large number of concurrent locks at the same subscript level. For example, LOCK +^mylock(1,1)#"E",+^mylock(1,2)#"E",+^mylock(1,3)#"E".... The same lock can be concurrently applied as a non-escalating lock and as an escalating lock. For example, ^mylock(1,1) and ^mylock(1,1)#"E". InterSystems IRIS counts locks issued with locktype “E” separately in the lock table. For information on how escalating and non-escalating locks are represented in the lock table, refer to the “Lock Management” chapter of Using ObjectScript. When the number of “E” locks at a subscript level reaches a threshold number, the next “E” lock requested for that subscript level automatically attempts to lock the parent node (the next higher subscript level). If it cannot, no escalation occurs. If it successfully locks the parent node, it establishes one parent node lock with a lock count corresponding to the number of locks at the lower subscript level, plus 1. The locks at the lower subscript level are released. Subsequent “E” lock requests to the lower subscript level further increment the lock count of this parent node lock. You must unlock all “E” locks that you have applied to decrement the parent node lock count to 0 and de-escalate to the lower subscript level. The default lock threshold is 1000 locks; lock escalation occurs when the 1001st lock is requested. Note that once locking is escalated, lock operations preserve only the number of locks applied, not what specific resources were locked. Therefore, failing to unlock the same resources that you locked can cause “E” lock counts to get out of sync. In the following example, lock escalation occurs when the program applies the lock threshold + 1 “E” lock. This example shows that lock escalation both applies a lock on the next-higher subscript level and releases the locks on the lower subscript level:
Main
 TSTART
 SET thold=$SYSTEM.SQL.Util.GetOption("LockThreshold")
 WRITE "lock escalation threshold is ",thold,!
 SET almost=thold-5
 FOR i=1:1:thold+5 {
   LOCK +dummy(1,i)#"E"
   IF i>almost {
     IF ^$LOCK("dummy(1,"_i_")","OWNER") '= "" {WRITE "lower level lock applied at ",i,"th lock ",! }
     ELSEIF ^$LOCK("dummy(1)","OWNER") '= "" {WRITE "lock escalation",! WRITE "higher level lock applied at ",i,"th lock ",! QUIT }
     ELSE {WRITE "No locks applied",! }
   }
 }
 TCOMMIT
Note that only “E” locks are counted towards lock escalation. The following example applies both default (non-“E”) locks and “E” locks on the same variable. Lock escalation only occurs when the total number of “E” locks on the variable reaches the lock threshold:
Main
 TSTART
 SET thold=$SYSTEM.SQL.Util.GetOption("LockThreshold")
 WRITE "lock escalation threshold is ",thold,!
 SET noE=17
 WRITE "setting ",noE," non-escalating locks",!
 FOR i=1:1:thold+noE {
   IF i < noE {LOCK +a(6,i)} ELSE {LOCK +a(6,i)#"E"}
   IF ^$LOCK("a(6)","OWNER") '= "" { WRITE "lock escalation on lock a(6,",i,")",! QUIT }
 }
 TCOMMIT
Unlocking “E” locks is the reverse of the above. When locking is escalated, unlocks at the child level decrement the lock count of the parent node lock until it reaches zero (and is unlocked); because these unlocks decrement a count, they are not matched to specific locks.
When the parent node lock count reaches 0, the parent node lock is removed and “E” locking de-escalates to the lower subscript level. Any subsequent locks at the lower subscript level create specific locks at that level. The “E” locktype can be combined with any other locktype. For example, “SE”, “ED”, “EI”, “SED”, “SEI”. When combined with the “I” locktype it permits unlocks of “E” locks to occur immediately when invoked, rather than at the end of the current transaction. This “EI” locktype can minimize situations where locking is escalated. Commonly, “E” locks are automatically applied for SQL INSERT, UPDATE, and DELETE operations within a transaction. However, there are specific limitations on SQL data definition structures that support “E” locking. Refer to the specific SQL commands for details.
I: Immediate unlock
Immediately releases a lock, rather than waiting until the end of a transaction: Specifying “I” when unlocking a non-incremented (lock count 1) lock immediately releases the lock. By default, an unlock does not immediately release a non-incremented lock. Instead, when you unlock a non-incremented lock InterSystems IRIS maintains that lock in a delock state until the end of the transaction. Specifying “I” overrides this default behavior. Specifying “I” when unlocking an incremented lock (lock count > 1) immediately releases the incremental lock, decrementing the lock count by 1. This is the same behavior as a default unlock of an incremented lock. The “I” locktype is used when performing an unlock during a transaction. It has the same effect on InterSystems IRIS unlock behavior whether the lock was applied within the transaction or outside of the transaction. The “I” locktype performs no operation if the unlock occurs outside of a transaction. Outside of a transaction, an unlock always immediately releases a specified lock. “I” can only be specified for an unlock operation; it cannot be specified for a lock operation. “I” can be specified for an unlock of a shared lock (#"SI") or an exclusive lock (#"I"). Locktypes “I” and “D” are mutually exclusive. “IE” can be used to immediately unlock an escalating lock. This immediate unlock is shown in the following example:
TSTART
LOCK +^a(1)     // apply lock ^a(1)
LOCK -^a(1)     // remove (unlock) ^a(1)
                // An unlock without a locktype defers the unlock
                // of a non-incremented lock to the end of the transaction.
WRITE "Default unlock within a transaction.",!,"Go look at the Lock Table",!
HANG 10         // This HANG allows you to view the current Lock Table
LOCK +^a(1)     // reapply lock ^a(1)
LOCK -^a(1)#"I" // remove (unlock) lock ^a(1) immediately
                // this removes ^a(1) from the lock table immediately
                // without waiting for the end of the transaction
WRITE "Immediate unlock within a transaction.",!,"Go look at the Lock Table",!
HANG 10         // This HANG allows you to view the current Lock Table
                // while still in the transaction
TCOMMIT
D: Deferred unlock
Controls when an unlocked lock is released during a transaction. The unlock state is deferred to the state of the previous unlock of that lock. Therefore, specifying locktype “D” when unlocking a lock may result in either an immediate unlock or a lock placed in delock state until the end of the transaction, depending on the history of the lock during that transaction. The behavior of a lock that has been locked/unlocked more than once differs from the behavior of a lock that has only been locked once during the current transaction.
The “D” unlock is only meaningful for an unlock that releases a lock (lock count 1), not an unlock that decrements a lock (lock count > 1). An unlock that decrements a lock is always immediately released. “D” can only be specified for an unlock operation. “D” can be specified for a shared lock (#"SD") or an exclusive lock (#"D"). “D” can be specified for an escalating (“E”) lock, but, of course, the unlock must also be specified as escalating (“ED”). Lock types “D” and “I” are mutually exclusive. This use of “D” unlock within a transaction is shown in the following examples. The HANG commands give you time to check the lock’s ModeCount in the Lock Table. If the lock was only applied once during the current transaction, a “D” unlock immediately releases the lock. This is the same as “I” behavior. This is shown in the following example:
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"D" // Lock Table ModeCount: null (immediate unlock)
HANG 10
TCOMMIT
If the lock was applied more than once during the current transaction, a “D” unlock reverts to the prior unlock state. If the last unlock was a standard unlock, the “D” unlock reverts unlock behavior to that prior unlock’s behavior — to defer unlock until the end of the transaction. This is shown in the following examples:
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)     // Lock Table ModeCount: Exclusive
WRITE "1st unlock",!
HANG 5
LOCK -^a(1)#"D" // Lock Table ModeCount: Exclusive->Delock
WRITE "2nd unlock",!
HANG 5
TCOMMIT
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)     // Lock Table ModeCount: Exclusive->Delock
WRITE "1st unlock",!
HANG 5
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"D" // Lock Table ModeCount: Exclusive->Delock
WRITE "2nd unlock",!
HANG 5
TCOMMIT
If the last unlock was an “I” unlock, the “D” unlock reverts unlock behavior to that prior unlock’s behavior — to immediately unlock the lock. This is shown in the following examples:
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"I" // Lock Table ModeCount: null (immediate unlock)
WRITE "1st unlock",!
HANG 5
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"D" // Lock Table ModeCount: null (immediate unlock)
WRITE "2nd unlock",!
HANG 5
TCOMMIT
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"I" // Lock Table ModeCount: Exclusive
WRITE "1st unlock",!
HANG 5
LOCK -^a(1)#"D" // Lock Table ModeCount: null (immediate unlock)
WRITE "2nd unlock",!
HANG 5
TCOMMIT
If the last unlock was a “D” unlock, the “D” unlock follows the behavior of the last prior non-“D” lock:
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)#"D" // Lock Table ModeCount: Exclusive
WRITE "1st unlock",!
HANG 5
LOCK -^a(1)#"D" // Lock Table ModeCount: null (immediate unlock)
WRITE "2nd unlock",!
HANG 5
TCOMMIT
TSTART
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK +^a(1)     // Lock Table ModeCount: Exclusive
LOCK -^a(1)     // Lock Table ModeCount: Exclusive/2
WRITE "1st unlock",!
HANG 5
LOCK -^a(1)#"D" // Lock Table ModeCount: Exclusive
WRITE "2nd unlock",!
HANG 5
LOCK -^a(1)#"D" // Lock Table ModeCount: Exclusive->Delock
WRITE "3rd unlock",!
HANG 5
TCOMMIT
timeout The number of seconds or fractions of a second to wait for a lock request to succeed before timing out. timeout is an optional argument. If omitted, the LOCK command waits indefinitely for a resource to be lockable; if the lock cannot be applied, the process will hang. The syntax for timeout is a mandatory colon (:), followed by a numeric value or an expression that evaluates to a numeric value. Valid values are seconds with or without fractional tenths or hundredths of a second. Thus the following are all valid timeout values: :5, :5.5, :0.5, :.5, :0.05, :.05. Any value smaller than :0.01 is parsed as zero. A value of zero invokes one locking attempt before timing out. A negative number is equivalent to zero. Commonly, a lock will wait if another process has a conflicting lock that prevents this lock request from acquiring (holding) the specified lock. The lock request waits until either a lock is released that resolves the conflict, or the lock request times out. Terminating the process also ends (deletes) pending lock requests. Lock conflict can result from many situations, not just one process requesting the same lock held by another process. A detailed explanation of lock conflict and lock request wait states is provided in the “Lock Management” chapter of Using ObjectScript. If you use timeout and the lock is successful, InterSystems IRIS sets the $TEST special variable to 1 (TRUE). If the lock cannot be applied within the timeout period, InterSystems IRIS sets $TEST to 0 (FALSE). Issuing a lock request without a timeout has no effect on the current value of $TEST. Note that $TEST can also be set by the user, or by a JOB, OPEN, or READ timeout. The following example applies a lock on lock name ^abc(1,1), and unlocks all prior locks held by the process:
LOCK ^abc(1,1)
This command requests an exclusive lock: no other process can simultaneously hold a lock on this resource. If another process already holds a lock on this resource (exclusive or shared), this example must wait for that lock to be released. It can wait indefinitely, hanging the process. To avoid this, specifying a timeout value is strongly recommended:
LOCK ^abc(1,1):10
If a LOCK specifies multiple lockname arguments in a comma-separated list, each lockname resource may have its own timeout (syntax without parentheses), or all of the specified lockname resources may share a single timeout (syntax with parentheses). Without Parentheses: each lockname argument can have its own timeout. InterSystems IRIS parses this as multiple independent LOCK commands, so the timeout of one lock argument does not affect the other lock arguments. Lock arguments are parsed in strict left-to-right order, with each lock request either completing or timing out before the next lock request is attempted. With Parentheses: all lockname arguments share a timeout. The LOCK must successfully apply all locks (or unlocks) within the timeout period. If the timeout period expires before all locks are successful, none of the lock operations specified in the LOCK command are performed, and control returns to the process. InterSystems IRIS performs multiple operations in strict left-to-right order.
Therefore, in LOCK syntax without parentheses, the $TEST value indicates the outcome of the last (rightmost) of multiple lockname lock requests. In the following examples, the current process cannot lock ^a(1) because it is exclusively locked by another process. These examples use a timeout of 0, which means they make one attempt to apply the specified lock. The first example locks ^x(1) and ^z(1). It sets $TEST=1 because ^z(1) succeeded before timing out: LOCK +^x(1):0,+^a(1):0,+^z(1):0 The second example locks ^x(1) and ^z(1). It sets $TEST=0 because ^a(1) timed out. ^z(1) did not specify a timeout and therefore had no effect on $TEST: LOCK +^x(1):0,+^a(1):0,+^z(1) The third example applies no locks, because a list of locks in parentheses is an atomic (all-or-nothing) operation. It sets $TEST=0 because ^a(1) timed out: LOCK +(^x(1),^a(1),^z(1)):0 Using the Lock Table to View and Delete Locks System-wide InterSystems IRIS maintains a system-wide lock table that records all locks that are in effect and the processes that have applied them. The system manager can display the existing locks in the Lock Table or remove selected locks using the Management Portal interface or the ^LOCKTAB utility, as described in the “Lock Management” chapter of Using ObjectScript. You can also use the %SYS.LockQuery class to read lock table information. From the %SYS namespace you can use the SYS.Lock class to manage the lock table. You can use the Management Portal to view held locks and pending lock requests system-wide. Go to the Management Portal, select System Operation, select Locks, then select View Locks. For further details on the View Locks table refer to the “Lock Management” chapter of Using ObjectScript. You can use the Management Portal to remove (delete) locks currently held on the system. Go to the Management Portal, select System Operation, select Locks, then select Manage Locks. For the desired process (Owner) click either “Remove” or “Remove All Locks for Process”. Removing a lock releases all forms of that lock: all increment levels of the lock, all exclusive, exclusive escalating, and shared versions of the lock. Removing a lock immediately causes the next lock waiting in that lock queue to be applied. You can also remove locks using the SYS.Lock.DeleteOneLock() and SYS.Lock.DeleteAllLocks() methods. Removing a lock requires WRITE permission. Lock removal is logged in the audit database (if enabled); it is not logged in messages.log. Incremental Locking and Unlocking Incremental locking permits you to apply the same lock multiple times: to increment the lock. An incremented lock has a lock count of > 1. Your process can subsequently increment and decrement this lock count. The lock is released when the lock count decrements to 0. No other process can acquire the lock until the lock count decrements to 0. The lock table maintains separate lock counts for exclusive locks and shared locks, and for escalating and non-escalating locks of each type. The maximum incremental lock count is 32,766. Attempting to exceed this maximum lock count results in a <MAX LOCKS> error. You can increment a lock as follows: Plus sign: Specify multiple lock operations on the same lock name with the plus sign lock operation indicator. For example: LOCK +^a(1) LOCK +^a(1) LOCK +^a(1) or LOCK +^a(1),+^a(1),+^a(1) or LOCK +(^a(1),^a(1),^a(1)). All of these would result in a lock table ModeCount of Exclusive/3. Using the plus sign is the recommended way to increment a lock. 
No sign: It is possible to increment a lock without using the plus sign lock operation indicator by specifying an atomic operation performing multiple locks. For example, LOCK (^a(1),^a(1),^a(1)) unlocks all prior locks and incrementally locks ^a(1) three times. This too would result in a lock table ModeCount of Exclusive/3. While this syntax works, it is not recommended. Unlocking an incremented lock when not in a transaction simply decrements the lock count. Unlocking an incremented lock while in a transaction has the following default behavior: Decrementing Unlocks: each decrementing unlock immediately release the incremental unlock until the lock count is 1. By default, the final unlock puts the lock in delock state, deferring release of the lock to the end of the transaction. This is always the case when you delock with the minus sign lock operation indicator, whether or not the operation is atomic. For example: LOCK -^a(1) LOCK -^a(1) LOCK -^a(1) or LOCK -^a(1),-^a(1),-^a(1) or LOCK -(^a(1),^a(1),^a(1)). All of these begin with a lock table ModeCount of Exclusive/3 and end with Exclusive->Delock. Unlocking Prior Resources: an operation that unlocks all prior resources immediately puts an incremented lock into a delock state until the end of the transaction. For example, either LOCK x(3) (lock with no lock operation indicator) or an argumentless LOCK would have the following effect: the incremented lock would begin with a lock table ModeCount of Exclusive/3 and end with Exclusive/3->Delock. Note that separate lock counts are maintained for the same lock as an Exclusive lock, a Shared lock, a Exclusive escalating lock and a Shared escalating lock. In the following example, the first unlock decrements four separate lock counts for lock ^a(1) by 1. The second unlock must specify all four of the ^a(1) locks to remove them. The HANG commands give you time to check the lock’s ModeCount in the Lock Table. LOCK +(^a(1),^a(1)#"E",^a(1)#"S",^a(1)#"SE") LOCK +(^a(1),^a(1)#"E",^a(1)#"S",^a(1)#"SE") HANG 10 LOCK -(^a(1),^a(1)#"E",^a(1)#"S",^a(1)#"SE") HANG 10 LOCK -(^a(1),^a(1)#"E",^a(1)#"S",^a(1)#"SE") If you attempt to unlock a lock name that has no current locks applied, no operation is performed and no error is returned. Automatic Unlock When a process terminates, InterSystems IRIS performs an implicit argumentless LOCK to clear all locks that were applied by the process. It removes both held locks and lock wait requests. Locks on Global Variables Locking is typically used with global variables to synchronize the activities of multiple processes that may access these variables simultaneously. Global variables differ from local variables in that they reside on disk and are available to all processes. The potential exists, then, for two processes to write to the same global at the same time. In fact, InterSystems IRIS processes one update before the other, so that one update overwrites and, in effect, discards the other. Global lock names begin with a caret (^) character. To illustrate locking with global variables, consider the case in which two data entry clerks are concurrently running the same student admissions application to add records for newly enrolled students. The records are stored in a global array named ^student. To ensure a unique record for each student, the application increments the global variable ^index for each student added. 
The application includes the LOCK command to ensure that each student record is added at a unique location in the array, and that one student record does not overwrite another. The relevant code in the application is shown below. In this case, the LOCK controls not the global array ^student but the global variable ^index. ^index is a scratch global that is shared by the two processes. Before a process can write a record to the array, it must lock ^index and update its current value (SET ^index=^index+1). If the other process is already in this section of the code, ^index will be locked and the process will have to wait until the other process releases the lock (with the argumentless LOCK command). READ !,"Last name: ",!,lname QUIT:lname="" SET lname=lname_"," READ !,"First name: ",!,fname QUIT:fname="" SET fname=fname_"," READ !,"Middle initial: ",!,minit QUIT:minit="" SET minit=minit_":" READ !,"Student ID Number: ",!,sid QUIT:sid="" SET rec = lname_fname_minit_sid LOCK ^index SET ^index = ^index + 1 SET ^student(^index)=rec LOCK The following example recasts the previous example to use locking on the node to be added to the ^student array. Only the affected portion of the code is shown. In this case, the ^index variable is updated after the new student record is added. The next process to add a record will use the updated index value to write to the correct array node. LOCK ^student(^index) SET ^student(^index) = rec SET ^index = ^index + 1 LOCK /* release all locks */ Note that the lock location of an array node is where the top level global is mapped. InterSystems IRIS ignores subscripts when determining lock location. Therefore, ^student(name) is mapped to the namespace of ^student, regardless of where the data for ^student(name) is stored. Locks in a Network In a networked system, one or more servers may be responsible for resolving locks on global variables. You can use the LOCK command with any number of servers, up to 255. You can use ^$LOCK to list remote locks, but it cannot list the lock state of a remote lock. Remote locks held by a client job on a remote server system are released when you call the ^RESJOB utility to remove the client job. Local Variable Locks The behavior is as follows: Local (non-careted) locks acquired in the context of a specific namespace, either because the default namespace is an explicit namespace or through an explicit reference to a namespace, are taken out in the manager's dataset on the local machine. This occurs regardless of whether the default mapping for globals is a local or a remote dataset. Local (non-careted) locks acquired in the context of an implied namespace or through an explicit reference to an implied namespace on the local machine, are taken out using the manager's dataset of the local machine. An implied namespace is a directory path preceded by two caret characters: "^^dir". Referencing explicit and implied namespaces is further described in Global Structure in Using Globals. See Also ^$LOCK structured system variable Lock Management in Using ObjectScript Using ObjectScript for Transaction Processing in Using ObjectScript The Monitoring Locks section of the “Monitoring InterSystems IRIS Using the Management Portal” chapter in Monitoring Guide The article Locking and Concurrency Control
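The ^index pattern above (lock, increment a shared counter, write the record, unlock) is a general mutual-exclusion idiom. For readers more comfortable outside ObjectScript, here is the same idea expressed with Python's threading.Lock as a loose analogy only; it is not InterSystems code and does not model Delock or transaction semantics.

# Loose Python analogy to the ^index example: serialize "next record slot"
# updates so two writers never overwrite each other's entry.
import threading

index_lock = threading.Lock()
student_index = 0
students = {}

def add_student(record):
    """Reserve the next index under the lock, then store the record there."""
    global student_index
    with index_lock:              # analogous to LOCK ^index
        student_index += 1        # SET ^index = ^index + 1
        slot = student_index
        students[slot] = record   # SET ^student(^index) = rec
    # leaving the with-block releases the lock (like the argumentless LOCK)
    return slot

if __name__ == "__main__":
    print(add_student("Doe,John,Q:12345"))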
https://docs.intersystems.com/irisforhealthlatest/csp/docbook/Doc.View.cls?KEY=RCOS_clock
2021-04-11T02:21:54
CC-MAIN-2021-17
1618038060603.10
[]
docs.intersystems.com
Server Architecture¶ DjangoLDP server. Warning: This section needs improvement. LDP packages¶ Once you have your DjangoLDP server installed, you can plug DjangoLDP packages into it; these are extensions that handle a specific shape of data. Each component has its own data shapes. The server that will manage the data of a specific component has to install the related package. You’ll find the right package in the documentation of each component. Warning: This section needs improvement.
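As a hedged illustration of the idea above (plugging a data-shape package into a DjangoLDP server), the sketch below shows a Django-style settings entry that registers example packages. The setting name DJANGOLDP_PACKAGES and the package names are assumptions made for illustration; check each component's documentation for the exact package and configuration key.

# settings.py (sketch, assuming a DJANGOLDP_PACKAGES-style setting)
# Each entry is a DjangoLDP package that teaches the server one shape of data.
DJANGOLDP_PACKAGES = [
    "djangoldp_project",   # hypothetical package for a "project" component
    "djangoldp_skill",     # hypothetical package for a "skill" component
]

# The packages are also regular Django apps, so they are typically listed in
# INSTALLED_APPS as well (directly or via the DjangoLDP tooling).
INSTALLED_APPS = [
    "django.contrib.contenttypes",
    "django.contrib.auth",
    "djangoldp",
    *DJANGOLDP_PACKAGES,
]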
https://docs.startinblox.com/import_documentation/server-architecture.html
2021-04-11T01:05:12
CC-MAIN-2021-17
1618038060603.10
[]
docs.startinblox.com
Labeling Jobs allow assigning annotation tasks to labelers. While you can call the annotation interface directly from the Projects page, there are some difficulties you may face:
Job management - the need to describe a particular task: what kind of objects to annotate and how
Progress monitoring - tracking annotation status and reviewing submitted results
Access permissions - limiting access only to specific datasets and classes within a single job
Labeling Jobs address these problems in the following way: Consider this case: we want to annotate a large project with two people, split the whole process into separate jobs and track overall progress. How to do that? This step is optional, but it's much easier to manage access permissions when you have specialized teams for annotation jobs. Click on the teams selector in the left menu to open the teams list. Add a new team, for example, "Labeling team". You can upload datasets to be annotated here or use the "Clone to..." action to copy existing datasets from the other team. You can invite an existing user from the "Members" page or create new ones from the "Users" section as an admin user. In this example we will show both ways. Click the "Signup" button and enter a login and password for a new labeler account. We will check the "Restricted" option - in this case the new user will have no access to the Explore section, won't be able to create or switch teams, and no personal team will be created for them during signup. Now go to the "Members" page and invite an existing user to your "Labeling team". Choose an "Annotator" role - that means the user won't have access to any page apart from "Labeling Jobs". You can choose a "Developer" role if you want that user to be able to upload new data, but we advise uploading datasets in a separate "Working" team and then cloning them to the "Labeling" team - that way it will be easier to separate research from annotation and manage labeling jobs. Let's make some README for the project to be annotated. Open the "Projects" page, choose your project and go to the "Info" tab. Click the "edit" button to edit the project readme information. We support markdown so you can attach images with annotation examples, describe valid and invalid cases and so on. Open the "Labeling Jobs" page and click "Add" in the top left corner. You can change the "labeling job" name, add a description and readme if you want. Then assign one or more labelers to this job. In our example we selected two users so images in the selected datasets will be equally divided between them. Then in the "Data to annotate" section choose your project and dataset. Then in the "Annotation settings" section select classes and tags that will be available to the labeler. Additionally, you can limit the number of objects and tags that the labeler can create on each image. In the "Images filtering" section you can specify the parameters by which images will be filtered for annotation. In our case, the "Images range" parameter is disabled, since we assigned this "labeling job" to two users and it will be calculated automatically. Click "Create" to finish. You will be redirected to the "labeling jobs" page. Here we see that for each labeler a separate "labeling job" was created. Let's sign in as a labeler. As you can see, "labeler1" can only access labeling jobs in this team. But they see all jobs assigned to them. The "Info" button provides the necessary information about the labeling job and the project readme we set up earlier. If you click on the job title you will be redirected to the annotation tool.
This will automatically change the job status from "Pending" to "In progress" so that the manager can see that the job has been started. As you can see, we can't create new classes, and only the classes that were selected when creating the "labeling job" are available to us. Annotate an image and, when we are done, click "Confirm" to mark that image as completed. The manager will see our progress in the jobs list. They can also track labeling statistics like created object counts, average annotation time per class, etc. After all images are annotated, go back to "Labeling Jobs" and click the "Submit" button. This will mark the job as completed and remove it from the labeler's list. Let's get back to the Manager account. After the annotator completes the "labeling job", its status will change to "On review". Click on the name of the "labeling job" and accept or decline the annotations of each image. After all the images are checked, go back to the "labeling jobs" list and complete the "labeling job" or, if there are rejected images, restart it. There are two types of statistics: Member stats and "Labeling job" stats. Go to the "Members" page and click the "Stats" button. "Member" statistics contain information about user actions in the current team. They can be filtered by the selected time period and contain information about:
Labeling time
Completed jobs
Labeled images
Labeled objects
Reviewed images
To view annotation statistics, click the "Stats" button under the "labeling job". "Labeling job" statistics are divided into three parts:
Job activity - general information about the "labeling job":
Labeled images - annotated images count
Job duration - total time since creating the "labeling job"
Editing duration - total time of object editing
Annotation duration - total time spent in the annotation interface
Statistics per class
Statistics per image
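The walkthrough above notes that when a job is assigned to two labelers, the images in the selected datasets are divided equally between them. Here is a small, generic Python sketch of such a round-robin split, included only to make the idea concrete; it is not Supervisely's actual implementation.

# Generic round-robin split of images among labelers -- an illustration only,
# not how Supervisely assigns work internally.
from collections import defaultdict

def split_images(image_ids, labelers):
    """Assign images to labelers as evenly as possible, round-robin."""
    assignments = defaultdict(list)
    for index, image_id in enumerate(image_ids):
        assignments[labelers[index % len(labelers)]].append(image_id)
    return dict(assignments)

if __name__ == "__main__":
    images = [f"img_{n}.jpg" for n in range(7)]
    print(split_images(images, ["labeler1", "labeler2"]))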
https://docs.supervise.ly/labeling/jobs
2021-04-11T00:40:31
CC-MAIN-2021-17
1618038060603.10
[]
docs.supervise.ly
The Conflict detection attempts parameter on a Windows DHCP server instructs the server to attempt to ping the IP address it is about to assign to determine if the IP address is already in use. This helps to prevent duplicate IP addresses on the network. In Windows, this parameter takes the number of pings as its value. In Address Manager, you can set the conflict detection option at the server level. Note: The ping check option also modifies the conflict detection parameter in Windows. You can set ping check at different levels. However, unlike the conflict detection option, you can only use the ping check option to enable or disable conflict detection. To set conflict detection:
- Select the server for which you want to add the conflict detection option.
- Select the Deployment Options tab.
- Under Deployment Options, click New, then select DHCP Service Option.
- From the Options drop-down menu, select Conflict Detection.
- Select the Enabled check box.
- Under Change Control, add comments, if required.
- Click Add.
For more information on configuring DHCP Service options, refer to Adding DHCPv4 service deployment options.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Setting-Conflict-Detection/8.3.2
2021-04-11T01:49:58
CC-MAIN-2021-17
1618038060603.10
[]
docs.bluecatnetworks.com
Build and Flash with Eclipse IDE¶ ESP-IDF V4.0 will be released with a new CMake-based build system as the default build system. Eclipse CDT IDE support for CMake-based build system will be available before the ESP-IDF V4.0 release but is not available yet. We apologise for the inconvenience. If you require Eclipse IDE support for this pre-release version of ESP-IDF, you can follow the legacy GNU Make build system Getting Started guide which has steps for Building and Flashing with Eclipse IDE.
https://docs.espressif.com/projects/esp-idf/en/v4.0/get-started/eclipse-setup.html
2021-04-11T01:51:41
CC-MAIN-2021-17
1618038060603.10
[]
docs.espressif.com
Seeing is believing. Therefore, seeing a demo or trying out our services can open a raft of possibilities to understand how our software works and how you can imagine it to be integrated into your own identity verification flow. You can book a free trial or a demo here. Once we have a quick chat with you about your needs, we will create an account for you to log in to the dashboard and get started. Once you have logged into the dashboard and changed your default password upon first login, you will need to activate the account by filling out a few company and contact details and accepting our standard subscription agreement and privacy policy. If you are a large enterprise with very high volumes and would like a separate contract to be executed, please write to us at [email protected]. Get started by generating sandbox keys to integrate and test the services. You will be given a certain number of free credits to carry out your sandbox testing. You can also go for a developer license which gives you continued access with further free credits for up to three months to complete your integration and testing. Once you are ready, you can migrate to production by swapping your Sandbox keys for Production keys and pointing the environment to production. You will also have to choose a Service plan and complete your billing before you can access the production services. If you are a large enterprise with high volumes and would like monthly billing options with a purchase order, please write to us at [email protected]. Whatever plan you choose, you will automatically receive new updates and bug fixes as long as you have an active account and sufficient credits. You can also get support by writing to [email protected].
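The sandbox-to-production migration described above usually comes down to swapping the API key and base URL your integration points at. The Python sketch below shows one way to keep that switch in configuration; the environment-variable names and URLs are hypothetical placeholders, not documented frslabs endpoints.

# Illustrative environment switch between sandbox and production credentials.
# Variable names and base URLs are placeholders, not documented frslabs values.
import os

ENVIRONMENTS = {
    "sandbox": {
        "base_url": "https://sandbox.api.example.com",  # placeholder URL
        "api_key_var": "FRSLABS_SANDBOX_KEY",            # placeholder env var
    },
    "production": {
        "base_url": "https://api.example.com",           # placeholder URL
        "api_key_var": "FRSLABS_PRODUCTION_KEY",         # placeholder env var
    },
}

def get_client_config(environment: str = "sandbox") -> dict:
    """Return the base URL and API key for the chosen environment."""
    settings = ENVIRONMENTS[environment]
    api_key = os.environ.get(settings["api_key_var"], "")
    if not api_key:
        raise RuntimeError(f"Set {settings['api_key_var']} before calling the API")
    return {"base_url": settings["base_url"], "api_key": api_key}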
https://docs.frslabs.com/introduction/get-started
2021-04-11T01:22:24
CC-MAIN-2021-17
1618038060603.10
[]
docs.frslabs.com
Sending domains and IPs Here you can see the domains and IP addresses that Phish Threat uses to send campaign emails. Go to the Phish Threat settings to review your domains and IP addresses. You must allow email and web traffic to and from these IPs and domains on your email gateway, web proxy, firewall appliance, or anywhere else in your environment where email and web filtering is done. You can also find out more about how Office 365 ATP Safe Links and Safe Attachments interact with Phish Threat V2. This list updates when we add new IPs and domains.
IP addresses
To ensure successful delivery of Phish Threat emails, you must add the following IP addresses to your allow list:
- 54.240.51.52
- 54.240.51.53
Domain names
You must also add the domains listed below to your allow lists. If you're using an external email proxy (including Central Email), you may also need to amend your SPF records. Links contained within campaign emails are configured to redirect users to an awstrack.me URL. This is expected behavior, as Phish Threat uses AWS tracking to determine which users have clicked on the malicious links.
- amaz0nprime.store
- auditmessages.com
- awstrack.me
- bankfraudalerts.com
- buildingmgmt.info
- corporate-realty.co
- court-notices.com
- e-billinvoices.com
- e-documentsign.com
- e-faxsent.com
- e-receipts.co
- epromodeals.com
- fakebookalerts.live
- global-hr-staff.com
- gmailmsg.com
- goog1e-mail.com
- helpdesk-tech.com
- hr-benefits.site
- it-supportdesk.com
- linkedn.co
- mail-sender.online
- memberaccounts.co
- micros0ft.tech
- myhr-portal.site
- online-statements.site
- outlook-mailer.com
- secure-alerts.co
- secure-bank-alerts.com
- shipping-updates.com
- tax-official.com
- toll-citations.com
- trackshipping.online
- voicemailbox.online
- itunes.e-reciepts.co
- sophos-phish-threat.go-vip.co
- go-vip.co
Office 365 ATP Safe Links and Safe Attachments
Office 365 Advanced Threat Protection (ATP) offers security features such as Safe Links and Safe Attachments. If the Phish Threat V2 IP addresses and domain names are not included in the allow list, Office 365 executes the links. This makes it seem like an end user has clicked on the links. To ensure the proper execution of Phish Threat V2 with Office 365, set up an exception for Phish Threat for both Safe Links and Safe Attachments in Office 365. For instructions on how to set up these exceptions, see IP addresses and domains.
Other 3rd party email scanning products and Phish Threat V2
Other 3rd party email security products may apply their own scanning techniques that open links and attachments in emails as they are processed. If this is the case, you may receive reports indicating that your users have clicked links. Please make sure the above IPs and domains are added to allow lists within the 3rd party product. We are aware that some 3rd party solutions do not allow their security features to be bypassed in this way. We are actively investigating ways to prevent false positive campaign results caused by 3rd party security products. We hope to include these in Phish Threat in the near future.
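Since the core instruction above is "add these IPs and domains to your allow lists", a small generic check like the one below can help verify that a given sender address or link domain is covered. The IPs and a few domains are copied from the article; the checking logic itself is only an illustration, not a Sophos tool.

# Quick allow-list check for Phish Threat sending infrastructure.
# The IPs and domains come from the article above; the helper is illustrative.
PHISH_THREAT_IPS = {"54.240.51.52", "54.240.51.53"}
PHISH_THREAT_DOMAINS = {
    "awstrack.me", "auditmessages.com", "court-notices.com",
    "e-documentsign.com", "sophos-phish-threat.go-vip.co", "go-vip.co",
    # ...remaining domains from the list above
}

def is_phish_threat_sender(ip: str = "", domain: str = "") -> bool:
    """Return True if the IP or domain belongs to Phish Threat's sending set."""
    domain = domain.lower().lstrip(".")
    domain_match = any(domain == d or domain.endswith("." + d) for d in PHISH_THREAT_DOMAINS)
    return ip in PHISH_THREAT_IPS or domain_match

if __name__ == "__main__":
    print(is_phish_threat_sender(ip="54.240.51.52"))           # True
    print(is_phish_threat_sender(domain="links.awstrack.me"))  # True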
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/PhishThreatSettings.html
2021-04-11T02:09:50
CC-MAIN-2021-17
1618038060603.10
[]
docs.sophos.com
supports the creation of a tenant using the SAML (Security Assertion Markup Language) Security Manager. Users in this tenant can log into frevvo via identity providers such as Google, Shibboleth, OpenSSO, ADFS, PingFederate, and OneLogin. In a SAML environment, integration with an LDAP server for authentication is common. In general, here's how it works: On this page: frevvo only supports/certifies the SAML Security Manager when frevvo. SAML will authenticate the user but not retrieve any of the attributes. You may choose to use this mode if you: do not want to add frevvo. A frevvo tenant is required. In the directions given below, the Service Provider refers to frevvo. The metadata for your frevvo SAML tenant must be obtained first. Customers will need to configure the frevvo metadata when creating the SAML tenant. Cloud customers can skip the Generate Your Certificate and Install the Java Cryptography steps. These instructions are provided for On-premise customers only. If you changed the default keystore password, substitute the new password in the command:
keytool -exportcert -alias frevvo -file frevvo.rfc -rfc -keystore frevvoKeystore.jks -storepass p@ssw0rd
c:[Type == "", Issuer == "AD AUTHORITY"] => add(store = "Active Directory", types = (""), query = ";Manager;{0}", param = c.Value);
Manager SAM1
c:[Type == ""] => add(Type = "", Value = RegExReplace(c.Value, ",[^\n]*", ""));
ManagerAccountName
c:[Type == ""] => issue(Type = "", Value = RegExReplace(c.Value, "^CN=", ""));
In the Identity Provider, configure frevvo as the Service Provider to configure Single Sign On. These instructions are for Cloud. On-Premise customers follow the same steps with one additional step to generate a certificate:
userId,tenant,firstName,lastName,email,enabled,reportsTo,roles,transaction
{user}@{domain},{tenant},123,{first},{last},{email},true,,frevvo.Designer|frevvo.TenantAdmin,
To successfully create a frevvo tenant using the SAML Security Manager, you will need the following: the frevvo metadata file. frevvo cloud customers migrating a tenant to the SAML Security Manager will make the changes via the Edit Tenant screen. Once accessed, follow these steps beginning with step 2. Log onto frevvo as the superuser (on-premise) or the tenant admin (cloud). Check the Authentication Only checkbox if you want SAML to handle authentication and provide user identification but all other user attributes come from the frevvo database. When checked, the screen display changes as attribute mapping, other than the mapping for the user id and custom attributes, is no longer necessary. All users requiring access to frevvo. You may choose this mode if: When Authentication Only is selected (checked) there is no discovery of Users & Roles. They must be created in your tenant manually. The CSV upload is a good way to do this. When Authentication Only is not selected (unchecked), frevvo will send an email to the tenant admin indicating that the user is unknown. Routing based on the user's manager will fail. Routing based on a role will succeed but the user will receive no notification. Manually creating/uploading users/roles in frevvo ahead of time avoids this situation. It is important to know that a SAML tenant with Authentication Only unchecked means that authentication and authorization are handled by SAML/LDAP. Users are added/updated through discovery.
If a tenant admin modifies user information in the frevvo UI, for example, changes an email address or adds a role for a user, the changes will stay in effect until the user logs out of the tenant and then logs back in. When the user logs back in, the changes made in the frevvo UI are overwritten. Discovery updates only occur when the user logs into the tenant. The admin "login as" feature will not execute a discovery update. Browse the URL below to initiate the SAML authentication process by redirecting to the Identity Provider login page. Cloud Customers: {t}/login. Replace {t} with the name of your SAML tenant. Please see this documentation on the use of the sameSiteCookies attribute to ensure compatibility with your SAML configuration. Just a reminder that the tenant admin account can log in directly into frevvo or use the SAML login. When you create/edit a new tenant you are prompted to set up/modify a tenant admin user id, password and email address. This tenant admin does not authenticate via your SAML IDP. It only exists in frevvo. If your tenant originally used the Default Security Manager and then you changed to the SAML Security Manager, this tenant admin account has already been set up. If you have forgotten the password, you can change it by: The frevvo superuser admin (Cloud customers) and the in-house superuser can change the password for the built-in userid from the Edit Tenant page. What if you do not remember the userid of your original tenant admin? Follow these steps: The frevvo superuser admin (Cloud customers) and the in-house superuser can see the built-in tenant admin userid from the Edit Tenant page. If your SAML userIds are in the format <username>@<domain name>, when you login to frevvo the frevvo tenant name is appended to the userId. This is as designed. You will see <username@domain name@frevvo tenant name> as the logged in user at the top of the screen. If your domain name is the same as your frevvo tenant name, it will appear as if the domain name is listed twice. Session timeouts are configured in frevvo and in your SAML Security Manager, and will work in the following scenarios: Embedding forms and workflows into your website (and other use of the Link (Email/Webpage) share URL). Users will see an error like this one if you open your browser's console: Refused to display 'https://....' in a frame because it set 'X-Frame-Options' to 'deny'. If the tenant is using a SAML security manager, always use the Raw form link (see this documentation) to access your forms. This link will not load the form in a frame and login will work as expected. If you are embedding your forms inside another website, then make certain that the user has to log in to the IDP before they can see that web page. If the user is already logged in, the form will load correctly (even inside a frame). Customers using the SAML Security Manager often want to schedule a daily upload batch job to automatically handle the synchronization between their Active Directory and frevvo. Ensure you have the supported version of the JDK installed, which includes the JCE. Chrome browser: you may see "Connection is not private" messages - skip these warnings and the login page displays. Internet Explorer: you may see "site is not secure" or "Content was blocked because it was not signed by a valid security certificate" - skip these warnings to see the login page. The table below lists errors you may encounter when configuring your tenant with the SAML Security Manager. Verify the recommended values to resolve.
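The tenant-creation notes above include a CSV header (userId,tenant,firstName,lastName,email,enabled,reportsTo,roles,transaction) for manually uploading users when Authentication Only is enabled. As a convenience sketch only (not a frevvo tool), the Python below writes a CSV in that documented column order; the sample user is invented.

# Build a users CSV matching the column order shown in the article.
# The sample row is an invented placeholder, not a real account.
import csv

COLUMNS = ["userId", "tenant", "firstName", "lastName", "email",
           "enabled", "reportsTo", "roles", "transaction"]

users = [
    {"userId": "jane@example.com", "tenant": "mytenant", "firstName": "Jane",
     "lastName": "Doe", "email": "jane@example.com", "enabled": "true",
     "reportsTo": "", "roles": "frevvo.Designer|frevvo.TenantAdmin", "transaction": ""},
]

with open("frevvo_users.csv", "w", newline="") as handle:
    writer = csv.DictWriter(handle, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(users)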
https://docs.frevvo.com/d/pages/viewpage.action?pageId=22449799
2021-04-11T00:50:36
CC-MAIN-2021-17
1618038060603.10
[]
docs.frevvo.com
Endpoint: Application Control
Application control lets you detect and block applications that are not a security threat, but that you decide are unsuitable for use in the office.
- Select an application category, for example Browser plug-in. A full list of the applications in that category is displayed in the right-hand table.
- We recommend that you select Select all applications. You'll refine your selection later.
- Click Save to List and repeat for each category you want to control.
If you want to control an application that isn't in the list supplied by Sophos, you can ask to have it added. Click the "Application Control Request" link at the bottom of the Application Control settings.
- In Detection Options:
- Click Detect controlled applications when users access them (You will be notified).
- Click Block the detected application.
If you switch off Desktop Messaging, you will not see any notification messages related to Application Control.
- Enter the text you want to add.
https://docs.sophos.com/central/Partner/help/en-us/central/common/tasks/ConfigureAppControl.html
2021-04-11T01:13:08
CC-MAIN-2021-17
1618038060603.10
[]
docs.sophos.com
The Freshdesk connector supports full refresh and incremental sync. You can choose whether this connector copies only the new or updated data, or all rows in the tables and columns you set up for replication, every time a sync is run. There are two types of incremental sync:
- server level (native) - when the API supports filtering on specific columns that Airbyte uses to track changes (updated_at, created_at, etc.)
- client level - when the API doesn't support filtering and Airbyte performs the filtering on its side.
Several output streams are available from this source: If there are more endpoints you'd like Airbyte to support, please create an issue. The Freshdesk connector should not run into Freshdesk API limitations under normal usage. Please create an issue if you see any rate limit issues that are not automatically retried successfully.
Requirements:
- Freshdesk Account
- Freshdesk API Key
Please read How to find your API key.
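To make the "server level (native)" case concrete, here is a rough sketch (not part of the connector itself) of how a client can ask Freshdesk to filter by updated_at, using the same account domain and API key the connector needs. The domain, key, and cursor value are placeholders, and the exact endpoint parameters should be checked against the Freshdesk API documentation.

import requests

# Placeholders: substitute your own account domain, API key, and cursor.
DOMAIN = "yourcompany"          # yourcompany.freshdesk.com
API_KEY = "your-api-key"
last_sync_cursor = "2021-01-01T00:00:00Z"

# Ask the server to return only tickets updated after the stored cursor,
# which is the essence of server-side incremental sync.
resp = requests.get(
    f"https://{DOMAIN}.freshdesk.com/api/v2/tickets",
    params={"updated_since": last_sync_cursor, "per_page": 100},
    auth=(API_KEY, "X"),        # Freshdesk uses the API key as the username
    timeout=30,
)
resp.raise_for_status()
for ticket in resp.json():
    print(ticket["id"], ticket["updated_at"])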
https://docs.airbyte.io/integrations/sources/freshdesk
2021-04-11T01:47:40
CC-MAIN-2021-17
1618038060603.10
[]
docs.airbyte.io
Create an Asset¶ Assets are the digital version of the system, machine, or equipment being monitored. It's a collection of signals sourced from connected devices (sensors, protocol inquiries, status information, etc), transforms, rules, and content. Hint To begin adding Assets you need to have data channels from connected devices hooked into ExoSense. By default the device simulator IoT Connector is setup. A quick introduction to the ExoSense data flow¶ ExoSense's Digital Twin Assets represent a physical asset, such as a piece of equipment. The ExoSense assets store streaming data as Signals. For example, if you have a temperature sensor, you'll have a signal that represents this in the Asset. The signal includes properties such as the data type (Temperature) and unit to ensure that any use of this signal in the application knows that this is a temperature piece of information. IoT Device Channel Asset Signal All Signals have a source in ExoSense, which are from connected IoT devices. Specifically the connected devices have specified a configuration file which defines the Channels it has. The channels represent streaming sensor, status, and control information. The channel configuration has properties (data type, data unit, etc) for these channels so it's known from end to end what the source of truth is for the type of data being processed and visualized. When creating an asset in ExoSense, you are setting up signals subscribed to these IoT device channels. Device to Asset mapping ExoSense's data pipeline is extremely flexible to fix different IoT device deployment needs: 1 Device 1 Asset One IoT device can send channel data to one asset, which is the most common use case. Many Devices 1 Asset Multiple IoT devices can send channel data to 1 asset. For example, multiple wireless devices collecting sensor data for room environment can all be mapped as signals in one asset in ExoSense. 1 Device Many Assets One IoT device may send channels to multiple Assets. This deployment topology happens when a single gateway may be communicating on a fieldbus talking to sensors on a few machines - each of which will be represented as their own Asset. Duplicate Signals The same IoT Device channel can be the source for signals on multiple Assets. There are times when having the same device channel flow of data is useful to see on two different assets. Important Note: This will multiply the data points generated and stored. Assets also contain Dashboards, Content, and meta information which is covered in more detail in the ExoSense guide reference material. Create an Asset from a device¶ This method uses a connected device's channel configuration to create an Asset with Signals to match each of the Device's channels. It's the quickest and simplest way to create an asset with useful Signals. A note about simulated devices¶ When you deploy ExoSense, a special IoT Connector with simulated devices is also created to be sure users have some data to try out to create digital Assets. The simulator sends data in ExoSense's required data schema. ExoSense starting point templates included simulators that will automatically begin running with a set of data for that application type. Instead, if the base ExoSense starting point is used, the device simulator application needs to have devices added to it. Check out the guide for using the ExoSense Device Simulator for more information. 
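As a rough illustration of the channel configuration idea described above (and only an illustration: the field names here are hypothetical and not taken from the ExoSense documentation, so check your device's actual config schema), a channel definition that declares a data type and unit for each stream could look something like this:

import json

# Hypothetical sketch of a device channel configuration: each channel declares
# properties such as a data type and unit so that asset signals subscribed to
# it know what kind of data they are receiving.
channel_config = {
    "channels": {
        "temp01": {
            "display_name": "Bearing Temperature",
            "properties": {"data_type": "TEMPERATURE", "data_unit": "DEG_CELSIUS"},
        },
        "vib01": {
            "display_name": "Vibration",
            "properties": {"data_type": "ACCELERATION", "data_unit": "METER_PER_SEC2"},
        },
    }
}

print(json.dumps(channel_config, indent=2))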
Next steps¶ - Use the reference information about Assets in Exosense to build-out and customize your asset and learn about creating assets from templates. - Build your asset dashboards - Create Asset Templates
https://docs.exosite.io/getting-started/creating-assets/
2021-04-11T02:10:58
CC-MAIN-2021-17
1618038060603.10
[array(['../../assets/platform/create_asset_from_device.gif', None], dtype=object) array(['../../assets/exosense/add_csv_data_simulator.gif', None], dtype=object) array(['../../assets/platform/add_and-assign_unclaimed_device.gif', None], dtype=object) ]
docs.exosite.io
Store SSL session keys on connected Trace appliances This procedure shows you how to enable the storage of SSL session keys on connected Trace appliances. Keys are stored for all sessions that the Discover appliance can decrypt. These keys include SSL session keys derived from SSL decryption keys you upload on the SSL Decryption Keys page, and keys received from PFS session key forwarders. - Log into the Admin UI on the Discover appliance. - In the System Configuration section, click Capture. - Click SSL Session Key Storage. - Select Enable SSL Session Key Storage. - Click Save. Next steps For more information about downloading session keys, see Download session keys with packet captures.
https://docs.extrahop.com/7.5/session-key-storage/
2021-04-11T00:50:08
CC-MAIN-2021-17
1618038060603.10
[]
docs.extrahop.com
Feature Control¶ Administrators with the Setup - Feature permission enabled are able to enable/disable certain features of the application. These features are able to be turned on / off for at least one of the following reasons: - Features that require connected device hardware support - Features that have a high level of complexity, beyond the level of support that the admin may want to expose. - Features that are in BETA, meaning they are available for use but require an understanding that they have known limitations. - Features that have direct or indirect usage costs Features that can be enabled / disabled include: Info Additional features may be listed on your Features tab based on unreleased features being enabled or paid add-on features.
https://docs.exosite.io/exosense/admin/feature-control/
2021-04-11T01:50:54
CC-MAIN-2021-17
1618038060603.10
[]
docs.exosite.io
The nova project is large, and there are lots of complicated parts in it where it helps to have an overview to understand how the internals of a particular part work. The following is a dive into some of the internals in nova, such as the ComputeDriver.update_provider_tree method.
https://docs.openstack.org/nova/rocky/reference/index.html
2021-04-11T01:57:44
CC-MAIN-2021-17
1618038060603.10
[]
docs.openstack.org
Your existing videos can benefit greatly from transparent P2P streaming. Transparent P2P streaming allows viewers of your video to optimally share pieces of the video with each other while they watch - increasing the streaming capacity of your server and increasing video download speed. To understand P2P streaming, view the demo at At the moment we are working on releasing an open source transparent P2P video streaming library as well as a hosted solution. For updates please register at For now you can try out the BitTorrent streaming API below. To stream from BitTorrent in the browser all you need is a torrentId. A torrentId can be an infoHash, magnet or .torrent file. To stream a magnet, add the magnet to the Seedess player URL. The following magnet is for an mp4 video. magnet:?xt=urn:btih:ab3f1350ebe4563a710545d0e33e09a7b7943ecf The player URL loads the video player. You can generate a video player for any magnet by appending the magnet to the end of the player URL. Copy and paste the below player example to your browser URL. Video Player URL The HTML embed can be embedded directly into your HTML page. Video Player HTML Embed For more examples view the BitTorrent Streaming section.
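As a rough sketch of how you might assemble a player URL and an HTML embed for the example magnet above: the PLAYER_BASE value is a placeholder assumption rather than the real Seedess player address, and the iframe attributes are purely illustrative.

import html

# Build a player URL by appending the magnet to an assumed player base URL,
# then wrap it in an iframe embed string.
PLAYER_BASE = "https://seedess.example/video/#"   # placeholder, not the real base URL
magnet = "magnet:?xt=urn:btih:ab3f1350ebe4563a710545d0e33e09a7b7943ecf"

player_url = PLAYER_BASE + magnet
embed = (
    '<iframe src="{src}" width="640" height="360" '
    'frameborder="0" allowfullscreen></iframe>'
).format(src=html.escape(player_url, quote=True))

print(player_url)
print(embed)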
https://docs.seedess.com/
2021-04-11T02:01:34
CC-MAIN-2021-17
1618038060603.10
[]
docs.seedess.com
MaXX Compositor XcompMgr still relevant, if used for the right reasons As of MaXX Interactive Desktop Indy v1.0, we ship a Composition Manager called XcompMgr. XcompMgr is our tweaked version of Keith Packard's Compositor with many contributors over the years. People are saying it's a bit old and they are right! However, it's simple, easy to maintain and it works just fine for what we need it to be. The reasoning behind using XcompMgr is mainly to leverage server-side composition and window content caching. This dramatically reduces Expose events (redraws) that force X11 windows to redraw themselves over and over when damaged. On a complex graphic application, well, your machine is wasting valuable resources redrawing itself. The drawback is that, due to the nature of X11, there is a lot of back and forth to and from the XServer... However, if you have a decent system with a good GPU card, what the heck, go for drop shadows... And there's still way less eye-candy crap happening than on *others*. The performance hit is marginal on fast hardware, reducing Expose events by a BIG factor, and it looks smashing (if you want to). Try these two variations of XcompMgr on a winterm window:
- No shadow but super fast server-side composition with reduced Expose events (default)
$ XcompMgr -a
- For nice shadows and fewer Expose events (but less efficient from an X11 protocol point of view)
$ XcompMgr -C -f
Testing out To see how it works, just move any window over, let's say, gmemuage or gr_osview and you will understand... try without XcompMgr first, then with the two options. There are lots of options, I invite you to try them... XcompMgr -h for help
$ XcompMgr v1.1.5 beta MaXX Desktop Edition
usage: XcompMgr [options]
Options
-d display Specifies which display should be managed.
-r radius Specifies the blur radius for client-side shadows. (default 12)
-o opacity Specifies the translucency for client-side shadows. (default .75)
-l left-offset Specifies the left offset for client-side shadows. (default -15)
-t top-offset Specifies the top offset for client-side shadows. (default -15)
-I fade-in-step Specifies the opacity change between steps while fading in. (default 0.028)
-O fade-out-step Specifies the opacity change between steps while fading out. (default 0.03)
-D fade-delta-time Specifies the time between steps in a fade in milliseconds. (default 10)
-a Use automatic server-side compositing. Faster, but no special effects.
-c Draw client-side shadows with fuzzy edges.
-C Avoid drawing shadows on dock/panel windows.
-f Fade windows in/out when opening/closing.
-F Fade windows during opacity changes.
-n Normal client-side compositing with transparency support
-s Draw server-side shadows with sharp edges.
-S Enable synchronous operation (for debugging).
https://docs.maxxinteractive.com/books/customization/page/maxx-compositor
2021-04-11T00:09:35
CC-MAIN-2021-17
1618038060603.10
[]
docs.maxxinteractive.com
Device Encryption Policy Device Encryption allows you to manage BitLocker Drive Encryption on Windows computers and FileVault on Macs. Encrypting hard disks keeps data safe, even when a device is lost or stolen. Go toto manage device encryption. You set up encryption as follows: - The Device Encryption agent is installed on Windows computers automatically when you use the standard Windows agent installer (if you have the required license). You must manually install the Device Encryption agent on Macs. - Create a Device Encryption policy and apply the policy to users as described below. - Computers are encrypted when those users log in.Note FileVault encryption is user-based; every user of an endpoint must have encryption turned on. For full details of how computers are encrypted, see Device Encryption administrator guide. To set up a policy, do as follows: - Create a Device Encryption policy. - Open the policy's Settings tab and configure it as described below. Make sure the policy is turned on. Settings Device Encryption is on/off: A computer is encrypted as soon as one of the users the policy applies to logs in. A Windows endpoint stays encrypted even if a different user who isn't included in the policy logs in. Encrypt boot volume only: This option allows you to encrypt the boot volume only. Data volumes are ignored. Advanced Windows settings Require startup authentication: This option is turned on by default. It enforces authentication via TPM+PIN, passphrase, or USB key. If you turn it off, TPM-only logon protection is installed on supported computers. For more information on authentication methods, see Device Encryption administrator guide. Require new authentication password/PIN from users: This option is turned off by default. It forces a change of the BitLocker password or PIN after the specified time. An event is logged when users change their password or PIN. If users close the dialog without entering a new password or PIN, the dialog is shown again after 30 seconds, until they enter a new one. After users have closed the dialog five times without changing the password or PIN, an alert is logged. Encrypt used space only:This option is turned off by default. It allows you to encrypt used space only instead of encrypting the whole drive. You can use it to make initial encryption (when the policy is first applied to a computer) much faster. Password protect files for secure sharing (Windows only) You can protect files up to 50Mb. Enable right-click context menu: If you turn on this option, a Create password-protected file option is added to the right-click menu of files. Users can attach password-protected files to emails when sending sensitive data to recipients outside your corporate network. Files are wrapped in a new HTML file with encrypted content. Recipients can open the file by double-clicking it and entering the password. They can send the received file back and protect it with the same or a new password, or they can create a new password-protected file. Enable Outlook add-in: This option adds encryption of email attachments to Outlook. Users can protect attachments by selecting Protect Attachments on the Outlook ribbon. All unprotected attachments are wrapped in a new HTML attachment with encrypted content, and the email is sent. Always ask how to proceed with attached files: If you turn on this option, users must choose how to send attachments whenever the message contains one. They can send them password protected or unprotected. 
You can enter excluded domains for which the Always ask how to proceed with attached files option does not apply. For example, your organization's domain. If recipients belong to such a domain, the senders aren't asked how they want to handle attachments. Enter only complete domain names and separate them by commas.
https://docs.sophos.com/central/Customer/help/en-us/central/Customer/concepts/ConfigureDeviceEncryption.html
2021-04-11T01:03:40
CC-MAIN-2021-17
1618038060603.10
[]
docs.sophos.com
mmcif_pdbx¶ This is yet another PyPI package for. It emphasizes a simple and pure Python interface to basic mmCIF functionality. The canonical mmCIF Python package can be found at. It is full-featured and includes C/C++ code to accelerate I/O functions. This package provides the module pdbx. More information about the pdbx module can be found in the API reference section. Origin of this software¶ All of the code in this repository is based on. Specifically, this code is directly derived from linked from. See for more information about this package, including examples. Versions¶ Versions 0.* maintain API compatibility with the original code. Subsequent versions break that compatibility, primarily by renaming methods in compliance with PEP8. Installation¶ This python package can be installed via setuptools, pip install ., or via PyPI. Testing¶ The software can be tested with pytest by running: python -m pytest from the top-level directory.
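The page above doesn't include a usage example, so here is a minimal, hedged sketch. It assumes the module-level pdbx.load() convenience reader that recent releases of this package are understood to provide, and example.cif is a placeholder file name; check the API reference if your installed version exposes different entry points.

import pdbx

# Parse an mmCIF file; pdbx.load() is assumed to return the parsed data
# containers (one per data block in the file).
with open("example.cif") as cif_file:
    containers = pdbx.load(cif_file)

print(f"parsed {len(containers)} data container(s)")
for container in containers:
    print(container)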
https://mmcif-pdbx.readthedocs.io/en/latest/
2021-04-11T01:51:32
CC-MAIN-2021-17
1618038060603.10
[]
mmcif-pdbx.readthedocs.io
SAP on AWS High Availability with Overlay IP Address Routing SAP specialists, Amazon Web Services (AWS) Last updated: June 2020 This guide is part of a content series that provides detailed information about hosting, configuring, and using SAP technologies in the Amazon Web Services (AWS) Cloud. For the other guides in the series, ranging from overviews to advanced topics, see SAP on AWS Technical Documentation Overview This guide provides SAP customers and partners instructions to set up a highly available SAP architecture that uses overlay IP addresses on Amazon Web Services (AWS). This guide includes two configuration approaches: AWS Transit Gateway serves as central hub to facilitate network connection to an overlay IP address. Elastic Load Balancing where a Network Load Balancer enables network access to an overlay IP address. This guide is intended for users who have previous experience installing and operating highly available SAP environments and systems.
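For background on what overlay IP address routing means in practice (the guide itself covers the Transit Gateway and Network Load Balancer approaches in detail), the overlay IP is an address outside the VPC CIDR that a VPC route table points at the currently active SAP node, and cluster software repoints it on failover. The boto3 sketch below only illustrates that repointing idea with placeholder resource IDs; it is not the configuration procedure from the guide.

import boto3

ec2 = boto3.client("ec2", region_name="eu-central-1")

# Placeholders: substitute your own resource IDs.
ROUTE_TABLE_ID = "rtb-0123456789abcdef0"
OVERLAY_IP_CIDR = "192.168.100.10/32"        # overlay IP outside the VPC CIDR
ACTIVE_NODE_ENI = "eni-0123456789abcdef0"    # ENI of the currently active SAP node

# Create the route that points the overlay IP at the active node. During a
# failover, cluster software would repoint it (ec2.replace_route) at the
# standby node's ENI instead.
ec2.create_route(
    RouteTableId=ROUTE_TABLE_ID,
    DestinationCidrBlock=OVERLAY_IP_CIDR,
    NetworkInterfaceId=ACTIVE_NODE_ENI,
)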
https://docs.aws.amazon.com/sap/latest/sap-hana/sap-ha-overlay-ip.html
2021-04-11T02:09:08
CC-MAIN-2021-17
1618038060603.10
[]
docs.aws.amazon.com
. You can change the behavior of your forms by adding frevvo URL parameters to the form URLs found in the various share choices. These frevvo: frevvo as the same user. To utilize this feature, try this example:.
https://docs.frevvo.com/d/display/frevvo90/URL+Parameters
2021-04-11T00:35:05
CC-MAIN-2021-17
1618038060603.10
[]
docs.frevvo.com
Best practices for network connectivity and security in Azure Kubernetes Service (AKS) As you create and manage clusters in Azure Kubernetes Service (AKS), you provide network connectivity for your nodes and applications. These network resources include IP address ranges, load balancers, and ingress controllers. To maintain a high quality of service for your applications, you need to strategize and configure these resources. This best practices article focuses on network connectivity and security for cluster operators. In this article, you learn how to: - Compare the kubenet and Azure Container Networking Interface (CNI) network modes in AKS. - Plan for required IP addressing and connectivity. - Distribute traffic using load balancers, ingress controllers, or a web application firewall (WAF). - Securely connect to cluster nodes. Choose the appropriate network model Best practice guidance Use Azure CNI networking in AKS for integration with existing virtual networks or on-premises networks. This network model allows greater separation of resources and controls in an enterprise environment. Virtual networks provide the basic connectivity for AKS nodes and customers to access your applications. There are two different ways to deploy AKS clusters into virtual networks: Azure CNI networking Deploys into a virtual network and uses the Azure CNI Kubernetes plugin. Pods receive individual IPs that can route to other network services or on-premises resources. Kubenet networking Azure manages the virtual network resources as the cluster is deployed and uses the kubenet Kubernetes plugin. For production deployments, both kubenet and Azure CNI are valid options. CNI Networking Azure CNI is a vendor-neutral protocol that lets the container runtime make requests to a network provider. It assigns IP addresses to pods and nodes, and provides IP address management (IPAM) features as you connect to existing Azure virtual networks. Each node and pod resource receives an IP address in the Azure virtual network - no need for extra routing to communicate with other resources or services. Notably, Azure CNI networking for production allows for separation of control and management of resources. From a security perspective, you often want different teams to manage and secure those resources. With Azure CNI networking, you connect to existing Azure resources, on-premises resources, or other services directly via IP addresses assigned to each pod. When you use Azure CNI networking, the virtual network resource is in a separate resource group to the AKS cluster. Delegate permissions for the AKS cluster identity to access and manage these resources. The cluster identity used by the AKS cluster must have at least Network Contributor permissions on the subnet within your virtual network. If you wish to define a custom role instead of using the built-in Network Contributor role, the following permissions are required: Microsoft.Network/virtualNetworks/subnets/join/action Microsoft.Network/virtualNetworks/subnets/read By default, AKS uses a managed identity for its cluster identity. However, you are able to use a service principal instead. For more information about: - AKS service principal delegation, see Delegate access to other Azure resources. - Managed identities, see Use managed identities. As each node and pod receives its own IP address, plan out the address ranges for the AKS subnets. 
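Before the considerations listed next, it can help to sanity-check subnet sizing. The sketch below is a rough estimate only (see the Azure CNI documentation referenced later for the authoritative guidance): every node and every potential pod consumes an address, plus headroom for an extra node during upgrades or scale-out.

def azure_cni_ip_estimate(nodes: int, max_pods_per_node: int, spare_nodes: int = 1) -> int:
    """Rough estimate of subnet IPs needed with Azure CNI: every node and every
    potential pod gets an address, plus headroom for extra nodes during
    upgrade or scale-out events."""
    total_nodes = nodes + spare_nodes
    return total_nodes + total_nodes * max_pods_per_node

# Example: 50 nodes with 30 pods per node needs roughly 1581 addresses,
# so you would plan a subnet of at least /21.
print(azure_cni_ip_estimate(50, 30))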
Keep in mind: - The subnet must be large enough to provide IP addresses for every node, pods, and network resource that you deploy. - With both kubenet and Azure CNI networking, each node running has default limits to the number of pods. - Each AKS cluster must be placed in its own subnet. - Avoid using IP address ranges that overlap with existing network resources. - Necessary to allow connectivity to on-premises or peered networks in Azure. - To handle scale out events or cluster upgrades, you need extra IP addresses available in the assigned subnet. - This extra address space is especially important if you use Windows Server containers, as those node pools require an upgrade to apply the latest security patches. For more information on Windows Server nodes, see Upgrade a node pool in AKS. To calculate the IP address required, see Configure Azure CNI networking in AKS. When creating a cluster with Azure CNI networking, you specify other address ranges for the cluster, such as the Docker bridge address, DNS service IP, and service address range. In general, make sure these address ranges: - Don't overlap each other. - Don't overlap with any networks associated with the cluster, including any virtual networks, subnets, on-premises and peered networks. For the specific details around limits and sizing for these address ranges, see Configure Azure CNI networking in AKS. Kubenet networking Although kubenet doesn't require you to set up the virtual networks before the cluster is deployed, there are disadvantages to waiting: - Since nodes and pods are placed on different IP subnets, User Defined Routing (UDR) and IP forwarding routes traffic between pods and nodes. This extra routing may reduce network performance. - Connections to existing on-premises networks or peering to other Azure virtual networks can be complex. Since you don't create the virtual network and subnets separately from the AKS cluster, Kubenet is ideal for: - Small development or test workloads. - Simple websites with low traffic. - Lifting and shifting workloads into containers. For most production deployments, you should plan for and use Azure CNI networking. You can also configure your own IP address ranges and virtual networks using kubenet. Like Azure CNI networking, these address ranges shouldn't overlap each other and shouldn't overlap with any networks associated with the cluster (virtual networks, subnets, on-premises and peered networks). For the specific details around limits and sizing for these address ranges, see Use kubenet networking with your own IP address ranges in AKS. Distribute ingress traffic Best practice guidance To distribute HTTP or HTTPS traffic to your applications, use ingress resources and controllers. Compared to an Azure load balancer, ingress controllers provide extra features and can be managed as native Kubernetes resources. While an Azure load balancer can distribute customer traffic to applications in your AKS cluster, it's limited in understanding that traffic. A load balancer resource works at layer 4, and distributes traffic based on protocol or ports. Most web applications using HTTP or HTTPS should use Kubernetes ingress resources and controllers, which work at layer 7. Ingress can distribute traffic based on the URL of the application and handle TLS/SSL termination. Ingress also reduces the number of IP addresses you expose and map. With a load balancer, each application typically needs a public IP address assigned and mapped to the service in the AKS cluster. 
With an ingress resource, a single IP address can distribute traffic to multiple applications. There are two components for ingress:
- An ingress resource
- An ingress controller
Ingress resource
The ingress resource is a YAML manifest of kind: Ingress. It defines the host, certificates, and rules to route traffic to services running in your AKS cluster. The following example YAML manifest would distribute traffic for myapp.com to one of two services, blogservice or storeservice. The customer is directed to one service or the other based on the URL they access.
kind: Ingress
metadata:
  name: myapp-ingress
  annotations:
    kubernetes.io/ingress.class: "PublicIngress"
spec:
  tls:
  - hosts:
    - myapp.com
    secretName: myapp-secret
  rules:
  - host: myapp.com
    http:
      paths:
      - path: /blog
        backend:
          serviceName: blogservice
          servicePort: 80
      - path: /store
        backend:
          serviceName: storeservice
          servicePort: 80
Ingress controller
An ingress controller is a daemon that runs on an AKS node and watches for incoming requests. Traffic is then distributed based on the rules defined in the ingress resource. While the most common ingress controller is based on NGINX, AKS doesn't restrict you to a specific controller. You can use Contour, HAProxy, Traefik, etc. Ingress controllers must be scheduled on a Linux node. Indicate that the resource should run on a Linux-based node using a node selector in your YAML manifest or Helm chart deployment. For more information, see Use node selectors to control where pods are scheduled in AKS.
Note
Windows Server nodes shouldn't run the ingress controller.
There are many scenarios for ingress, including the following how-to guides:
- Create a basic ingress controller with external network connectivity
- Create an ingress controller that uses an internal, private network and IP address
- Create an ingress controller that uses your own TLS certificates
- Create an ingress controller that uses Let's Encrypt to automatically generate TLS certificates with a dynamic public IP address or with a static public IP address
Secure traffic with a web application firewall (WAF)
Best practice guidance
To scan incoming traffic for potential attacks, use a web application firewall (WAF) such as Barracuda WAF for Azure or Azure Application Gateway. These more advanced network resources can also route traffic beyond just HTTP and HTTPS connections or basic TLS termination.
Typically, an ingress controller is a Kubernetes resource in your AKS cluster that distributes traffic to services and applications. The controller runs as a daemon on an AKS node, and consumes some of the node's resources, like CPU, memory, and network bandwidth. In larger environments, you'll want to:
- Offload some of this traffic routing or TLS termination to a network resource outside of the AKS cluster.
- Scan incoming traffic for potential attacks.
For that extra layer of security, a web application firewall (WAF) filters the incoming traffic. With a set of rules, the Open Web Application Security Project (OWASP) watches for attacks like cross-site scripting or cookie poisoning. Azure Application Gateway (currently in preview in AKS) is a WAF that integrates with AKS clusters, locking in these security features before the traffic reaches your AKS cluster and applications. Since other third-party solutions also perform these functions, you can continue to use existing investments or expertise in your preferred product. Load balancer or ingress resources continually run in your AKS cluster and refine the traffic distribution.
App Gateway can be centrally managed as an ingress controller with a resource definition. To get started, create an Application Gateway Ingress controller.
Control traffic flow with network policies
Best practice guidance
Use network policies to allow or deny traffic to pods. By default, all traffic is allowed between pods within a cluster. For improved security, define rules that limit pod communication.
Network policy is a Kubernetes feature available in AKS that lets you control the traffic flow between pods. You allow or deny traffic to the pod based on settings such as assigned labels, namespace, or traffic port. Network policies are a cloud-native way to control the flow of traffic for pods. As pods are dynamically created in an AKS cluster, required network policies can be automatically applied.
To use network policy, enable the feature when you create a new AKS cluster. You can't enable network policy on an existing AKS cluster. Plan ahead to enable network policy on the necessary clusters.
Note
Network policy should only be used for Linux-based nodes and pods in AKS.
You create a network policy as a Kubernetes resource using a YAML manifest. Policies are applied to defined pods, with ingress or egress rules defining traffic flow. The following example applies a network policy to pods with the app: backend label applied to them. The ingress rule only allows traffic from pods with the app: frontend label:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: backend-policy
spec:
  podSelector:
    matchLabels:
      app: backend
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend
To get started with policies, see Secure traffic between pods using network policies in Azure Kubernetes Service (AKS).
Securely connect to nodes through a bastion host
Best practice guidance
Don't expose remote connectivity to your AKS nodes. Create a bastion host, or jump box, in a management virtual network. Use the bastion host to securely route traffic into your AKS cluster to remote management tasks.
You can complete most operations in AKS using the Azure management tools or through the Kubernetes API server. AKS nodes are only available on a private network and aren't connected to the public internet. To connect to nodes and provide maintenance and support, route your connections through a bastion host, or jump box. Verify this host lives in a separate, securely-peered management virtual network to the AKS cluster virtual network. The management network for the bastion host should be secured, too. Use an Azure ExpressRoute or VPN gateway to connect to an on-premises network, and control access using network security groups.
Next steps
This article focused on network connectivity and security. For more information about network basics in Kubernetes, see Network concepts for applications in Azure Kubernetes Service (AKS)
https://docs.microsoft.com/nb-no/azure/aks/operator-best-practices-network
2021-04-11T01:31:44
CC-MAIN-2021-17
1618038060603.10
[array(['media/operator-best-practices-network/advanced-networking-diagram.png', 'Diagram showing two nodes with bridges connecting each to a single Azure VNet'], dtype=object) array(['media/operator-best-practices-network/aks-ingress.png', 'Diagram showing Ingress traffic flow in an AKS cluster'], dtype=object) array(['media/operator-best-practices-network/web-application-firewall-app-gateway.png', 'A web application firewall (WAF) such as Azure App Gateway can protect and distribute traffic for your AKS cluster'], dtype=object) array(['media/operator-best-practices-network/connect-using-bastion-host-simplified.png', 'Connect to AKS nodes using a bastion host, or jump box'], dtype=object) ]
docs.microsoft.com
Pyramid Change History¶ 1.9.3 (Unreleased)¶ - Set appropriate codeand titleattributes on the HTTPClientErrorand HTTPServerErrorexception classes. This prevents inadvertently returning a 520 error code. See 1.9.2 (2018-04-23)¶ - Pin to webob >= 1.7.0instead of 1.7.0rc2to avoid accidentally opting users into pre-releases because a downstream dependency allowed it. See - Fix pyramid.scripting.get_rootwhich was broken by the execution policy feature added in the 1.9 release. See 1.9.1 (2017-07-13)¶ - Add a _depthand _categoryarguments to all of the venusian decorators. The _categoryargument can be used to affect which actions are registered when performing a config.scan(..., category=...)with a specific category. The _depthargument should be used when wrapping the decorator in your own. This change affects pyramid.view.view_config, pyramid.view.exception_view_config, pyramid.view.forbidden_view_config, pyramid.view.notfound_view_config, pyramid.events.subscriberand pyramid.response.response_adapterdecorators. See and - Fix a circular import which made it impossible to import pyramid.viewderiversbefore pyramid.config. See - Improve documentation to show the pyramid.config.Configuratorbeing used as a context manager in more places. See 1.9a1 (2017-05-01)¶ Major Features¶ The file format used by all p*command line scripts such as pserveand pshell, as well as the pyramid.paster.bootstrapfunction is now replaceable thanks to a new dependency on plaster. For now, Pyramid is still shipping with integrated support for the PasteDeploy INI format by depending on the plaster_pastedeploy binding library. This may change in the future. first library to use this feature is pyramid_retry.config¶ - straightfoward. improvments -methodAPInamedtonowand route_remainder_namearguments to request.resource_urlandand locale_nameproperties (reified) to the request. See. Note that the pyramid.i18n.get_localizerand pyramid.i18n.get_locale_namefunctions now simply look up these properties on the request. Add pdistreportscript,and PATCHrequests. See. add support for submitting OPTIONSand PROPFINDrequests, and allow users to specify basic authentication credentials in the request via a --loginargument to the script. See.hasobjectakodue to upstream markupsafedroppingso you can fill in values in parameterized .inifile,in the above example might be ignored), because they usually existed as attributes of the event anyway. You could usually get the same value by doing event.contexticatesubscriberpredicate to expect to receive in its __call__either a single eventargument even. -¶ -mightand pyramid.response.Responsedone within the __init__.pyof. ('text/html' vs. 'text/HTML') will now raise an error. - Forward-port from 1.3 branch: when registering multiple views with an acceptpredicate in a Pyramid application runing under Python 3, you might have received a TypeError: unorderable types: function() < function()exception. Features¶ Python 3.3 compatibility.predicate,has. -feature. - -WS static_path. The pyramid.request.Request.static_urlAPI .now offers more built-in global variables by default (including appmethod - 'n' filter. For example, ${ myhtml | n }. See. -now+. - wild - Using testing.setUpnow registers an ISettings utility as a side effect. Some test code which queries for this utility after testing.setUpvia queryAdapter will expect a return value of None. This code will need to be changed. - 'high 'sub 'TTW' 'rend 'real' WebOb request rather than a FakeRequest when it sets up the request as a threadlocal. 
The repoze.bfg.traversal.traverseAPI now uses a 'real' 'now' 'normal' '.. 'POST'). -['bfg (broken link); 'traversal' module's 'model_path', 'model_path_tuple', and ''s 'repoze.bfg.push:pushpage' decorator, which creates BFG views from callables which take (context, request) and return a mapping of top-level names. - Added ACL-based security. - Support for XSLT templates via a render_transform method
https://pyramid.readthedocs.io/en/latest/changes.html
2018-09-18T15:49:54
CC-MAIN-2018-39
1537267155561.35
[]
pyramid.readthedocs.io
Package bidirule
Overview
Package bidirule implements the Bidi Rule defined by RFC 5893. This package is under development. The API may change without notice and without preserving backward compatibility.
ErrInvalid indicates a label is invalid according to the Bidi Rule.
var ErrInvalid = errors.New("bidirule: failed Bidi Rule")
func Direction ¶
func Direction(b []byte) bidi.Direction
Direction reports the direction of the given label as defined by RFC 5893. The Bidi Rule does not have to be applied to labels of the category LeftToRight.
func DirectionString ¶
func DirectionString(s string) bidi.Direction
DirectionString reports the direction of the given label as defined by RFC 5893. The Bidi Rule does not have to be applied to labels of the category LeftToRight.
func Valid ¶
func Valid(b []byte) bool
Valid reports whether b conforms to the BiDi rule.
func ValidString ¶
func ValidString(s string) bool
ValidString reports whether s conforms to the BiDi rule.
type Transformer ¶
Transformer implements transform.Transform.
type Transformer struct {
    // contains filtered or unexported fields
}
func New ¶
func New() *Transformer
New returns a Transformer that verifies that input adheres to the Bidi Rule.
func (*Transformer) Reset ¶
func (t *Transformer) Reset()
Reset implements transform.Transformer.
func (*Transformer) Span ¶
func (t *Transformer) Span(src []byte, atEOF bool) (n int, err error)
Span returns the first n bytes of src that conform to the Bidi rule.
func (*Transformer) Transform ¶
func (t *Transformer) Transform(dst, src []byte, atEOF bool) (nDst, nSrc int, err error)
Transform implements transform.Transformer. This Transformer has state and needs to be reset between uses.
http://docs.activestate.com/activego/1.8/pkg/golang.org/x/text/secure/bidirule/
2018-09-18T15:40:22
CC-MAIN-2018-39
1537267155561.35
[]
docs.activestate.com
How to configure an inSync client to use Proxy Server settings (v5.4 & above) Summary This article contains instructions for configuring inSync Client to communicate with a proxy server. inSync administrators might find this article useful (v5.4 & above). Introduction During inSync activation, if inSync has to communicate with the inSync Master through a proxy server, you are prompted to provide the proxy server details to inSync. You can configure inSync for the proxy details using one of the following methods: - You can leverage the system proxy server settings used by the web browser on your device. inSync can use the same IP address and port number configured for the proxy server. However, inSync cannot reuse proxy settings if automatic configuration scripts are used for your browser. - You can manually enter the IP address and port number of the proxy server on inSync. - You can upload a PAC file or provide its WPAD URL on inSync. inSync supports both authenticated and unauthenticated access to a proxy server: - If your proxy server requires a unique username and password for authentication, you can enter those details on inSync. - If your organization uses your Active Directory credentials for authentication at the proxy server, inSync automatically facilitates it. To learn more, for 5.4 click here and for 5.4.1 click here.
https://docs.druva.com/?title=Knowledge_Base/inSync/How_To/How_to_configure_an_inSync_client_to_use_Proxy_Server_settings_(v5.4_%26_above)
2018-09-18T16:08:07
CC-MAIN-2018-39
1537267155561.35
[]
docs.druva.com
Analyze data backed up using inSync inSync On-premise 5.4 Administrator Guide On-premise Editions: Private Cloud Enterprise Professional Overview With inSync Data Analytics, you can easily gain insight into the data that inSync users back up. The Data Analytics page displays graphs that provide a high-level view of the data that users back up using inSync. Access the Data Analytics page To access the Data Analytics page - On the inSync Master Management Console menu bar, click Analytics. The Data Analytics page appears. Graphs on the Data Analytics page The following table describes the graphs that appear on the Data Analytics page. inSync updates these graphs every ten minutes.
https://docs.druva.com/010_002_inSync_On-premise/020_5.4/040_Monitor/Analytics/010_inSync_Analytics/010_Analyze_data_backed_up_using_inSync
2018-09-18T15:34:15
CC-MAIN-2018-39
1537267155561.35
[array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'tick.png'], dtype=object) array(['https://docs.druva.com/@api/deki/files/3644/tick.png?revision=2', 'tick.png'], dtype=object) ]
docs.druva.com
Before starting, make sure you know how to create the following: - Domain models (for more information, see How to Create a Basic Data Layer) - Overview and detail pages (for more information, see How to Create Your First Two Overview and Detail Pages) - Menu items (for more information, see How to Set Up the Navigation Structure) Create the following domain model: Create overview and detail pages to manage objects of type Customer and Order. Create menu items to access the Order and the Customer overview pages. Add the following customer data to your app: Add the following order data to your app: Add a search field by selecting Add search in the Properties pane on the right, change Data source > Type to XPath, and then click the XPath Constraint field: Enter the following expression in the XPath Constraint editor: In the Properties pane on the right, click the XPath Constraint field. To see only the Open orders or orders with a minimum price of 50.00, you have to insert an or statement in the XPath constraint: [OrderStatus = 'Open'] or [TotalPrice >= 50]. Run your application to see all the orders with the order status 'Open' or with a total price higher than or equal to 50. To constrain the results in the order overview to only the Open orders with a minimum price of 50.00, you have to insert an and statement in the XPath constraint: [OrderStatus = 'Open'] and [TotalPrice >= 50]. To only see orders from customers in Rotterdam, enter the following XPath into the XPath Constraint editor: [Sales.Order_Customer/Sales.Customer/City = 'Rotterdam']. Run your application to only see the orders of customers in Rotterdam. 9 Related Content - How to Set Up the Mendix UI Framework with Scout - How to Set Up the Mendix UI Framework with Koala - How to Set Up the Mendix UI Framework with Just CSS - How to Perform the Scout and Windows 10 Workaround - How to Use Layouts and Snippets - How to Set Up the Navigation Structure - How to Create Your First Two Overview and Detail Pages - How to Find the Root Cause of Runtime Errors - XPath Constraints
https://docs.mendix.com/howto/ux/filtering-data-on-an-overview-page
2018-09-18T16:03:50
CC-MAIN-2018-39
1537267155561.35
[]
docs.mendix.com
AWS::EC2::NetworkAcl
Specifies a network ACL for your VPC.
Syntax
To declare this entity in your AWS CloudFormation template, use the following syntax:
Properties
Tags
An arbitrary set of tags (key–value pairs) for this ACL.
Required: No
Update requires: No interruption
VpcId
The ID of the VPC for the network ACL.
Required: Yes
Type: String
Update requires: Replacement
Return Values
Ref
When you pass the logical ID of this resource to the intrinsic Ref function, Ref returns the resource name. For more information about using the Ref function, see Ref.
Examples
Network ACL
The following example creates a Network ACL in a VPC.
JSON
"myNetworkAcl" : {
   "Type" : "AWS::EC2::NetworkAcl",
   "Properties" : {
      "VpcId" : { "Ref" : "myVPC" },
      "Tags" : [ { "Key" : "foo", "Value" : "bar" } ]
   }
}
YAML
myNetworkAcl:
  Type: AWS::EC2::NetworkAcl
  Properties:
    VpcId:
      Ref: myVPC
    Tags:
    - Key: foo
      Value: bar
See Also
CreateNetworkAcl in the Amazon EC2 API Reference
Network ACLs in the Amazon Virtual Private Cloud User Guide
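The See Also section points at the underlying CreateNetworkAcl API; for comparison, here is a small boto3 sketch that creates the same resource imperatively. The VPC ID and tag values are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Placeholder VPC ID: substitute your own.
vpc_id = "vpc-0123456789abcdef0"

# Create the network ACL, then tag it like the CloudFormation example above.
acl = ec2.create_network_acl(VpcId=vpc_id)
acl_id = acl["NetworkAcl"]["NetworkAclId"]
ec2.create_tags(Resources=[acl_id], Tags=[{"Key": "foo", "Value": "bar"}])
print("created", acl_id)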
https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-resource-ec2-network-acl.html
2019-09-15T08:03:59
CC-MAIN-2019-39
1568514570830.42
[]
docs.aws.amazon.com
Onegini Token Server An introduction The Onegini Token Server is a complete solution for managing your customer’s authorizations. It provides a comprehensive security Token Server that integrates with enterprise Identity and Access Management systems based on the latest Web and API security standards such as OAuth 2.0. Luckily Onegini is here to help you out. Token Server main components The Onegini Token Server consists of two applications: - Token Server Admin: a web application to configure the Token Server, check its statistics and the activity of its clients and users. - Token Server Engine: the heart of the Token Server. All the interactions with clients and external components are performed in this application. How the documentation is organized The Token Server has quite a lot of documentation. A high-level overview of how it’s organized will help you know where to look for certain things: - Quick start section guides you through the steps to install the Token Server and helps you to create the setup for the Onegini Example App. - Configuration describes all configuration properties to customise your Token Server installation. - API reference describes all APIs exposed by the Token Server that are available for your developers. - Topic guides describes key topics and concepts at a fairly high level to provide background information and explanation.
https://docs.onegini.com/msp/token-server/10.2.0/index.html
2019-09-15T08:28:53
CC-MAIN-2019-39
1568514570830.42
[array(['images/logo.png', None], dtype=object) array(['images/introduction.png', None], dtype=object)]
docs.onegini.com
Router Advertisements¶ radvd (the service responsible for this functionality). Router Advertisements (Mode)¶ The mode selection contains some predefined settings for radvd, which influence a set of configuration options and are intended for specific implementation scenarios. They define the type of client deployment used in your network. A detailed overview of the radvd settings determined by the mode can be found below:
https://docs.opnsense.org/manual/radvd.html
2019-09-15T08:01:09
CC-MAIN-2019-39
1568514570830.42
[]
docs.opnsense.org
Use Access Control (ACL) on a field to secure it. Note: This example shows how to secure the Ethnicity field on an HR Profile form so users with the HR Manager [hr_manager] role or below cannot view it.
Procedure
- Navigate to HR Profile.
- Elevate your role to add security_admin. Select your login name and select Elevate Roles. Check security_admin. Select OK.
- Select an HR profile record to edit. It can be any record.
- Right-click the Ethnicity field and select Configure Security. The Security Mechanic menu appears.
- Change the Operation to secure field to read.
- Move a role that you want to view the field to the Selected column. For example, move sn_hr_core.admin. By selecting read and adding sn_hr_core.admin, any role below this role cannot read the field. For example, sn_hr_core.manager is below this role and cannot view this field after completing the steps.
- Click OK.
- Because HRSM is a scoped application, you have to ensure that you are in the scoped version within the form. Click the Settings icon. Select Developer under System Settings. Change Application to Human Resources: Core. Close the window.
Related topics: Access control list rules
https://docs.servicenow.com/bundle/madrid-hr-service-delivery/page/product/human-resources/task/FieldSecurity.html
2019-09-15T08:44:35
CC-MAIN-2019-39
1568514570830.42
[]
docs.servicenow.com
Removing Contour Strokes
The Remove Contour Stroke option is used to remove any permanently invisible lines that were created while scanning and vectorizing drawings or manually adding contour strokes. This is useful if you want to remove the intersection triangles created during vectorization.
How to remove contour strokes
- In the Tools toolbar, select the Select tool.
- In the Camera or Drawing view, use the Select tool to select the drawing objects you want to remove contour strokes for.
- From the top menu, select Drawing > Optimize > Remove Contour Strokes.
https://docs.toonboom.com/help/harmony-16/advanced/drawing/remove-contour-stroke.html
2019-09-15T07:34:14
CC-MAIN-2019-39
1568514570830.42
[]
docs.toonboom.com
Centrify Zero Trust Privilege Services Documentation: Authentication Service, Privilege Elevation Service, and Audit & Monitoring Service This page references all documentation for releases of Centrify Zero Trust Privilege Services, Centrify Infrastructure Services, and Centrify Server Suite, plus application notes that show you how to set up Active Directory authentication for third-party applications. Check the release notes for information about what’s included in a specific release, that release’s system requirements and supported platforms, and any additional information that may not be included in the accompanying documentation. The upgrade guides describe the steps necessary to successfully upgrade to a specific version, with particular regard to those computers running multiple Centrify packages.
https://docs.centrify.com/en/css/
2019-09-15T08:20:36
CC-MAIN-2019-39
1568514570830.42
[]
docs.centrify.com
chainer.functions.connectionist_temporal_classification¶ chainer.functions. connectionist_temporal_classification(x, t, blank_symbol, input_length=None, label_length=None, reduce='mean')[source]¶ Connectionist Temporal Classification loss function. Connectionist Temporal Classification(CTC) [Graves2006] is a loss function of sequence labeling where the alignment between the inputs and target is unknown. See also [Graves2012] The output is a variable whose value depends on the value of the option reduce. If it is 'no', it holds the samplewise loss values. If it is 'mean', it takes the mean of loss values. - Parameters x (list or tuple of Variable) – A list of unnormalized probabilities for labels. Each element of x, x[i]is a Variableobject, which has shape (B, V), where Bis the batch size and Vis the number of labels. The softmax of x[i]represents the probabilities of the labels at time i. t ( Variableor N-dimensional array) – A matrix including expected label sequences. Its shape is (B, M), where Bis the batch size and Mis the maximum length of the label sequences. All elements in tmust be less than V, the number of labels. blank_symbol (int) – Index of blank_symbol. This value must be non-negative. input_length ( Variableor N-dimensional array) – Length of sequence for each of mini batch x(optional). Its shape must be (B,). If the input_lengthis omitted or None, it assumes that all of xis valid input. label_length ( Variableor N-dimensional array) – Length of sequence for each of mini batch t(optional). Its shape must be (B,). If the label_lengthis omitted or None, it assumes that all of tis valid input. reduce (str) – Reduction option. Its value must be either 'mean'or 'no'. Otherwise, ValueErroris raised. - Returns A variable holding a scalar value of the CTC loss. If reduceis 'no', the output variable holds array whose shape is (B,) where B is the number of samples. If it is 'mean', it holds a scalar. - Return type - Note You need to input xwithout applying to activation functions(e.g. softmax function), because this function applies softmax functions to xbefore calculating CTC loss to avoid numerical limitations. You also need to apply softmax function to forwarded values before you decode it. Note This function is differentiable only by x. Note This function supports (batch, sequence, 1-dimensional input)-data. - Graves2006 Alex Graves, Santiago Fernandez, Faustino Gomez, Jurgen Schmidhuber, Connectionist Temporal Classification: Labelling Unsegmented Sequence Data with Recurrent Neural Networks - Graves2012 Alex Graves, Supervised Sequence Labelling with Recurrent Neural Networks
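A minimal usage sketch consistent with the shapes described above: random unnormalized scores, two label sequences, and label 0 reserved as the blank symbol. The values are arbitrary and only meant to show the calling convention.

import numpy as np
import chainer
import chainer.functions as F

B, T, V = 2, 7, 5          # batch size, time steps, number of labels incl. blank
M = 3                      # maximum label sequence length

# Unnormalized scores: a list of T Variables, each of shape (B, V).
xs = [chainer.Variable(np.random.randn(B, V).astype(np.float32)) for _ in range(T)]

# Expected label sequences, shape (B, M); all entries must be < V.
t = np.array([[1, 2, 3],
              [4, 1, 1]], dtype=np.int32)

loss = F.connectionist_temporal_classification(xs, t, blank_symbol=0)
print(loss.array)          # scalar, since reduce='mean' by default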
https://docs.chainer.org/en/stable/reference/generated/chainer.functions.connectionist_temporal_classification.html
2019-09-15T08:35:03
CC-MAIN-2019-39
1568514570830.42
[]
docs.chainer.org
Install and configure Note: Use the following sequence to set up your Workspace Environment Management (WEM) service deployment. Review the entire process before starting the deployment, so you know what to expect. Links are provided to product documentation and videos. If you are not familiar with the components and terminology used in a WEM service deployment, see Workspace Environment Management service. Get started by signing up for a Citrix account and requesting a WEM service trial. Set up resource locations and install Cloud ConnectorsSet up resource locations and install Cloud Connectors Resource locations contain infrastructure servers (such as Active Directory and Citrix Cloud Connectors), and the machines that deliver apps and desktops to users. Before you install the WEM agent, you must set up resource locations and install at least one Citrix Cloud Connector in each. For high availability, Citrix recommends that you install two Cloud Connectors in each resource location. See Resource locations and Cloud Connector Installation. Install and configure the WEM agentInstall and configure the WEM agent Note: To access resources published in Citrix StoreFront stores as application shortcuts from the administration console, ensure that Citrix Workspace app for Windows is installed on the agent host machine. For more information, see System requirements. Step 1: Join agent host machines to AD Agent host machines must belong to the same AD domain as the configured Cloud Connectors. Ensure that the agent host machines in each resource location are joined correctly. Step 2: Download the agent Download the WEM agent package (Citrix-Workspace-Environment-Management-Agent-Setup.zip) from the WEM service Downloads tab and save a copy on each agent host. Step 3: Configure group policies (optional) Optionally, you can choose to configure the group policies. The Citrix Workspace Environment Management Agent Host Configuration.admx administrative template, provided in the agent package, adds the Agent Host Configuration policy. Use the Group Policy Management Editor to configure a GPO with the following settings: Infrastructure server. Not required for WEM service. Leave state “Not configured.” Agent service port. Not required for WEM service. Leave state “Not configured.” Cache synchronization port. Not required for WEM service. Leave state “Not configured.” Citrix Cloud Connectors. Configure at least one Citrix Cloud Connector. Agent host machines must be in the same AD domain as the configured Cloud Connector machines.. For example,. VUEMAppCmd extra sync delay. Specifies how long the agent application launcher (VUEMAppCmd.exe) waits before Citrix Virtual Apps and Desktops published resources are started. This ensures that the necessary agent work completes first. The default value is 0. Step 4: Install the agent The agent setup program Citrix Workspace Environment Management Agent Setup Setup.exe on your machine. - Select “I agree to the license terms and conditions” and then click Install. On the Welcome page, click Next. Note: The Welcome page can take some time to appear. This happens when the required software is missing and is being installed in the background. On the Destination Folder page, click Next. - By default, the destination folder field is automatically populated with the default folder path. If you want to install the agent to another folder, click Change to navigate to the folder and then click Next. - If. - Configure Citrix Cloud Connectors. 
Lets you configure the Citrix Cloud Connectors by typing a comma-separated list of FQDNs or IP addresses of the Cloud Connectors. Note: Type the FQDN or IP address of each Citrix Cloud Connector. Make sure to separate the FQDNs or IP addresses with commas (,).
On the Advanced Settings page, configure advanced settings for the agent and then click Next.
- Alternative Cache Location (Optional). Lets you specify an alternative location for the agent cache. Click Browse to navigate to the applicable folder.
- VUEMAppCmd Extra Sync Delay (Optional). Lets you specify how long the agent application launcher (VUEMAppCmd.exe) waits before published resources are started.
Step 5: Build the agent service cache (optional)
By default, the agent service cache is built the first time the agent runs. You can choose to build the agent service cache before the agent runs. This is useful if you want to build an image that includes the WEM agent host as pre-installed software. To build or rebuild the agent service cache, run AgentCacheUtility.exe in the agent installation folder from the command line. The executable accepts the following command-line arguments:
- -help: displays a list of allowed arguments
- -refreshcache or -r: triggers a cache build or refresh
Step 6: Restart the machine to complete the installation
Good to know
The agent executable accepts custom arguments as described below.
Agent settings
See below for the WEM agent settings.
AgentLocation. Lets you specify the agent installation location. You must.
GpNetworkStartTimeoutPolicyValue. Lets you configure the value, in seconds, of the GpNetworkStartTimeoutPolicyValue registry key created during installation. This argument specifies how long Group Policy waits for network availability notifications during policy processing on logon. The argument accepts any whole number in the range of 1 (minimum) to 600 (maximum). By default, this value is 120.
SyncForegroundPolicy. Lets you configure whether the SyncForegroundPolicy registry key created during installation is active. This argument configures the agent host to wait for a complete network initialization before allowing a user to log on. Accepted values: 0, 1. If not specified, the key is not created during installation.
WaitForNetwork. Lets you configure the value, in seconds, of the WaitForNetwork registry key created during installation. This argument specifies how long the agent host waits for the network to be completely initialized and available. The argument accepts any whole number in the range of 0 (minimum) to 300 (maximum). By default, this value is 30. for it to have an impact.
ServicesPipeTimeout. Lets you configure the value of the ServicesPipeTimeout registry key. The key is created during installation under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control. This registry key adds a delay before the service control manager is allowed to report on the state of the WEM agent service. The delay prevents the agent from failing by keeping the agent service from launching before the network is initialized. This argument accepts any value, in milliseconds. If not specified, a default value of 60000 (60 seconds) is used.
Note: If the settings above are not configured using the command line, they are not processed by the WEM agent installer during installation.
Examples
You can also configure the settings using the following command-line format:
- citrix_wem_agent_bundle.exe <key=value>
For example:
- Specify the agent installation location and Citrix Cloud Connectors:
  citrix_wem_agent_bundle.exe /quiet AgentLocation="L:\WEM Agent" Cloud=1 CloudConnectorList=cc1.qa.local,cc2.qa.local
- Set "user logon network wait time" to 60 seconds:
  citrix_wem_agent_bundle.exe WaitForNetwork=60
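As a further illustration, you could pre-build the agent service cache from an elevated command prompt after a silent install, which is useful when preparing a master image. The installation path below is only an assumption (the usual default on a 64-bit machine); adjust it to the destination folder you chose during setup:
- cd "C:\Program Files (x86)\Citrix\Workspace Environment Management Agent"
- AgentCacheUtility.exe -refreshcache
The -refreshcache switch is the documented argument that triggers a cache build or refresh, so running it once before sealing the image means the agent does not have to build the cache on first boot.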
https://docs.citrix.com/en-us/workspace-environment-management/service/install-and-configure.html
2019-09-15T08:36:38
CC-MAIN-2019-39
1568514570830.42
[array(['/en-us/workspace-environment-management/service/media/wem-group-policy-man-editor2.png', 'Group Policy management Editor'], dtype=object) ]
docs.citrix.com
ObjectFrame.OLETypeAllowed property (Access) You can use the OLETypeAllowed property to specify the type of OLE object that a control can contain. Read/write Byte. Syntax expression.OLETypeAllowed expression A variable that represents an ObjectFrame object. Remarks The OLETypeAllowed property uses the following settings. Note For unbound object frames and charts, you can't change the OLETypeAllowed setting after an object is created. For bound object frames, you can change the setting after the object is created. Changing the OLETypeAllowed property setting only affects new objects that you add to the control. To determine the type of OLE object a control already contains, you can use the OLEType property. Example The following example creates a linked OLE object by using an unbound object frame named OLE1, and sizes the control to display the object's entire contents when the user chooses Support and feedback Have questions or feedback about Office VBA or this documentation? Please see Office VBA support and feedback for guidance about the ways you can receive support and provide feedback.
https://docs.microsoft.com/en-us/office/vba/api/access.objectframe.oletypeallowed
2019-09-15T08:57:37
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
New Business Intelligence Development Studio Business Intelligence Development Studio is a new project development and management tool for business intelligence solution developers. You can use BI Development Studio to design end-to-end business intelligence solutions that integrate projects from Microsoft SQL Server 2005 Analysis Services (SSAS), Microsoft SQL Server 2005 Integration Services (SSIS), and Microsoft SQL Server 2005 Reporting Services (SSRS). Fully integrated with the Microsoft Visual Studio 2005 development environment, BI Development Studio hosts the designers, wizards, browsers, and development dialog boxes for those components in the same shell. For example, in BI Development Studio you can examine and analyze data sources and define cubes and mining models by using Analysis Services; create extraction, transformation, and loading (ETL) packages by using Integration Services; design reports by using Reporting Services; and then deploy the whole solution to a test or production environment. This topic provides information about the principal features of BI Development Studio. Integrated Development BI Development Studio integrates all the functionality that was previously available in Analysis Manager and the development and management environment used in earlier versions of Analysis Services. It also adds many new capabilities within a single, configurable development environment. The following table describes some of the features of BI Development Studio. - Solution Explorer Solution Explorer provides an organized view of your projects and files, and also ready access to the commands that pertain to them. A toolbar that is associated with this window offers frequently used commands for the item you highlight in the list. - Designers and code windows BI Development Studio lets you view and edit object definitions, such as cubes, mining models, packages, and reports, either through a graphical user interface, called a designer, or directly by editing the XML-based code that defines the object. - Nonmodal dialog boxes You can now access multiple resources at the same time in BI Development Studio. For example, if you are using Report Designer to develop a report in a Reporting Services project that is based on a cube in an Analysis Services project, you can display the properties of the cube without closing Report Designer. (Report Designer is the designer for Reporting Services for developing and visualizing reports.) - Configurability You can customize BI Development Studio to match your working habits, environment, and preferences, including code editing preferences, deployment settings, and profiles. - Extensibility BI Development Studio is designed to be fully extensible, and allows you to programmatically extend the development environment to meet your needs. Enhanced Project Management In earlier versions of SQL Server Analysis Services and SQL Server Integration Services, you modified the data and metadata of objects directly only during the design process. However, now you can use the project-based development in BI Development Studio to revise, adjust, and browse objects in a production environment without fear of causing damage. You can use Solution Explorer, designed for business intelligence developers who create and manage applications, to organize object definitions, such as data source views, and other files into projects to form a solution. 
If you have created applications by using Microsoft Visual Studio .NET, you will find Solution Explorer very familiar. The project functionality is used by Analysis Services, Integration Services, and Reporting Services. Source control is also fully integrated into BI Development Studio, so you can manage the versioning of your business intelligence solutions from within the development environment. Enhanced Deployment With BI Development Studio, you can deploy individual business intelligence solutions to multiple environments, either completely or incrementally as needed, and each deployment is configured as part of an overall solution. Enhanced Project Templates BI Development Studio includes templates for Analysis Services, Integration Services, and Reporting Services projects. It also provides functionality so that you can define custom templates for all the projects that BI Development Studio supports. See Also Other Resources Introducing Business Intelligence Development Studio Help and Information Getting SQL Server 2005 Assistance
https://docs.microsoft.com/en-us/previous-versions/sql/sql-server-2005/ms170338(v%3Dsql.90)
2019-09-15T08:34:31
CC-MAIN-2019-39
1568514570830.42
[]
docs.microsoft.com
Using external authentication with Keystone
When Keystone is executed in a web server like Apache HTTPD, it is possible to have the web server also handle authentication. This enables support for additional methods of authentication that are not provided by the identity store backend and the authentication plugins that Keystone supports. Having the web server handle authentication is not exclusive, and both Keystone and the web server can provide different methods of authentication at the same time. For example, the web server can provide support for X.509 or Kerberos authentication, while Keystone provides support for password authentication (with SQL or an identity store as the backend). When the web server authenticates a user, it sets environment variables, usually REMOTE_USER, which can be used in the underlying application. Keystone can be configured to use these environment variables to determine the identity of the user.
Configuration
In order to activate the external authentication mechanism for Identity API v3, the external method must be in the list of enabled authentication methods. By default it is enabled, so if you don't want to use external authentication, remove it from the methods option in the auth section. To configure the plugin that should be used, set the external option again in the auth section. There are two external authentication method plugins provided by Keystone:
DefaultDomain: This plugin won't take into account the domain information that the external authentication method may pass down to Keystone and will always use the configured default domain. The REMOTE_USER variable is the username. This is the default if no plugin is given.
Domain: This plugin expects that the REMOTE_DOMAIN variable contains the domain for the user. If this variable is not present, the configured default domain will be used. The REMOTE_USER variable is the username.
Caution
You should disable the external auth method if you are currently using federation. External auth and federation both use the REMOTE_USER variable. Since both the mapped and external plugin are being invoked to validate attributes in the request environment, it can cause conflicts. For example, imagine there are two distinct users with the same username foo, one in the Default domain while the other is in the BAR domain. The external Federation modules (i.e. mod_shib) set the REMOTE_USER attribute to foo. The external auth module also tries to set the REMOTE_USER attribute to foo for the Default domain. The federated mapping engine maps the incoming identity to foo in the BAR domain. This results in a user_id conflict since both are using different user_ids to set foo in the Default domain and the BAR domain. To disable this, simply remove external from the methods option in keystone.conf:
methods = external,password,token,oauth1
Using HTTPD authentication
Web servers like Apache HTTP support many methods of authentication. Keystone can take advantage of this and let authentication be handled by the web server, which then passes the authenticated user down to Keystone using the REMOTE_USER environment variable. This user must exist in advance in the identity backend to get a token from the controller. To use this method, Keystone should be running on HTTPD.
X.509 example
The following snippet for the Apache conf will authenticate the user based on an X.509 client certificate:
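A minimal sketch of such a virtual host is shown below. It assumes Keystone is served on port 5000 and that the certificate and CA paths are placeholders you would replace for your deployment; treat it as an illustration of the mod_ssl directives involved rather than the exact snippet from the original page:
<VirtualHost _default_:5000>
    SSLEngine               on
    SSLCertificateFile      /etc/ssl/certs/ssl.cert
    SSLCertificateKeyFile   /etc/ssl/private/ssl.key
    SSLCACertificatePath    /etc/ssl/allowed_cas
    SSLCARevocationPath     /etc/ssl/allowed_cas
    SSLUserName             SSL_CLIENT_S_DN_CN
    SSLVerifyClient         require
    SSLVerifyDepth          10
    (...)
</VirtualHost>
With SSLVerifyClient set to require, mod_ssl rejects clients that cannot present a certificate signed by one of the allowed CAs, and SSLUserName copies the certificate's common name into REMOTE_USER, which is exactly the variable the external authentication plugin reads.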
https://docs.openstack.org/keystone/latest/admin/external-authentication.html
2019-09-15T08:17:10
CC-MAIN-2019-39
1568514570830.42
[]
docs.openstack.org
NAT GW Load-balance with AZ affinity¶ Pre-4.2 Behavior¶ In VPC with multiple private route tables (ex: rtb1, rtb2, rtb3), and multiple NAT gateways (ex: gw1, gw2), the default route in the route tables points to the NAT gateway round-robbinly. - rtb1 –> gw1 - rtb2 –> gw2 - rtb3 –> gw1 4.2 Behavior¶ In 4.2, AZ is considered when assign route tables to gateways. Route table is assigned to same AZ gateway first, if no same AZ gateway, it will assign to other AZ gateway. Before program the default route, all route tables are grouped according to AZ. Note - AWS route table has no AZ attribute, we use the first associated subnet to decide the route table AZ. It is user’s responsibility to make sure route table associated subnets belong to the same AZ. - If route table doesn’t have any associated subnets, it is considered to be first AZ (i.e, us-east-1a for example). Case study¶ We use the an example to show how it works. Suppose we have the following setting: - AZ1: rtb1, rtb2, rtb3, gw1, gw2 - AZ2: rtb4, no gateway - AZ3: no rtb, gw3 Round 1: Within Single AZ - AZ1: 3 rtbs are programed with 2 gateway round-robbinly. * rtb1 –> gw1 * rtb2 –> gw2 * rtb3 –> gw1 - AZ2: no gateway, rtb4 is added to a pending list - AZ3: no rtb, nothing to do After 1st round, gw1 has 2 rtbs, gw2 has 1 rtbs, gw3 has 0 rtbs. There is 1 rtb in pending list. If the pending list is empty, meaning all route tables are programed to its same AZ gateway. Round 2 is skipped. Round 2: Cross AZ In this example, pending list has rtb4. We sort the gateways according to number of route tables it’s already assigned to, get a list of all available gateways: [gw3 (0), gw2 (1), gw1 (2)] In this round, we work on route table in the pending list with the sorted list of gateways round-robbinly. - rtb4 –> gw3 Finally, all route table has default route configured to one of the NAT gateways. - rtb1 –> gw1, same AZ - rtb2 –> gw2, same AZ - rtb3 –> gw1, same AZ - rtb4 –> gw3, cross AZ
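The two-round assignment described above can be expressed as a short Python sketch. This is only an illustration of the documented behavior, not Aviatrix code; route tables and gateways are represented as simple (name, AZ) pairs, and the rule that subnet-less route tables fall back to the first AZ is omitted for brevity:
from collections import defaultdict

def assign_route_tables(route_tables, gateways):
    """route_tables / gateways: lists of (name, az) tuples.
    Returns {route_table_name: gateway_name}."""
    by_az = defaultdict(list)
    for gw, az in gateways:
        by_az[az].append(gw)

    assignment = {}
    load = {gw: 0 for gw, _ in gateways}   # route tables already served per gateway
    pending = []

    # Round 1: assign each route table to a same-AZ gateway, round-robin.
    rr = defaultdict(int)
    for rtb, az in route_tables:
        same_az_gws = by_az.get(az)
        if not same_az_gws:
            pending.append(rtb)            # no same-AZ gateway; defer to round 2
            continue
        gw = same_az_gws[rr[az] % len(same_az_gws)]
        rr[az] += 1
        assignment[rtb] = gw
        load[gw] += 1

    # Round 2: assign leftovers across all gateways, least-loaded first.
    if pending:
        ordered = sorted(load, key=load.get)
        for i, rtb in enumerate(pending):
            gw = ordered[i % len(ordered)]
            assignment[rtb] = gw
            load[gw] += 1
    return assignment

# Reproduces the case study: rtb1-3 in AZ1, rtb4 in AZ2; gw1/gw2 in AZ1, gw3 in AZ3.
print(assign_route_tables(
    [("rtb1", "az1"), ("rtb2", "az1"), ("rtb3", "az1"), ("rtb4", "az2")],
    [("gw1", "az1"), ("gw2", "az1"), ("gw3", "az3")]))
Running the example prints rtb1 and rtb3 mapped to gw1, rtb2 to gw2, and rtb4 to gw3, matching the result table in the case study.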
https://docs.aviatrix.com/HowTos/nat_gw_LoadBalance_AZ.html
2021-07-23T23:04:09
CC-MAIN-2021-31
1627046150067.51
[]
docs.aviatrix.com
Most database tables have a column that is a "primary key" column. The values in this column uniquely identify the row in the table, most commonly by means of an integer value. When rows are inserted into the table these values are automatically generated. The values themselves should be thought of as being meaningless and are just used to identify the row in the database. To be used in IJC a table must have such a column that can be used to uniquely identify each row in the table. Typically this is the primary key column and IJC will use this as the ID field for the Entity that uses the table. Prior to IJC 5.3 IJC could only use tables with integer values for this column, but since 5.3 it now has some support for text based columns.
IJC uses a "value generator" that defines how the values for the ID field (primary key column) are generated. Different table types need different types of value generator, and the type is dependent on the type of database. The value generator is defined when the entity is created or promoted, and from that point onwards the way the IDs are generated is transparent to the IJC user. IJC currently supports these types of value generator. Other types may be added in future.
- Autoincrement
- Sequence
- JChem standard
- GUID
- None
- ROWID
Currently IJC only supports tables that have a single column as their primary key. Composite primary keys are not yet supported. Currently IJC can only create tables that use integer based primary keys. Tables with text based primary keys must be created externally and then "promoted" into IJC.
Integer primary keys only support a limited number of mechanisms for ID generation:
- Autoincrement (MySQL)
- Identity (Derby)
- Sequence (Oracle)
Use of these is transparent to the IJC user.
Text primary keys only support a limited number of mechanisms for ID generation:
- 32 character GUID (all databases)
Use is transparent to the IJC user.
If you have a table which uses an alternative mechanism for generating values for the primary key column (e.g. trigger based approaches) then you cannot currently use this mechanism in IJC (we plan to add this in future). However, you can specify that you want to use no value generator when you promote the table, and this will allow you to use the table but not insert any new rows into it. The table must still have a column that can be used as the ID field (e.g. a primary key column) as described above.
When promoting a database view there is no primary key column that can be directly determined from the database, but IJC still needs a column that can be used for the ID field. As the column cannot be determined automatically, the user must specify a suitable column. Again this column is subject to the restrictions described above, and of course it must contain values that uniquely identify each row in the view.
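For example, a MySQL table with an integer primary key that the Autoincrement value generator could use might be declared as follows; the table and column names are purely illustrative, not a schema shipped with IJC:
CREATE TABLE compound (
    id   INT          NOT NULL AUTO_INCREMENT,
    name VARCHAR(255),
    PRIMARY KEY (id)
);
A text-keyed table created externally for promotion would instead declare something like id CHAR(32) NOT NULL PRIMARY KEY, with the 32-character GUID values supplied by the GUID value generator at insert time.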
https://docs.chemaxon.com/display/lts-helium/about-primary-keys.md
2021-07-23T23:37:23
CC-MAIN-2021-31
1627046150067.51
[]
docs.chemaxon.com
Writing queries against your data
The workspace is also where you interact with your data by writing queries in either SQL or SPARQL. If you're unfamiliar with one or both of them, see our documentation for data.world's SQL dialect, SQL tutorial, or SPARQL tutorial for more information.
To write a query against your data, click on + Add in the upper left of your workspace and select SQL Query:
The query editor will open in the middle of your window with the cursor already in place, and a list of the tables and columns in your dataset will be on the right. Write your query in the editor and then select Run query to see the results:
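For instance, a first query against a hypothetical table named sales could look like the following; replace the table and column names with ones from the list on the right side of your workspace:
SELECT customer, SUM(amount) AS total_amount
FROM sales
GROUP BY customer
ORDER BY total_amount DESC
LIMIT 10;
Starting with an aggregate plus a LIMIT like this keeps the result set small while you confirm that the column names and types are what you expect.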
https://docs.data.world/en/55156-55163-7--Writing-queries-against-your-data.html
2021-07-23T23:07:28
CC-MAIN-2021-31
1627046150067.51
[array(['image/uuid-067b06ca-e971-4474-fd16-8701a4d26197.png', 'Add_SQL_query.png'], dtype=object) array(['image/uuid-09cce8af-19d2-9f7a-1fdf-e35fa19cea5f.png', 'Screen_Shot_2020-03-23_at_5.16.25_PM.png'], dtype=object)]
docs.data.world
Another integration option is a manual upload of product file/feed generated by Koongo, it might be useful in some cases. Download the Amazon channel file Once you have set up the feed, download the channel file - Koongo as a Service - URL link - Koongo Connector for Magento - URL link - Koongo Connector for Magento 2 - URL link Upload channel file to Amazon Select Add Products via Upload Select Inventory → Add Products via Upload and "Need to create a product file?" → Continue Select File type Select File type Inventory Files for non-Media Categories and insert your channel file. Click Upload. Check the Upload status report Fixing error - update product data You can check and update your product data directly in your Amazon account. It allows you to fix the single product errors. Select Add Products via Upload Select Inventory → Add Products via Upload and Need to create a product file? → Continue Select Monitor Upload Status Select Monitor Upload Status. You get the list of submitted feeds. Select the latest submitted version of the Amazon products feed and click Complete Drafts. Select Edit draft Select specific product and click Edit draft. If there are no products listed, please clear the Filter field. Fix the issue Check the product error, fix it in your Amazon account and click Save button.
https://docs.koongo.com/display/koongo/Amazon+integration+-+manual+submission
2021-07-23T22:45:03
CC-MAIN-2021-31
1627046150067.51
[]
docs.koongo.com
GDPR compliance The private policy is in compliance with GDPR. Preamble Koongo takes the online privacy of its visitors and customers very seriously. This Privacy Policy (hereinafter referred to as the ‘Privacy Policy’) sets out the principles and procedures for the processing of personal data and customer rights, in accordance with Regulation (EU) 2016/679 of the European Parliament, (hereafter referred to as the "GDPR") and of the Council of 27 April 2016 on the protection of individuals with regard to the processing of personal data and on the free movement of such data and repealing Directive 95/46 / EC (hereafter "the Regulation"), and Act No. 480/2004 Coll., on Certain Information Society Services, as amended. The Koongo websites (.com, .de, .nl, .dk, .es, .gr) , online store store.koongo.com and hosted online application my.koongo.com (hereafter referred to as the “Service” or “Services”) are owned and operated by NoStress Commerce s.r.o., a Czech registered company at the Czech Chamber of Commerce with number 28977475 located at Vyšehradská 1349/2, Praha 2, Czech Republic (hereafter referred to as “Koongo”, “we” or “us”), in accordance with this Privacy Policy. This Privacy Policy govern your use of the Service. By accessing or using the Service, you acknowledge that you have read, understood and familiar this privacy policy and its terms. IF YOU DO NOT AGREE TO THE TERMS OF USE AND/OR THE PRIVACY POLICY OR OTHER POLICIES, GUIDELINES OR INSTRUCTIONS POSTED ON THE SERVICE, DO NOT USE THE SERVICE. If you have additional questions not answered here, contact the Koongo support to request more information. Updated: Nov 12, 2019 1. Basic information 1.1 Contact person The data administrator for the personal data protection according to Article 4 (7) of GDPR is Koongo. For all matters concerning the protection of personal data please contact: Jiri Zahradka, Koongo CEO NoStress Commerce s.r.o. Vyšehradská 1349/2, Nové Město, 128 00 Praha 2, Czech Republic 1.2 Principle of processing personal data The Koongo processes personal data in the compliance with the following principles arising from the Regulation: - legality, correctness, and transparency of processing; - purpose limitation - collection only for certain, expressly expressed, and legitimate purposes; - minimization of data - adequacy, relevance, and limitation of processing to the extent necessary in relation to the purpose; - accuracy and timeliness - the Koongo shall take all reasonable steps to ensure that personal data which are inaccurate, taking into account the purposes for which they are processed, are erased or corrected without delay; - limited storage - personal data shall be stored in a form which permits the identification of data subjects for no longer than is necessary for the purposes for which they are processed provided that the appropriate technical and organizational measures required by the existing legislation are in place to guarantee the rights and freedoms of the data subject ; - integrity and confidentiality - personal data are processed in a manner that ensures their proper security, including their protection through appropriate technical or organizational measures against unauthorized or unlawful processing and against accidental loss, destruction or damage 2. 
Sources and categories of processed personal data 2.1 Personally Identifiable Information While using our Service, we may gather or ask you to provide us with certain personally identifiable information that can be used to contact, identify and invoice you. Except as stated in this Privacy Policy, Koongo only collects personally identifiable information that you voluntarily provide to us. Personally identifiable information will include your name, email address, company contact person details provided by the customer, postal address, business id, vat id, Application user name, password, phone number, authentication certificates, social networking identifiers and communication platforms (e.g. Skype) and other information (hereinafter referred to as the "Personal data"). Any comments or questions you submit through Koongo support desk are recorded as well. By using Service, you are giving Koongo permission to use and store such information consistent with this Privacy Policy. 2.2 Aggregate User and Tracking Information While using our Service, some information about your visit may be automatically collected using cookies, log analysis software and other aggregate tracking technologies. This information provides insights on how customers and visitors use the Koongo website, Service and other Koongo products. This data is anonymous in nature and does not contain any personally identifiable information. 2.2 visit the Cookies section. 2.2.2 Log File Analysis Koongo also collects information from your computer each time you visit the Service using log file analysis software. Information measured, analyzed and collected in this manner may include but is not limited to: your IP address, operating system, platform information, date and time of your click stream behavior, cookie information, session information (includes: page interaction information, browser interaction information, page response times, length of visits to individual pages, and download errors) and web browser software. Information collected on how you use the Service does not include personally identifiable information. Though some personally identifiable information may be collected in this process, the information collected is not used to track an individual’s use through the Service. 2.2.3 Third party services In addition, we use third party services such as Google Analytics that collect, monitor and analyze this type of information in order to increase our Service's functionality. To improve the way we do things, we need some data about you, our customers. For this reason, we use third-party service called Smartlook that allow us to record and analyse your behavior. Thanks to this, we can see trends and patterns which help us improve many areas of our business activities. This data is anonymous in nature and does not contain any personally identifiable information. If you don’t want to be tracked by Smartlook on our website, you can opt out her. These third party service providers have their own privacy policies addressing how they use such information. 2.2.4 Data processing Finally, we collect Personal Data where we act as a data processor. We process Personal Data on behalf of our customers in the context of supporting the Service. Where a customer subscribes to our Service for their online store, system or platform, they are able to retrieve Personal Data from consumers and/or businesses from their account(s) at online Marketplaces or Platforms as, but not limited to, Amazon, Beslist.nl, and Bol.com. 
When an order has been placed at such a Marketplace or Platform, we collect and store the order data (including Personal Data) to collectively send it to the system or platform of our customers. 3. Legal reason and purpose of personal data processing 3.1 Legal reason for your personal data The legal reason for your personal data processing is: - the fulfillment of the contract according to Article 6 (1) b) GDPR, fulfillment of the statutory obligation of the administrator pursuant to Article 6 (1) c) the GDPR and the legitimate interest of the Provider pursuant to Article 6 (1) f) GDPR. Koongo will send you Service setup instruction, support period expiration notification or any major update/upgrade information related to Service or other Koongo products. - negotiation of the contract or its preparation pursuant to Article 6 (1) b) GDPR (e.g. filling in the inquiry or contact form, creating a user account in the eshop or failing to complete an purchase) - the legitimate interest of the Koongo in providing direct marketing (in particular for sending business messages and newsletters) under Article 6 (1) f) GDPR. The personal information is required when placing the order or activating the Service (connecting the customer store to Service). 3.2 Purpose of personal data processing Koongo will not sell, exchange or otherwise distribute your personally identifiable information without your consent, except to the extent required by law, in accordance with your instructions, or as identified in this privacy policy. The purpose of personal data processing is: - fulfilling your order or activate the Service and performing the rights and obligations arising from the contractual relationship between you and the Koongo; personal data (name, address, contact) is a necessary requirement for the conclusion and performance of the contract, without the personal data it is not possible to conclude the contract or to fulfill it by the Koongo, - sending business messages and performing other marketing activities. The personal Service, to better understand our users, to protect against wrongdoing, to enforce our Terms and Conditions, and to generally manage our business - To process transactions and to provide Services to our customers and end-users - For recruitment purposes, where you apply for a job with us - To administer a contest, promotion, survey, or other site features - To improve advertising campaigns, primarily in an effort to prevent targeting of impressions via third-party channels when they are not relevant - To send e-mails, both either periodic or one off. The e-mail addresses you provide for invoicing or support questions will be used to send you information regarding payments, invoicing or answers to your questions as well as proactively for important updates of our Service. When it is in accordance with your marketing preferences, we will send occasional marketing e-mails about our Service, which you can unsubscribe from at any time using the link provided in the message. 3.3 Automated decision-making process The Koongo does not use automated individual decision-making process within the meaning of Article 22 GDPR. 3.4 Other Websites Koongo does not share personally identifiable information with business partners who display or offer Koongo products or services on their Websites. 3.5 External Links The Website may include links to other websites whose privacy policies Koongo does not control. 
Once you leave the Koongo website, use of any information you provide is governed by the privacy policy of the operator of the website you are visiting. That policy may differ from Koongo's policy and is not covered within this privacy policy. 3.6 What you get from Koongo Koongo believes that honest and clear communications with customers and users is a key part of a great experience. All of Koongo's email communications contain information on how to unsubscribe, and Koongo honors all requests as quickly as possible. Below are examples (without limitation) of communications you may receive from Koongo: 3.6.1 Registration Email Messages When you join for an account, you will receive an email confirming your registration (as applicable). The email message will contain your name, the email address you provided and your Koongo sign-in information. 3.6.2 Marketing or Promotional Emails In case the Koongo processes the customer's personal data for other purposes that can not be subordinated to the legitimate interest or performance of the contract, he can only do so on the basis of a valid consent to the processing of personal data by the customer, which is an expression of free will of the customer a specific title for such personal data handling. The customer needs to explicitly agree with the personal data processing. The email address will be processed for inclusion in the business messaging database. You can accept consent at any time, for example, by sending a letter, email or a click to a link in your business message. Withdrawal of consent will result in the suspension of commercial communications. 4. Personal data handling 4.1 Personal data handling period The Koongo keeps Personal Data: - for as long as necessary for pre-contractual negotiations, exercise of rights and fulfillment of contractual obligations, and fulfillment of statutory obligations of Koongo, particularly as regards the management of prescribed accounting, tax and similar records. - for the period of maximum 1 years for the customers who registered for Koongo account but not activate the Service or placed any order. We expect that during this period you might be still interested in products purchase or Service activation. After the time period above, the Koongo account will be deleted. 4.2 Personal data handling termination The Koongo terminates the handling of customer data after the termination of the contractual relationship, after the expiry of the period specified in the consent to the processing of personal data or after forfeiture of the legitimate reasons for the archiving of personal data. 5. Personal data processors Personal data are processed automatically and manually and can be available to the Koongo's employees. Personal data can be available also to the processors with whom the Koongo has entered into a contract for the processing of personal data if this is necessary for the fulfillment of their duties. Personal data can be available to another person under the conditions which are compliant with the Regulation. The processing of personal data is carried out by the Koongo, the personal data can be processed also by these processors: accounting. For detailed list of Personal Data processors, please contact us at [email protected]. 6. 
Personal data - customer rights 6.1 Customer rights Under the terms of the GDPR you have (a) the right to access your personal data under Article 15 of the GDPR, (b) the right to correct personal data under Article 16 of the GDPR, or the restriction of personal data processing under Article 18 GDPR, (c) the right to delete personal data under Article 17 of the GDPR. The Koongo is obliged to delete the personal data without undue delay in compliance with the reasons stated in the Regulation: - personal data are not necessary for the purposes for which they were collected or otherwise processed and there is no other purpose of processing; - customer withdraws the consent to the personal data processing and there is no further legal title for processing; - the customer objects to processing and there are no overriding grounds for further processing; - personal data have been processed unlawfully; - personal data must be erased in order to comply with a legal obligation laid down by the EU or national legislation applicable to the Koongo; - personal data were gathered in connection with the provision of information society services. Details and exceptions to this right are governed by the Regulation; (d) the right to object to personal data processing under Article 21 GDPR and (e) the right to personal data portability under Article 20 GDPR (f) the right to withdraw consent to product data processing, referred in the section 3.2, in writing or electronically to the address or email address of the Koongo. (g) the right to file a complaint with the Personal Data Protection Office if you believe that your privacy has been violated. The Koongo limits the processing of the personal data in any of the following cases: - the data subject denies the accuracy of the personal data for the time necessary to verify the accuracy of the personal data; - the processing is unlawful and the data subject refuses the deletion of personal data and instead requests restrictions on its use; - the Koongo no longer needs personal data for processing but the data subject is required to identify, exercise or defend legal claims; - the data subject has objected to processing until it has been ascertained whether the legitimate reasons for the controller outweigh the legitimate reasons for the data subject; 6.2 Accessing and Updating your Information The minimal personally identifiable information for Service activation or purchasing a Koongo product is customer email address, password, billing address and payment method information. Customers may view and change their personally identifiable information by logging into their password-protected account. Once logged in, click the ‘Edit Account’ Koongo, such information may be retained for a period of time in Koongo's backup systems as a precaution against system failures. Some information may be retained for longer periods as required by law, contract or auditing requirements. 7. Personal data protection 7.1 Protection Koongo understands that the safety of your personal information is extremely important to you. That is why Koongo. The processing of personal data may be processed by the processors solely on the basis of a contract for the processing of personal data, with the guarantees of organizational and technical security of these data and with the definition of the purpose of the processing, and the processors may not use the data for other purposes. 
7.2 Security breach In the event of a breach of security of data handling or data leakage, the Koongo shall promptly inform the Customer and the Office for Personal Data Protection within 72 hours. 8. Miscellaneous 1. By submitting an order to Koongo, registering for account (at store.koongo.com, my.koongo.com or support.koongo.com) or activating the Service, you acknowledge that you are aware of the privacy policy and that you accept it in its entirety. 2. You acknowledge that you are familiar with these terms by checking your consent via the online form. By confirming your consent, you acknowledge that you are aware of the privacy policy and that you accept it in its entirety. 3. Please visit also Koongo Terms and Conditions section establishing the use, disclaimers, and limitations of liability governing the use of the Website. 4. We may update our Privacy Policy from time to time. We will notify you of any changes by posting the new Privacy Policy in the Service. You are advised to review this Privacy Policy periodically for any changes. Changes to this Privacy Policy are effective when they are posted on this page. If you do not agree to the revised data protection regulations, you can deactivate your account at any time. 9. Questions If you have any questions about this privacy policy, please contact us. Koongo respects your rights and privacy, and will be happy to answer any questions or concerns you might have. Please feel free to contact us through our website or write to us at: NoStress Commerce s.r.o Vyšehradská 1349/2, Nové Město 128 00 Praha 2 Czech Republic
https://docs.koongo.com/display/koongo/Privacy+Policy
2021-07-23T22:41:45
CC-MAIN-2021-31
1627046150067.51
[]
docs.koongo.com
A poorly written application will always have poor performance. A very common way for developers to increase the performance of their application is:
just throw more hardware at it
The problem with the above approach is twofold. For starters, in most cases the owner is the one that will incur the additional costs. The second issue is that there comes a time when one can no longer upgrade the hardware and will have to resort to load balancers, docker swarms etc., which will skyrocket costs. The problem will remain: the poorly written application.
In order to speed up your application, you first need to ensure that your application is written in the best way possible to fulfill its requirements. Nothing beats a good design. After that, there are many aspects to consider:
- server hardware
- clients connecting (location, browsers)
- network latency
- database hardware
and many more. In this article we will try to highlight some scenarios that could provide more insight as to where your application is really slow.
NOTE These are recommendations and good practices. You are by no means obligated to follow the advice in this document, and by no means is this list exhaustive. Your performance enhancing strategies rely primarily on the needs of your application.
Profiling is a form of dynamic application analysis that offers metrics regarding your application. Profiling offers the real picture of what is really going on at any given time in your application, and thus guides you to areas where your application needs attention. Profiling should be continuous in a production application. It does have an overhead, so that has to be taken into account. The most verbose profiling happens on every request, but it will all depend on your traffic. We certainly do not want to increase the load on the server just because we are profiling the application. A common way of profiling is one request per 100 or one per 1,000. After a while you will have enough data to draw conclusions as to where slowdowns occur, why peaks occurred etc.
XDebug offers a very handy profiler right out of the box. All you have to do is install the extension and enable profiling in your php.ini:
xdebug.profiler_enable = On
Using a tool such as Webgrind will allow you to connect to XDebug and get very valuable information as to what is going on with your code. Webgrind offers statistics on which methods are slower than others and other statistics.
Xhprof is another extension to profile PHP applications. To enable it, all you need is to:
As mentioned above, profiling can increase the load on your server. In the case of Xhprof, you can introduce a conditional that would start profiling only after X requests.
Almost all RDBMSs offer tools to identify slow SQL statements. Identifying and fixing slow queries is very important in terms of performance on the server side. MariaDB / MySQL / Aurora DB offer configuration settings that enable a slow-query log. The database then keeps its own metrics, and whenever a query takes too long to complete it will be logged in the slow-query log. The log can then be analyzed by the development team and adjustments can be made.
To enable this feature, you will need to add the following to my.cnf (don't forget to restart your database server):
log-slow-queries = /var/log/slow-queries.log
long_query_time = 1.5
Another area to focus on is the client. Improving the loading of assets such as images, stylesheets and JavaScript files can significantly improve performance and enhance user experience. There are a number of tools that can help with identifying bottlenecks on the client:
Most modern browsers have tools to profile a page's loading time. These are commonly called web inspectors or developer tools. For instance, when using Brave or any Chromium based browser you can inspect the page, and the developer tools will show a waterfall of what has loaded for the current page (files), how much time it took and the total loading time:
A relatively easy fix for increasing client performance is to set the correct headers for assets so that they expire in the future vs. being loaded from the server on every request. Additionally, CDN providers can help with distributing assets from their distribution centers that are closest to the client originating the request.
YSlow analyzes web pages and suggests ways to improve their performance based on a set of rules for high performance web pages.
PHP is becoming faster with every new version. Using the latest version improves the performance of your applications and also of Phalcon.
OPcache, like many other bytecode caches, helps applications reduce the overhead of reading, tokenizing and parsing PHP files on each request. The interpreted results are kept in RAM between requests as long as PHP runs as fcgi (fpm) or mod_php. OPcache is bundled with PHP starting with 5.5.0. To check if it is activated, look for the following entry in php.ini:
opcache.enable = On
opcache.memory_consumption = 128 ;default
Furthermore, the amount of memory available for opcode caching needs to be enough to hold all files of your applications. The default of 128MB is usually enough for even larger codebases.
APCu can be used to cache the results of computationally expensive operations or otherwise slow data sources like webservices with high latency. What makes a result cacheable is another topic; as a rule of thumb, the operation needs to be executed often and yield identical results. Make sure to measure through profiling that the optimizations actually improved execution time.
apc.enabled = On
apc.shm_size = 32M ;default
As with the aforementioned OPcache, make sure the amount of RAM available suits your application. Alternatives to APCu would be Redis or Memcached - although they need extra processes running on your server or another machine.
Based on the requirements of your application, there may be times that you will need to perform long running tasks. Examples of such tasks could be processing a video, optimizing images, sending emails, generating PDF documents etc. These tasks should be processed using background jobs. The usual process is:
- The application initiates a task by sending a message to a queue service
- The user sees a message that the task has been scheduled
- In the background (or different server), worker scripts peek at the queue
- When a message arrives, the worker script detects the type of message and calls the relevant task script
- Once the task finishes, the user is notified that their data is ready.
The above is a simplistic view of how a queue service for background processing works, but can offer ideas on how background tasks can be executed.
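A bare-bones worker loop following that process might look like the sketch below. The queue client and task functions here are placeholders rather than a specific library's API; swap in the client of whichever queue service you choose:
<?php
// worker.php - simplified background worker loop (illustrative only).
// createQueueClient(), encodeVideo(), sendEmail() and notifyUser() are
// placeholder names, not functions from a particular package.
$queue = createQueueClient('jobs');

while (true) {
    $message = $queue->pop(5);          // wait up to 5 seconds for a job
    if ($message === null) {
        continue;                       // nothing to do yet
    }

    $job = json_decode($message->body, true);

    switch ($job['type']) {
        case 'video.encode':
            encodeVideo($job['path']);
            break;
        case 'email.send':
            sendEmail($job['recipient'], $job['template']);
            break;
    }

    $queue->acknowledge($message);      // mark the job as done
    notifyUser($job['user_id']);        // tell the user their data is ready
}
Because the worker runs as a separate long-lived process (often supervised by systemd or supervisord), the web request that enqueued the job returns immediately, which is the whole point of moving the work into the background.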
There are also a variety of queue services available that you can leverage using the relevant PHP libraries: mod_pagespeed speeds up your site and reduces page load time. This open-source Apache HTTP server module (also available for nginx) automatically applies web performance best practices to pages, and associated assets (CSS, JavaScript, images) without requiring you to modify your existing content or workflow.
https://docs.phalcon.io/4.0/tr-tr/performance
2021-07-23T22:00:04
CC-MAIN-2021-31
1627046150067.51
[]
docs.phalcon.io
GEKKO Optimization Suite
Overview
GEKKO is a Python package for machine learning and optimization of mixed-integer and differential algebraic equations. It is coupled with large-scale solvers for linear, quadratic, nonlinear, and mixed integer programming (LP, QP, NLP, MILP, MINLP). Modes of operation include parameter regression, data reconciliation, real-time optimization, dynamic simulation, and nonlinear predictive control. GEKKO is an object-oriented Python library to facilitate local execution of APMonitor. More of the backend details are available at What does GEKKO do? and in the GEKKO Journal Article. Example applications are available to get started with GEKKO.
Installation
A pip package is available:
pip install gekko
Use the --user option to install if there is a permission error because Python is installed for all users and the account lacks administrative privilege. The most recent version is 0.2. You can upgrade from the command line with the upgrade flag:
pip install --upgrade gekko
Another method is to install in a Jupyter notebook with !pip install gekko or with Python code, although this is not the preferred method:
try:
    from pip import main as pipmain
except:
    from pip._internal import main as pipmain
pipmain(['install','gekko'])
Project Support
There are GEKKO tutorials and documentation in:
- GitHub Repository (examples folder)
- Dynamic Optimization Course
- APMonitor Documentation
- GEKKO Documentation
- 18 Example Applications with Videos
For project-specific help, search in the GEKKO topic tags on StackOverflow. If there isn't a similar solution, please consider posting a question with a Minimal, Complete, and Verifiable example. If you give the question a GEKKO tag with [gekko], the subscribed community is alerted to your question.
Citing GEKKO
If you use GEKKO in your work, please cite the following paper:
Beal, L.D.R., Hill, D., Martin, R.A., and Hedengren, J. D., GEKKO Optimization Suite, Processes, Volume 6, Number 8, 2018, doi: 10.3390/pr6080106.
The BibTeX entry is:
@article{beal2018gekko, title={GEKKO Optimization Suite}, author={Beal, Logan and Hill, Daniel and Martin, R and Hedengren, John}, journal={Processes}, volume={6}, number={8}, pages={106}, year={2018}, doi={10.3390/pr6080106}, publisher={Multidisciplinary Digital Publishing Institute}}
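To verify the installation described above, a minimal model can be built and solved in a few lines. The snippet below uses the basic GEKKO API (m.Var, m.Equation, m.solve) to solve two simultaneous nonlinear equations; it is a quick smoke test, not a representative application:
from gekko import GEKKO

m = GEKKO(remote=True)        # set remote=False to attempt a local solve
x = m.Var(value=1)
y = m.Var(value=1)
m.Equation(x + 2*y == 0)      # line
m.Equation(x**2 + y**2 == 1)  # unit circle
m.solve(disp=False)
print(x.value[0], y.value[0])
If this prints a point on the unit circle (roughly x = 0.894, y = -0.447 or the mirrored solution), the package and its solver connection are working and you can move on to the example applications linked above.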
https://gekko.readthedocs.io/en/v1.0.0/
2021-07-23T22:11:05
CC-MAIN-2021-31
1627046150067.51
[]
gekko.readthedocs.io
List Box. On Display Member Changed(EventArgs) Method Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Raises the DisplayMemberChanged event. protected: override void OnDisplayMemberChanged(EventArgs ^ e); protected override void OnDisplayMemberChanged (EventArgs e); override this.OnDisplayMemberChanged : EventArgs -> unit Protected Overrides Sub OnDisplayMemberChanged (e As EventArgs)
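As a small illustration of the usual OnXxx pattern, a derived control could override this method to react whenever the DisplayMember property changes; the class below is hypothetical and not part of the framework, and it calls the base implementation so the DisplayMemberChanged event is still raised:
public class LoggingListBox : System.Windows.Forms.ListBox
{
    protected override void OnDisplayMemberChanged(System.EventArgs e)
    {
        // Keep the standard behavior: raise DisplayMemberChanged for subscribers.
        base.OnDisplayMemberChanged(e);

        // Hypothetical extra reaction to the change.
        System.Diagnostics.Debug.WriteLine("DisplayMember is now: " + DisplayMember);
    }
}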
https://docs.microsoft.com/en-us/dotnet/api/system.windows.forms.listbox.ondisplaymemberchanged?view=net-5.0
2021-07-23T23:42:59
CC-MAIN-2021-31
1627046150067.51
[]
docs.microsoft.com
in the Kentico EMS edition: You can manage the analytics data directly in the interface – save the selected data.
https://docs.xperience.io/k12sp/on-line-marketing-features/managing-your-on-line-marketing-features/web-analytics
2021-07-23T21:13:21
CC-MAIN-2021-31
1627046150067.51
[]
docs.xperience.io
User’s Guide¶ Foreword: A note to METcalcpy users This User’s guide is provided as an aid to users of METcalcpy. METcalcpy is a Python version of the statistics calculation functionality of METviewer, METexpress, plotting packages in METplotpy and is a stand-alone package for any other application. It is also a component of the unified METplus verification framework. More details about METplus can be found on the METplus website. It is important to note here that METcalcpy is an evolving software package. This documentation describes the develop release dated 2021-05-10. Intermediate releases may include bug fixes. METcalcpy is also able to accept new modules contributed by the community. If you have code you would like to contribute, we will gladly consider your contribution. Please create a post in the METplus GitHub Discussions Forum. We will then determine if we will be able to include the contribution in a future version. Model Evaluation Tools Calc Py (METcalcpy) TERMS OF USE - IMPORTANT! Copyright 2021,: Win-Gildenmeister, M., T. Burek, H. Fisher, C. Kalb, D. Adriaansen, D. Fillmore, and T. Jensen, 2021: The METcalcpy. Finally, the National Center for Atmospheric Research (NCAR) is sponsored by NSF.
https://metcalcpy.readthedocs.io/en/main_v1.0/Users_Guide/
2021-07-23T22:47:48
CC-MAIN-2021-31
1627046150067.51
[]
metcalcpy.readthedocs.io
Prepare for XDCR
Before setting up a replication, make sure you have the appropriate administrative roles. Then, make sure your cluster is appropriately configured and provisioned.
Establish Roles for XDCR
Couchbase Server enforces Role-Based Access Control. This means that to access specific system resources, corresponding privileges are required. Privileges have a fixed association with roles, which are assigned to users. Full information on Role-Based Access Control is provided in Authorization. If you possess the role of Full, Cluster, or XDCR Administrator, you can create, edit, and delete cluster references and replications.
Prepare Your Cluster for XDCR
Before beginning XDCR management:
Configure all nodes within the source cluster so that they can individually communicate over the network to all nodes within the target cluster. XDCR will require additional RAM and network resources as well. If a cluster is not sized to handle both the existing workload and the new XDCR streams, the performance of both XDCR and the cluster overall may be negatively impacted.
Couchbase Server uses TCP/IP port 8091 to exchange cluster configuration information. If you are communicating with a destination cluster over a dedicated connection, or over the Internet, ensure that all nodes in the destination and source clusters can communicate with each other over ports 8091 and 8092.
Next Steps
Once your source and target clusters have been prepared, to start XDCR management, Create a Reference.
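One quick way to sanity-check that connectivity from each source node is to probe the target nodes on those ports. The host name and credentials below are placeholders, and the REST endpoint shown is just one convenient target; a plain TCP reachability check is used for port 8092:
curl -s -u Administrator:password http://target-node.example.com:8091/pools
nc -zv target-node.example.com 8092
If the curl call returns cluster information and the netcat check reports the port open from every source node to every target node (and vice versa), the network prerequisites described above are in place.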
https://docs.couchbase.com/server/current/manage/manage-xdcr/prepare-for-xdcr.html
2021-07-23T23:15:07
CC-MAIN-2021-31
1627046150067.51
[]
docs.couchbase.com
Crate collide[−][src] Expand description This crate defines a generic collider trait, which is meant to be used by collision detection libraries. You can define new colliders and implement the Collider trait, so this collider can be used with different collision detection libraries, which use this trait. If you create a collision detection library, it can use different collider traits. A collision detection library might be generic over vector types, scalar types and dimensions, or specialized for specific vector types, scalar types and dimensions.
https://docs.rs/collide/0.1.0/x86_64-pc-windows-msvc/collide/index.html
2021-07-23T22:49:01
CC-MAIN-2021-31
1627046150067.51
[]
docs.rs
MedicalAlternative A list of possible transcriptions for the audio. Contents - Entities Contains the medical entities identified as personal health information in the transcription output. Type: Array of MedicalEntity objects Required: No - Items A list of objects that contains words and punctuation marks that represents one or more interpretations of the input audio. Type: Array of MedicalItem objects Required: No - Transcript The text that was transcribed from the audio. Type: String Required: No See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
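Putting the three members together, a MedicalAlternative object in a streaming response has roughly the following shape; the sample transcript is invented, and the nested item and entity objects are elided here because their fields are documented separately under MedicalItem and MedicalEntity:
{
  "Transcript": "patient reports mild chest pain",
  "Items": [ ... ],
  "Entities": [ ... ]
}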
https://docs.aws.amazon.com/transcribe/latest/dg/API_streaming_MedicalAlternative.html
2021-07-23T23:04:21
CC-MAIN-2021-31
1627046150067.51
[]
docs.aws.amazon.com
BarSplitButtonItemLink Class Represents a link to a BarSplitButtonItem object. Namespace: DevExpress.Xpf.Bars Assembly: DevExpress.Xpf.Core.v21.1.dll Declaration public class BarSplitButtonItemLink : BarButtonItemLink Public Class BarSplitButtonItemLink Inherits BarButtonItemLink Remarks See Items and Links to learn more. Example This example shows how to create a BarSplitButtonItem, which represents a button with the drop-down functionality. Clicking the button’s Down Arrow displays a popup window. To add a custom content into the popup, use the PopupControlContainer as the PopupControl. To add other bar items, use the PopupMenu. The following image shows the result: private void btnFontColor_ItemClick(object sender, DevExpress.Xpf.Bars.ItemClickEventArgs e) { PopupControlContainer pcc = (e.Item as DevExpress.Xpf.Bars.BarSplitButtonItem).PopupControl as PopupControlContainer; Color color = ((pcc.Content as UserControl).Content as ColorChooser).Color; MessageBox.Show("Color is applied: " + color.ToString()); } Implements Inheritance See Also Feedback
https://docs.devexpress.com/WPF/DevExpress.Xpf.Bars.BarSplitButtonItemLink
2021-07-23T23:45:30
CC-MAIN-2021-31
1627046150067.51
[array(['/WPF/images/e156710507.png', 'E1567'], dtype=object)]
docs.devexpress.com
ifttt
Connect to the IFTTT Maker Channel.
An IFTTT Recipe has two components: a Trigger and an Action. In this case, the Trigger will fire every time the Maker Channel receives a web request (made by this fastlane action) to notify it of an event. The Action can be anything that IFTTT supports: email, SMS, etc.
1 Example
ifttt(
  api_key: "...",
  event_name: "...",
  value1: "foo",
  value2: "bar",
  value3: "baz"
)
Parameters
* = default value is dependent on the user's system
Documentation
To show the documentation in your terminal, run
fastlane action ifttt
CLI
It is recommended to add the above action into your Fastfile; however, sometimes you might want to run one-offs. To do so, you can run the following command from your terminal
fastlane run ifttt
To pass parameters, make use of the : symbol, for example
fastlane run ifttt parameter1:"value1" parameter2:"value2"
https://docs.fastlane.tools/actions/ifttt/
2021-07-23T22:59:21
CC-MAIN-2021-31
1627046150067.51
[]
docs.fastlane.tools
Expense Line

Expense lines enable you to track costs and represent when a point-in-time expense was incurred. Expense lines can be created manually or generated by the scheduled processing of recurring costs. The Expense Line plugin is active for all instances. To use the Expense Allocations and Expense Allocation Rules modules, activate the Cost Management plugin.

The Now Platform generates expense lines automatically when you create an asset, and updates expense lines automatically when you revise the Cost or Quantity field on an asset record. Users with the financial_mgmt_admin and financial_mgmt_user roles can work with expense lines. Expense lines integrate closely with asset management, CMDB, cost management, and contract management, but can be used with any application. The Source ID field on an expense line record can be linked to any record in any table. This identifier allows expenses to be associated with a wide variety of items, such as a contract, an individual asset, a single configuration item, a software installation, a lease, a service contract, a user, or a group.

Figure 1. Example expense line for an asset monthly lease

Components installed with Expense Line: Several types of components are installed with the Expense Line plugin.
View an expense: Expense lines can be used in various ways, for example, to view expenses that are associated with a given contract.
Expense lines and expense allocations: The Expense Lines application tracks costs and records expenses incurred. Expense allocations let you associate expenses with items such as users, groups, or departments.
Domain separation and Expense Line: Domain separation is unsupported in Expense Line processing. Domain separation enables you to separate data, processes, and administrative tasks into logical groupings called domains. You can control several aspects of this separation, including which users can see and access data.

Related concepts: Asset Management, Configuration Management Database, Contract Management
Related reference: Cost Management
https://docs.servicenow.com/bundle/quebec-it-service-management/page/product/asset-and-configuration/concept/c_ExpenseLine.html
2021-07-23T22:23:17
CC-MAIN-2021-31
1627046150067.51
[]
docs.servicenow.com
The API Key is part of the API Key and Integrations add-on; see the Marketplace to learn how to get it.
https://docs.woodpecker.co/en/articles/5223556-error-codes
2021-07-23T22:26:07
CC-MAIN-2021-31
1627046150067.51
[]
docs.woodpecker.co
Action: Editor Display Only This action is always available, in addition to any other action type you select. It enables you to change the particle shape and the particle colour in the same way as in the emitter's Display tab. Normally, this only causes changes to the display of particles in the editor, and does not change anything at render time. However, if you render particles with the X-Particles Material and choose 'Use Particle Color' in that material, then of course the particle colour is used, and if you use the Display Render generator to render the particle shape, then the shape will appear in the render as well as in the viewport. The usefulness of this action becomes apparent when trying to work out why a setup isn't behaving as it should. By changing the colour, shape, etc. of the particles, you can see if and/or when an action should be taking place. If you don't see the change in the editor that you expect, either the Question object which triggers this action is not being passed, or the action is disabled, etc. (For this purpose, see also the 'Output to Console' action.) You can use the Action settings to change the particle shape and colour in the same way as in the Display tab of the emitter. Change Editor Display Check this switch to change the particle type and colour. Editor Display The particle display type in the editor - Dots, Lines, etc. Particle Color The new particle color. Groups Affected Drag the particle Group object(s) you want to be affected by the modifier into the 'Groups Affected' list. If the list contains at least one group, groups not in the list will not be affected. But if no groups are in the list, all groups are affected.
http://docs.x-particles.net/html/action_editoronly.php
2021-07-23T22:17:27
CC-MAIN-2021-31
1627046150067.51
[array(['../images/actions_editoronly1.jpg', None], dtype=object)]
docs.x-particles.net
Kill Partial Name kill_partial.cfg Location ~/conf/COLLECTION/ Description Defines URL patterns to be killed from a collection during the indexing phase . To access the Kill Partial_partial.cfg -kill Format The file consists of a list of URL patterns of documents to kill, with one URL per line. The pattern is a simple string that is matched as a left-anchored substring against the indexed URL. The pattern does not support wildcards or regular expressions. Example # matches all docs with URLs in the site # exactly the same as the previous line (http protocol is assumed). https # matches all the URLs starting with the https protocol # matches calendar.cgi with any trailing parameters
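For illustration only, a kill_partial.cfg matching the comments above might look like the following; the hostnames are placeholders and are not taken from this page:

```
# matches all docs with URLs in the site
http://example.com/
# exactly the same as the previous line (http protocol is assumed)
example.com/
# matches all the URLs starting with the https protocol
https://
# matches calendar.cgi with any trailing parameters
example.com/calendar.cgi
```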
https://docs.squiz.net/funnelback/docs/latest/reference/configuration-files/kill-partial-cfg.html
2021-07-23T21:59:01
CC-MAIN-2021-31
1627046150067.51
[]
docs.squiz.net
You are viewing version 2.23 of the documentation, which is no longer maintained. For up-to-date documentation, see the latest version. Installing Armory in AKS Overview This guide describes how to install Armory in Azure Kubernetes Service (AKS). To do this, the guide walks you through creating and using the following Azure resources: - An AKS cluster. You can also use an existing cluster. - An AZS (Azure Storage) bucket. You can also use an existing bucket. - An NGINX Ingress controller in your AKS cluster. This resource is only needed if your cluster doesn’t already have an ingress installed. Note that the examples on this page for NGINX only work on Kubernetes version 1.14 or later. This document does not cover the following: - TLS Encryption - Authentication and authorization - Add K8s accounts to deploy to - Add cloud accounts to deploy to See Next Steps for resources related to these topics. Note: This document focuses on Armory’s extended Spinnaker for enterprises and uses the Armory-extended version of Halyard (referred to as ‘Halyard’ in this doc). You can install open source Spinnaker by using an open source Halyard container and a corresponding open source Spinnaker version. Requirements To follow the steps described in this guide, make sure the following prerequisites are met: - You have login credentials to Azure that allow you to create resources - You have an Azure subscription defined where you will install Spinnaker - You have az(the Azure CLI tool) and a recent version of kubectl(the Kubernetes CLI tool) on a machine (referred to as the workstation machine). - You have Docker available and can run containers on a machine (referred to as the Halyard machine). An easy way to install Docker on your machine is with Docker Desktop. - You can transfer files created on the workstation machineto the Docker container that runs Halyard on the Halyard machine. - The workstationand Halyardmachines can be the same machine Workstation machine details On the workstation machine, you need both az and kubectl installed to create and manage Azure and Kubernetes resources. With az, you create and manage the following resources: - AKS clusters - AZS buckets With kubectl, you need to - Have a persistent working directory in which to work in. This guide uses ~/aks-spinnaker - Create AKS resources, such as service accounts that will be permanently associated with your Spinnaker cluster Halyard machine details Armory-extended Halyard (the tool used to install and manage Armory) runs in a Docker container on the Halyard machine. To make this process more seamless, this guide describes how to configure the following volume mounts, which need persisted or preserved to manage your Spinnaker cluster: .haldirectory (mounted to /home/spinnaker/.hal) - Stores all Halyard Spinnaker configurations in a .hal/configYAML file and assorted subdirectories .secretdirectory (mounted to /home/spinnaker/.secret) Stores all external secret keys and files used by Halyard. This includes the kubeconfigfiles and Azure IAM service account keys you create as part of this guide. 
resourcesdirectory (mounted to /home/spinnaker/resources Installation Summary In order to install Armory, this document covers the following things: - Generating a kubeconfigfile, which is a Kubernetes credential file that Halyard and Spinnaker uses to communicate with the Kubernetes cluster where Spinnaker gets installed - Creating an AZS bucket for Spinnaker to store persistent configurations in - Running the Halyard daemon in a Docker container - Persistent configuration directories from the workstation/host get mounted into the container - Running the halclient interactively in the same Docker container to perform the following actions: - Build out the Halyard config YAML file ( .hal/config) - Configure Armory/Halyard to use the kubeconfigto install Spinnaker - Configure Armory with IAM credentials and bucket information - Turn on other recommended settings (artifacts and http artifact provider) - Install Armory - Expose Armory Create the AKS cluster This guide assumes you have already installed the az CLI on your workstation and are familiar with its use. For more information about az, see The Azure Command-Line Interface. This creates a minimal AKS cluster. Follow the official AKS instructions to set up a different type of AKS cluster. To create an AKS cluster, perform the following steps on the workstation machine, which has az and kubectl installed: Create the local working directory: mkdir ~/aks-spinnaker cd ~/aks-spinnaker For this guide, use the ~/aks-spinnakerdirectory, but this can be any persistent directory on any Linux or OSX machine. Run the following commands to set up the azCLI: az login az account list az account set --subscription <your-subscription-id> Determine which Azure locations (like westus) are available for your account: az account list-locations --query "[].name" Create a resource group for your AKS cluster in a location available for your account. RESOURCE_GROUP="Spinnaker" az group create --name ${RESOURCE_GROUP} --location <location> Skip this step if you are using an existing AKS cluster. Create the AKS cluster: az aks create --resource-group ${RESOURCE_GROUP} --name spinnaker-cluster --node-count 2 --enable-addons monitoring --generate-ssh-keys Configure the Kubernetes context so that kubectluses your AKS cluster: To use the cluster created in the previous step, run the following command: export KUBECONFIG=kubeconfig-aks az aks get-credentials --resource-group ${RESOURCE_GROUP} --name spinnaker-cluster --file ${KUBECONFIG} To use an existing AKS cluster, run the following command: export KUBECONFIG=kubeconfig-aks az aks get-credentials --resource-group <your-resource-group> --name <your-cluster-name> --file ${KUBECONFIG} Verify that you have access to the cluster: kubectl --kubeconfig kubeconfig-aks get nodes Create a kubeconfig file for Halyard and Spinnaker In this guide, we install Armory in its own namespace ( spinnaker-system) in your AKS cluster; you can use a different namespace for this. This section of the guide describes how to Spinnaker. This same kubeconfig is passed to Spinnaker so that Spinnaker can see and manage its own resources. We use). Alternatively, run the following commands:: # If you're not already in the directory cd ~/aks-spinnaker # If you're on Linux instead of OSX, use this URL instead: # curl -L -o spinnaker-tools chmod +x spinnaker-tools Run spinnaker-tools. 
You can substitute other values for the parameters: SOURCE_KUBECONFIG=kubeconfig-aks} The commands create a file called kubeconfig-spinnaker-system-sa (or something similar if you’re using a different namespace for Spinnaker). Create an AZS source for Armory Armory uses an AZS bucket to store persistent configuration (such as pipeline definitions). This section walks you through creating a storage resource group and a storage account. Create a resource group for your storage account in a location available for your account: STORAGE_RESOURCE_GROUP="SpinnakerStorage" az group create --name ${STORAGE_RESOURCE_GROUP} --location <location> Create a storage account using a globally unique name: STORAGE_ACCOUNT_NAME=<unique-storage-account-name> az storage account create --resource-group ${STORAGE_RESOURCE_GROUP} --sku STANDARD_LRS --name ${STORAGE_ACCOUNT_NAME} STORAGE_ACCOUNT_KEY=$(az storage account keys list --resource-group ${STORAGE_RESOURCE_GROUP} --account-name ${STORAGE_ACCOUNT_NAME} --query "[0].value" | tr -d '"') Keep the following Azure requirements in mind when defining STORAGE_ACCOUNT_NAME: - The name must be between 3 and 24 characters - Only numbers and lowercase characters are valid Stage files on the Halyard machine In the aks-spinnaker working directory, create the following folders: .hal .secret resources WORKING_DIRECTORY=~/aks-spinnaker/ mkdir -p ${WORKING_DIRECTORY}/.hal mkdir -p ${WORKING_DIRECTORY}/.secret mkdir -p ${WORKING_DIRECTORY}/resources The aks-spinnaker working directory should contain the following file: - A kubeconfig file ( kubeconfig-spinnaker-system-sa) with the credentials for a service account in your aks cluster Copy the file into .secret so that:<image_version> Note: For image version, you must enter a valid version number, such as 1.8.1. Do not use latest. Enter the Halyard container From a separate terminal session on your halyard machine, create a second bash/shell session on the Docker container: docker exec -it armory-halyard bash # Once in the container, you can run these commands for a friendlier environment to: # - prompt with information # - alias for ls # - cd to the home directory export PS1="\h:\w \u\$ " alias ll='ls -alh' cd ~ Add the kubeconfig and cloud provider to Spinnaker (via Halyard) From the docker exec terminal session, add (re-export) the relevant environment variables: ###### Use the same values as the start of the document # Enter the namespace that you want to install Spinnaker in. This should have been created in the previous step. export NAMESPACE="spinnaker-system" # Enter the name you want Spinnaker: The --location limits your Armory to deploying to the namespace specified. If you want to be able to deploy to other namespaces, either add a second cloud provider target or remove the --location. An artifact can be a file in a git repository or a file in an S3 bucket. This feature must be explicitly turned on. Enable the “Artifacts” feature and the “http” artifact artifact provider: # Enable artifacts hal config features edit --artifacts true hal config artifact http enable To add specific types of artifacts, additional configuration must be completed. For now, it is sufficient to just turn on the artifacts feature with the http artifact provider. This allows Spinnaker to retrieve files via unauthenticated http. Configure Armory to use your AZS bucket Use the Halyard hal command line tool to configure Spinnaker to use your AZS storage account. storage-container-name is optional and has a default value of “spinnaker”. 
If you’re using a pre-existing container, update storage-container-name with the name of that container. ####### Inside the armory-halyard container hal config storage azs edit \ --storage-account-name <storage_account_name> \ --storage-account-key <storage_account_key> \ --storage-container-name <name> # test connection to azs storage hal config storage azs # Set the storage source to AZS hal config storage edit --type azs Choose the Armory version Before Halyard installs Armory, you should specify the version of Armory you want to use. Get a list of available versions of spinnaker with this command: hal version list Note that Armory uses a major version numbering scheme that is one version higher than Open Source Spinnaker. For example, Armory 2.x.x correlates to Open Source Spinnaker 1.x.x. After you decide on a version, run the following commands to specify the version: # Replace with version of choice: export VERSION=<version> hal config version edit --version $VERSION Replace <version> with a valid version, such as 2.18. Install Armory Now that your hal config is configured, install Spinnaker with the following hal command: hal deploy apply Once this is complete, congratulations! Spinnaker is installed. Now we have to access and expose it. Connect to Armory using kubectl port-forward Test connecting to Armory from your workstation machine:. Trying to connect from a remote machine and try again or check the status of all of the containers using the command for your cloud provider, such as kubectl get pods --namespace spinnaker. Once the pods are running and Armory is available, you can access Deck (Spinnaker’s UI) at. Note that trying to connect from a remote machine will not work because your browser attempts to access localhost on your local workstation rather than on the remote machine where the port is forwarded. Install the NGINX ingress controller In order to expose Spinnaker AKS because of these limitations of the built-in aks Spinnaker, AKS-specific service: kubectl apply -f Set up the ingress for spin-deck and spin-gate Identify the URLs you will use to expose Spinnaker: spinnaker-nginx. Configure TLS Certificates Configuration of TLS certificates for ingresses is often very organization-specific. In general, you Spinnaker Spinnaker as a Deployment Target) - Add Azure accounts to deploy applications to (see the Open Source Spinnaker documentation) - Add GCP accounts to deploy applications to (see the Open Source Spinnaker documentation) - Add AWS accounts to deploy applications to (see the Open Source Spinnaker documentation)
https://v2-23.docs.armory.io/docs/installation/guide/install-on-aks/
2021-07-23T23:21:22
CC-MAIN-2021-31
1627046150067.51
[]
v2-23.docs.armory.io
Cisco ACI

Starting from TKU February 2020, discovery of the physical topology (infrastructure) of the Cisco ACI solution is supported.

Compatibility
The Cisco ACI physical topology discovery feature is only supported starting from BMC Discovery version 11.3.

Prerequisites
- Two types of credentials must be configured in order for the pattern to work properly:
- All ACI nodes must first be discovered using SNMP. Once that is done, the Cisco_ACI pattern is able to trigger and build the physical topology.

How it works
The pattern triggers on the Cisco APIC controller Network Device node, then runs REST calls against the /api/node/class/topSystem class in the APIC API to get information about the ACI nodes (APICs, leafs and spines). Using this information, the pattern does the following:
- Creates two types of clusters:
  - APIC Cluster with all the APICs
  - Pod Cluster(s) with all the leaf and spine switches in one ACI Pod.
- Creates management relationship(s) between the APIC Cluster and the Pod Cluster(s).
- Adds ACI-related attributes to the Network Device nodes in the ACI Fabric (APICs, leafs and spines):
  - ACI Fabric Name
  - ACI Pod Id
  - ACI Node Role
  - ACI Node Id
  - ACI Node State
  - ACI Tep Pool
- Finally, the pattern adds IP addresses to the ACI devices (APICs, leafs and spines).

Below are some examples of the data in Discovery.
APIC cluster
Pod Cluster
Model

CMDB mapping
NetworkDevice nodes are mapped into the BMC_ComputerSystem class. APIC and Pod Clusters are mapped to the BMC_Cluster class. Management relationships between clusters are mapped as BMC_Dependency. For more information about the CDM mapping please visit this page:
Below is an example of the Cisco ACI data in the CMDB.

Known limitations
As mentioned before, only the infrastructure (topology) part is currently supported. The logical (application) part is not supported yet (hence no tenants, application profiles, EPGs, etc.). We encourage our customers to provide us with the requirements for the logical part.
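As an illustration of the REST call mentioned under "How it works", the sketch below queries the topSystem class directly. The host, credentials, the aaaLogin authentication step and the imdata/attribute names reflect the usual APIC REST conventions and are assumptions; only the /api/node/class/topSystem class name comes from the text above.

```python
import requests

APIC = "https://apic.example.com"   # placeholder APIC address
session = requests.Session()

# Assumed authentication step: the APIC REST API normally requires aaaLogin first.
session.post(f"{APIC}/api/aaaLogin.json",
             json={"aaaUser": {"attributes": {"name": "admin", "pwd": "secret"}}},
             verify=False)

# Query the class the Cisco_ACI pattern reads to enumerate fabric nodes.
resp = session.get(f"{APIC}/api/node/class/topSystem.json", verify=False)
for obj in resp.json().get("imdata", []):
    attrs = obj["topSystem"]["attributes"]
    print(attrs.get("name"), attrs.get("role"), attrs.get("podId"), attrs.get("state"))
```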
https://docs.bmc.com/docs/discovery/contentref/cisco-aci-997873545.html
2021-07-23T21:06:38
CC-MAIN-2021-31
1627046150067.51
[]
docs.bmc.com
Overview Implementation Behind The Scenes With FastComments it's possible to invoke an API endpoint whenever a comment gets added, updated, or removed from our system. We accomplish this with asynchronous webhooks over HTTP/HTTPS. What are Webhooks A Webhook is a mechanism, or an integration, between two systems where the "producer" (FastComments) fires an event that the "consumer" (You) consumes via an API call. Supported Events & Resources FastComments supports webhooks for the Comment resource only. We support webhooks for comment creation, removal, and on update. Each of these are considered separate events in our system and as such have different semantics and structures for the webhook events. Setup First, navigate to the Webhooks admin. This is accessible via Manage Data -> Webhooks. The configuration page appears as follows: In this page you can specify endpoints for each type of comment event. For each type of event, be sure to click Send Test Payload to ensure you've set up. This is to ensure that you properly authenticate the request. Data Structures The only structure sent via webhooks is the WebhookComment object, outlined in TypeScript below. The WebhookComment. HTTP Methods Used. Security & API Tokens In the request header we'll pass your API Secret in the parameter called "token". If you do not properly check this parameter, your integration will not be marked Verified. This is a safeguard to ensure any integrations with FastComments are secure.. In Conclusion This concludes our Webhooks documentation. We hope you find the FastComments Webhook integration easy to understand and fast to set up. If you feel you have identified any gaps in our documentation, let us know below.
https://docs.fastcomments.com/guide-webhooks.html
2021-07-23T23:26:05
CC-MAIN-2021-31
1627046150067.51
[array(['/images/menu.png', 'Open Menu Menu Icon'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) array(['images/link-internal.png', 'Direct Link Internal Link'], dtype=object) ]
docs.fastcomments.com
PipelineDeals Pipeline is a web-based CRM that helps you track and organize all the deals in your sales pipeline. With Pipeline, you can organize the companies, people & deals - set goals, and easily coordinate with your team to ensure they're pushed through the pipeline. Once installed, the Pipeline app for Help Scout will automatically sync data between your Pipeline and Help Scout accounts, helping to build out and manage your business's customer base. The app also displays a customer's current pipeline value, overall lifetime value and status - as well as up to three deals & their key data points: company name, deal value, pipeline stage and expected close date. Check out the snapshot below of how it will appear below in Help Scout: Activation instructions - 1 - Log in to Pipeline and click on the little cog icon in the upper right-hand corner of the page. From the dropdown menu, select the Account Settings option. - 2 From the sidebar on the Account Settings page, select the Pipeline API option at the very bottom. Enter an email address to enable API Access & then copy the key and head over to Help Scout. - 3 From Help Scout, install the PipelineDeals app. Just paste your API key in to the corresponding field, select which mailboxes you'd to connect, then click on the blue Save button.
https://docs.helpscout.com/article/325-pipelinedeals
2021-07-23T21:10:32
CC-MAIN-2021-31
1627046150067.51
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/524448053e3e9bd67a3dc68a/images/57f5da7c90336079225d2ad3/file-39M5QjvLoj.png', None], dtype=object) ]
docs.helpscout.com
python-can¶ The python-can library provides controller area network support for Python, providing common abstractions to different hardware devices, and a suite of utilities for sending and receiving messages on a CAN bus. python-can runs anywhere Python runs; from high-powered computers with commercial CAN-to-USB devices right down to low-powered devices running Linux such as a BeagleBone or RaspberryPi. More concretely, some example uses of the library: - Passively logging what occurs on a CAN bus. For example, monitoring a commercial vehicle using its OBD-II port. - Testing of hardware that interacts via CAN. Modules found in modern cars, motorcycles, boats, and even wheelchairs have had components tested from Python using this library. - Prototyping new hardware modules or software algorithms in-the-loop. Easily interact with an existing bus. - Creating virtual modules to prototype CAN bus communication. Brief example of the library in action: connecting to a CAN bus, creating and sending a message (see the sketch below): Contents: - Installation - Configuration - Library API - CAN Interface Modules - Scripts - Developer’s Overview - History and Roadmap Known Bugs¶ See the project bug tracker on github. Patches and pull requests very welcome! Documentation generated Aug 24, 2017
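A minimal sketch of such an example is below. The socketcan interface and vcan0 channel are placeholders for whatever hardware interface you actually use, and the keyword names follow the python-can API of roughly this documentation's vintage.

```python
#!/usr/bin/env python
"""Connect to a CAN bus and send a single message (sketch)."""
import can

def send_one():
    # Placeholder interface/channel; adjust to your hardware (e.g. 'pcan', 'kvaser').
    bus = can.interface.Bus(bustype="socketcan", channel="vcan0")
    msg = can.Message(arbitration_id=0x1A,
                      data=[0, 25, 0, 1, 3, 1, 4, 1],
                      extended_id=False)
    try:
        bus.send(msg)
        print("Message sent on {}".format(bus.channel_info))
    except can.CanError:
        print("Message NOT sent")

if __name__ == "__main__":
    send_one()
```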
http://python-can.readthedocs.io/en/latest/
2017-09-19T18:49:04
CC-MAIN-2017-39
1505818685993.12
[]
python-can.readthedocs.io
filtering content (XML/XHTML/HTML etc) in a manner that can be optimised for a given execution context e.g. in a Servlet Container, it can be used to optimise the Servlet Response based on the requesting browsers profile (e.g. pda, landscape, xforms). This provides a Java alternative to XSLT (see Chiba Integration). -!
http://docs.codehaus.org/pages/viewpage.action?pageId=48978
2015-02-01T07:22:47
CC-MAIN-2015-06
1422115855897.0
[]
docs.codehaus.org
JBoss.org Community Documentation The Programmers Guide contains information on how to use BlackTie. This document provides a detailed look at the design and operation of BlackTie. It describes the architecture and the interaction of components within this architecture. This guide is most relevant to engineers who are responsible for developing with BlackTie.
http://docs.jboss.org/blacktie/docs/3.0.0.Final/userguide/ch01.html
2015-02-01T07:34:29
CC-MAIN-2015-06
1422115855897.0
[]
docs.jboss.org
@Immutable
public interface OptimizerRule

Interface that defines an Optimizer rule.

PlanNode execute(QueryContext context, PlanNode plan, LinkedList<OptimizerRule> ruleStack)
Parameters:
context - the context in which the query is being optimized; never null
plan - the plan to be optimized; never null
ruleStack - the stack of rules that will be run after this rule; never null
http://docs.jboss.org/modeshape/2.8.0.Final/api-full/org/modeshape/graph/query/optimize/OptimizerRule.html
2015-02-01T07:57:22
CC-MAIN-2015-06
1422115855897.0
[]
docs.jboss.org
IP. For the next release, Renderer has been affected by two changes: There are also tips and tricks that should be noted: If you are a volunteer we could use help improving the. The following changes are planned - details as we figure it out (volunteer!) There are a couple of bug reports concerning Labeling; it looks like they will only be addressed by changing the LabelCache strategy object. This is a very low-level change so most casual users will not be affected.
http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=77210
2015-02-01T07:31:47
CC-MAIN-2015-06
1422115855897.0
[]
docs.codehaus.org
Page Contents Product Index Sometimes, a dagger or a rapier doesn't cut it, a mace is too blunt, and axes just don't hit enough of your enemies at the same time. If that's the case, then pick up one of these weapons and CLEAVE them! Whether it's the broad-bladed straight scythe, the damascened blade scythe, or the rather monstrous massive Great Cleaver, it's sure to leave a swath of destruction in your wake! Visit our site for further technical support questions or concerns. Thank you and enjoy your new products!
http://docs.daz3d.com/doku.php/public/read_me/index/15046/start
2015-02-01T07:06:13
CC-MAIN-2015-06
1422115855897.0
[]
docs.daz3d.com
Using TLS in Twisted¶ Overview¶ This. TLS echo server and client¶). TLS echo server¶ #!. TLS echo client¶ #!. Connecting To Public Servers¶. Using startTLS¶ . startTLS server¶) startTLS client¶.) Client with certificates¶. TLS Protocol Options¶)..
http://twisted.readthedocs.org/en/latest/core/howto/ssl.html
2015-02-01T07:06:22
CC-MAIN-2015-06
1422115855897.0
[]
twisted.readthedocs.org
The Report Label is the main control used to display data inside reports. This component is the extended version of the Label component with all its functionality, plus report specific properties. To place a Report Label within a report section, click the Label component in the Toolbox. Report labels are similar to standard label controls; however, they contain additional properties that are report-specific. The following report label properties are available:
http://docs.codecharge.com/studio31/html/UserGuide/Controls/ReportLabel/Overview.html
2021-06-12T19:48:21
CC-MAIN-2021-25
1623487586390.4
[]
docs.codecharge.com
For System Administrators¶ Starting from Boundless Desktop 1.1, there are available tools to help system administrators deploy and manage Boundless Desktop in their organization machines. Currently, the tools are solely focused on QGIS, the main component of Boundless Desktop, but it’s planned to provide similar tools for the other applications in the future.
https://docs.boundlessgeo.com/desktop/latest/system_admins/index.html
2021-06-12T20:44:53
CC-MAIN-2021-25
1623487586390.4
[]
docs.boundlessgeo.com
Keyword Suggestions service allows getting keyword suggestions from Google, YouTube, Bing, Amazon, eBay and Instagram. With Keyword Tool API you can get keyword suggestions from the different search engines The data provided by Keyword Suggestions service replicates the results provided in Keyword Tool web interface by Keyword Tool Pro Plus plan. You can get keywords from the desired search vertical of the corresponding search engine by specifying the "category" request parameter. With Keyword Tool API you can get keyword suggestions from the different search verticals of the corresponding search engine Keywords can be localized to the particular country and (or) language by specifying "country" and "language" API parameters. With Keyword Tool API you can get keyword suggestions localized to the different country and language With Keyword Tool API you can get keywords that are provided under Keyword Suggestions, Related Keywords, Questions and Prepositions tab in Keyword Tool web interface. To get the keyword suggestions from the desired tab, please specify the corresponding parameter "type" when making the API requests. With Keyword Tool API you can get keywords that correspond to different tabs in Keyword Tool web interface To get data using the Keyword Suggestions service you will need to make API requests to the Keyword Suggestions Endpoint You can find the detailed API reference, API request syntax and the list of supported parameters on this page: Updated 2 years ago
https://docs.keywordtool.io/docs/keyword-suggestions-service
2021-06-12T20:21:41
CC-MAIN-2021-25
1623487586390.4
[array(['https://files.readme.io/ee5a8e4-search-engines-web-ui.png', 'search-engines-web-ui.png'], dtype=object) array(['https://files.readme.io/ee5a8e4-search-engines-web-ui.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/5ac087e-category-ui-screenshot.png', 'category-ui-screenshot.png'], dtype=object) array(['https://files.readme.io/5ac087e-category-ui-screenshot.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/98a7d86-country-language-ui-screenshot.png', 'country-language-ui-screenshot.png'], dtype=object) array(['https://files.readme.io/98a7d86-country-language-ui-screenshot.png', 'Click to close...'], dtype=object) array(['https://files.readme.io/18d6a99-Screen_Shot_2018-08-08_at_09.54.21.png', 'Screen Shot 2018-08-08 at 09.54.21.png'], dtype=object) array(['https://files.readme.io/18d6a99-Screen_Shot_2018-08-08_at_09.54.21.png', 'Click to close...'], dtype=object) ]
docs.keywordtool.io
UIElement. Preview Stylus Move Event Definition Important Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here. Occurs when the stylus moves while over the element. The stylus must move while being detected by the digitizer to raise this event, otherwise, PreviewStylusInAirMove is raised instead. public: virtual event System::Windows::Input::StylusEventHandler ^ PreviewStylusMove; public event System.Windows.Input.StylusEventHandler PreviewStylusMove; member this.PreviewStylusMove : System.Windows.Input.StylusEventHandler Public Custom Event PreviewStylusMove As StylusEventHandler Event Type Implements RemarksMove attached event and receive the same event data instance. Touch, mouse, and stylus input exist in a particular relationship. For more information, see Input Overview. Routed Event Information The corresponding bubbling event is StylusMove. Override OnPreviewStylusMove to implement class handling for this event in derived classes.
https://docs.microsoft.com/en-us/dotnet/api/system.windows.uielement.previewstylusmove?view=netframework-4.8
2021-06-12T21:45:58
CC-MAIN-2021-25
1623487586390.4
[]
docs.microsoft.com
The OpenStack project is an open source cloud computing platform that supports all types of cloud environments. The project aims for simple implementation, massive scalability, and a rich set of features. Cloud computing experts from around the world contribute to the project. OpenStack provides an Infrastructure-as-a-Service (IaaS) solution through a variety of complementary NTP. Optionally, the controller node runs portions of the Block Storage, Object Storage, Orchestration, and Telemetry services. The controller node requires a minimum of two network interfaces. The compute node runs the hypervisor portion of Compute that operates instances. By default, Compute uses the. The provider networks option deploys the OpenStack Networking service in the simplest way possible with primarily layer-2 (bridging/switching) services and VLAN segmentation of networks. Essentially, it bridges virtual networks to physical networks and relies on physical network infrastructure for layer-3 (routing) services. Additionally, a DHCP service provides IP address information to instances. Warning This option lacks support for self-service (private) networks, layer-3 (routing) services, and advanced services such as LBaaS and FWaaS. Consider the self-service networks option below.
https://docs.openstack.org/ocata/install-guide-rdo/overview.html
2021-06-12T21:20:14
CC-MAIN-2021-25
1623487586390.4
[]
docs.openstack.org
Introduction to BUMO What is BUMOWhat is BUMO BUMO is focusing on the next generation platform of public Blockchain infrastructure and building a future ecosystem of Ubiquitous Trust Network. Therefore, value will be transferred freely on Blockchain just as information is transferred freely on the Internet today. Lots of decentralized applications, such as digital assets and Internet of things, can be developed and deployed rapidly on BUMO network. FeaturesFeatures - Flexible multi-asset and multi-operator structure of accounts and transactions - An improved two-stage and two-layer consensus protocol based on DPoS+BFT, called “Firework” - A novel two-layer polymorphic architecture of multi-child Blockchain will be supported, called "Orbits" - An Inter-Chain of routing value among Blockchains will be supported, called "Canal" - Turing-perfect smart contracts, in support of programming languages of Javascript and WASM - Bunch of signature algorithms are supported, such as ED25519 and SM2 - Built-in joint accounts to control multi signatures - High performance of transaction process with thousands of transactions per second (TPS) - Cross-platform support, such as Linux, MacOS, Windows and Android ArchitectureArchitecture
http://docs.bumo.io/docs/introduction_to_bumo/
2021-06-12T20:36:10
CC-MAIN-2021-25
1623487586390.4
[array(['/docs/assets/arch.png', None], dtype=object)]
docs.bumo.io
Crate vint64. vint64: simple and efficient variable-length integer encoding. The crate exposes a decoded_len helper and a signed module for encoding signed integers; for example, signed::encode(-42) produces the single byte 0xa7, so assert_eq!(signed.as_ref(), &[0xa7]) holds.
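A runnable version of that fragment, assuming the signed::encode and decoded_len entry points referenced above behave as in the crate's published docs (signed values get a zigzag-style mapping before the length-prefixed byte form):

```rust
use vint64::signed;

fn main() {
    // -42 maps to 83 under the signed mapping, which fits in one byte: 0xa7.
    let signed = signed::encode(-42);
    assert_eq!(signed.as_ref(), &[0xa7]);

    // The first byte alone is enough to know the total encoded length.
    assert_eq!(vint64::decoded_len(signed.as_ref()[0]), 1);
}
```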
https://docs.rs/vint64/1.0.1/vint64/index.html
2021-06-12T21:18:05
CC-MAIN-2021-25
1623487586390.4
[]
docs.rs
Tutorials¶ This section is a repository of links to instructional videos about sponge plugin development, prepared by trusted developers. We hope these help guide you to great heights in sponge plugin development. Intellij IDEA¶ Long-time Sponge Contributor Sibomots has prepared a series of instructional videos using Intellij IDEA. More information and discussion on these topics can be found on the Sponge Forums. We hope there are many more to come!
https://docs.spongepowered.org/6.0.0/no/plugin/tutorials.html
2021-06-12T20:22:48
CC-MAIN-2021-25
1623487586390.4
[]
docs.spongepowered.org
Flash loans are a developer-oriented feature that allow borrowing of any available amount of assets without collateral. A smart contract is required to acquire the flash loan and repay the amount and interest within the same transaction. Borrow Fee is 0.06% Distribution: · 70% to the liquidity providers · 20% to the governance pool · 10% to reserve pool
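A small worked example of the fee split, assuming the 0.06% fee is charged on the borrowed principal:

```python
principal = 1_000_000            # amount flash-borrowed, in token units
fee = principal * 0.0006         # 0.06% borrow fee -> 600 tokens

to_liquidity_providers = fee * 0.70   # 420 tokens
to_governance_pool     = fee * 0.20   # 120 tokens
to_reserve_pool        = fee * 0.10   #  60 tokens

# The three shares account for the whole fee.
assert abs(fee - (to_liquidity_providers + to_governance_pool + to_reserve_pool)) < 1e-9
```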
https://mcl-docs.multiplier.finance/multi-chain-lend/lite-paper/key-features/flash-loan
2021-06-12T20:56:42
CC-MAIN-2021-25
1623487586390.4
[]
mcl-docs.multiplier.finance
You are viewing documentation for Kubernetes version: v1.19 Kubernetes v1.19 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. Dancing at the Lip of a Volcano: The Kubernetes Security Process - Explained Editor's note: Today’s post is by Jess Frazelle of Google and Brandon Philips of CoreOS about the Kubernetes security disclosures and response policy. Software running on servers underpins ever growing amounts of the world's commerce, communications, and physical infrastructure. And nearly all of these systems are connected to the internet; which means vital security updates must be applied rapidly. As software developers and IT professionals, we often find ourselves dancing on the edge of a volcano: we may either fall into magma induced oblivion from a security vulnerability exploited before we can fix it, or we may slide off the side of the mountain because of an inadequate process to address security vulnerabilities. The Kubernetes community believes that we can help teams restore their footing on this volcano with a foundation built on Kubernetes. And the bedrock of this foundation requires a process for quickly acknowledging, patching, and releasing security updates to an ever growing community of Kubernetes users. With over 1,200 contributors and over a million lines of code, each release of Kubernetes is a massive undertaking staffed by brave volunteer release managers. These normal releases are fully transparent and the process happens in public. However, security releases must be handled differently to keep potential attackers in the dark until a fix is made available to users. We drew inspiration from other open source projects in order to create the Kubernetes security release process. Unlike a regularly scheduled release, a security release must be delivered on an accelerated schedule, and we created the Product Security Team to handle this process. This team quickly selects a lead to coordinate work and manage communication with the persons that disclosed the vulnerability and the Kubernetes community. The security release process also documents ways to measure vulnerability severity using the Common Vulnerability Scoring System (CVSS) Version 3.0 Calculator. This calculation helps inform decisions on release cadence in the face of holidays or limited developer bandwidth. By making severity criteria transparent we are able to better set expectations and hit critical timelines during an incident where we strive to: - Respond to the person or team who reported the vulnerability and staff a development team responsible for a fix within 24 hours - Disclose a forthcoming fix to users within 7 days of disclosure - Provide advance notice to vendors within 14 days of disclosure - Release a fix within 21 days of disclosure As we continue to harden Kubernetes, the security release process will help ensure that Kubernetes remains a secure platform for internet scale computing. If you are interested in learning more about the security release process please watch the presentation from KubeCon Europe 2017 on YouTube and follow along with the slides. If you are interested in learning more about authentication and authorization in Kubernetes, along with the Kubernetes cluster security model, consider joining Kubernetes SIG Auth. 
We also hope to see you at security related presentations and panels at the next Kubernetes community event: CoreOS Fest 2017 in San Francisco on May 31 and June 1. As a thank you to the Kubernetes community, a special 25 percent discount to CoreOS Fest is available using k8s25code or via this special 25 percent off link to register today for CoreOS Fest 2017. --Brandon Philips of CoreOS and Jess Frazelle of Google - Post questions (or answer questions) on Stack Overflow - Join the community portal for advocates on K8sPort - Connect with the community on Slack - Get involved with the Kubernetes project on GitHub
https://v1-19.docs.kubernetes.io/blog/2017/05/kubernetes-security-process-explained/
2021-06-12T19:53:56
CC-MAIN-2021-25
1623487586390.4
[]
v1-19.docs.kubernetes.io
ACP Quick Start This section provides a high-level overview to familiarize you with the basics of ACP. Quick Start Pages by Function Reference & Preparation - Before you Begin - Provides a list of items to consider when first using ACP, as well as reference links for supported connector services and their API documentation. Page Overviews - UI Overview - Provides a high-level overview of all major pages in ACP, navigation, and basic tasks. Tutorials - Quick-Start Tutorial 1 - A step-by-step basic introduction to creating a workflow with a single step. This tutorial covers: - Creating a new Workflow - Configuring a Connector - Adding a Step - Basic Step Configuration for Input / Output - Basic Execute / Results View - Basic Debug Mode - Quick-Start Tutorial 2 - A step-by-step intermediate introduction to creating workflow with advanced options and conditions. Tutorial 2 is a continuation of Tutorial 1. This tutorial covers: - Editing Step Inputs - Executing a Workflow with User-Given Variables - Creating / Linking Multiple Steps - Basic / Intermediate Conditions - Formatting Output - Verifying Condition Routing - Reviewing Debug Mode If you are already familiar with the basics of ACP, you may want to review the ACP User Guide instead.
https://docs.6connect.com/pages/diffpagesbyversion.action?pageId=19605958&selectedPageVersions=15&selectedPageVersions=14
2021-06-12T21:23:29
CC-MAIN-2021-25
1623487586390.4
[]
docs.6connect.com
ATmega4808 ID for board option in “platformio.ini” (Project Configuration File): [env:ATmega4808] platform = atmelmegaavr board = ATmega4808 You can override default ATmega4808 settings per build environment using board_*** option, where *** is a JSON object path from board manifest ATmega4808.json. For example, board_build.mcu, board_build.f_cpu, etc. [env:ATmega4808] platform = atmelmegaavr board = ATmega4808 ; change microcontroller board_build.mcu = atmega4808 ; change MCU frequency board_build.f_cpu = 16000000L
https://docs.platformio.org/en/stable/boards/atmelmegaavr/ATmega4808.html
2021-06-12T20:43:14
CC-MAIN-2021-25
1623487586390.4
[]
docs.platformio.org
Installing a Text Editor¶ Articles on SpongeDocs are saved as text files in the reStructuredText markup language. Although your operating system’s default text editor is likely sufficient for editing these files, installing a different text editor may prove to be useful. Downloads¶
https://docs.spongepowered.org/5.1.0/no/preparing/text.html
2021-06-12T21:17:04
CC-MAIN-2021-25
1623487586390.4
[]
docs.spongepowered.org
# GitLab This document describes the use of GitLab as an identity provider with Pomerium. # Setting up GitLab OAuth2 for your Application Log in to your GitLab account or create one here (opens new window). Go to the user settings which can be found in the user profile to create an application (opens new window) like below: - Add a new application by setting the following parameters: Your Client ID and Client Secret will be displayed like below: - Set Client IDand Client Secretin Pomerium's settings. # Service Account To use allowed_groups in a policy an idp_service_account needs to be set in the Pomerium configuration. The service account for Gitlab uses a personal access token generated at: gitlab.com/profile/personal_access_tokens (opens new window) with read_api access: The format of the idp_service_account for Gitlab is a base64-encoded JSON document: { "private_token": "..." } # Pomerium Configuration When a user first uses pomerium to login, they will be presented with an authorization screen similar to the following depending on the scope parameters setup: Please be aware that Group ID (opens new window) will be used to affirm group(s) a user belongs to. # GitLab.com Your configuration should look like the following example: authenticate_service_url: idp_provider: "gitlab" idp_client_id: "REDACTED" // gitlab application ID idp_client_secret: "REDACTED" // gitlab application secret idp_service_account: "REDACTED" // gitlab service account, base64 json # Self-Hosted GitLab Self-hosted CE/EE instances should be configured as a generic OpenID Connect provider: idp_provider: oidc idp_client_id: "REACTED" idp_client_secret: "REDACTED" idp_scopes: openid,email // Intersects with scopes idp_provider_url: // Base URL of GitLab instance idp_service_account: "REDACTED" // gitlab service account, base64 json
https://0-13-0.docs.pomerium.io/docs/identity-providers/gitlab
2021-06-12T20:10:27
CC-MAIN-2021-25
1623487586390.4
[array(['https://d33wubrfki0l68.cloudfront.net/a6f5c021af2b263b8ee04a3ed40e2fbbe88b3774/c9d74/assets/img/gitlab-create-applications.8e3fb6a6.png', 'create an application'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/862ce2981e215e673f75a738cc51b7dda049a9b1/4daf3/assets/img/gitlab-credentials.4f750176.png', 'Gitlab OAuth Client ID and Secret'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/f5e3cb8216549f695f824bf7ce24b97c96d8eb2f/177cc/assets/img/gitlab-personal-access-token.7978a703.png', 'Gitlab Personal Access Token'], dtype=object) array(['https://d33wubrfki0l68.cloudfront.net/dc6f4bbd534987f7d12041df9d33d522db1f17c7/19389/assets/img/gitlab-verify-access.946a5aa9.png', 'gitlab access authorization screen'], dtype=object) ]
0-13-0.docs.pomerium.io
Additional Policies The Council will have a regular face-to-face in November each year. The region will be dependent on the location of Council members at the time. We will also conduct in-person meetings at Flock and DevConf.CZ for Council members in attendance. Fedora Magazine is the venue for user-targeted communication and the Community Blog is the venue for contributor-targeted communication. The Council supports greater efficiency in the infrastructure to allow more to be done, even when this means that we move away from self-hosted or self-maintained infrastructure. The Fedora Project wants to advance free and open source software and as a pragmatic matter we recognize that some infrastructure needs may be best served by using closed source or non-free tools today. Therefore the Council is willing to accept closed source or non-free tools in Fedora’s infrastructure where free and open source tools are not viable or not available. Because we value participation over strict bureaucracy ('Friends' foundation!) candidates for Fedora elections may be accepted within a reasonable grace period after the deadline. However, be aware that this may mean missing out on Fedora Magazine interviews, town halls, and other opportunities for campaigning. The Council has no objections to events being held in the local language or in English, and separate events can be held if there are multiple audiences. We encourage event organizers to specify the language(s) for their event. Unspent budget allocated to the $150 event program will be pulled into the Council budget at the end of each fiscal quarter. The Fedora Council may choose to withdraw Fedora’s support from events or other activities that involve fiscal sponsorship or use of Fedora trademarks when it determines that participation is not in the interests of the Fedora Project. Decisions to withdraw support will be published in venues normally used for Council decisions. Deliberation and reasoning for the decision should be public to the extent possible. The Council will engage with the committee/group/team that is involved with the event in question to ensure their input is considered.
https://docs.fedoraproject.org/ro/council/policies/
2021-06-12T21:38:18
CC-MAIN-2021-25
1623487586390.4
[]
docs.fedoraproject.org
Multi-lingual chatbot. Create flows only once, sync changes, approve translations and you're done Cover every language with just a single bot. Building and maintaining a separate chatbot for each language isn’t efficient. Create the bot once and sync any change across languages Sync changes Easily design flows in a primary language and we'll do an initial translation for you Auto translate Use our advanced AI driven machine translation to generate any additional language Review & Approve Collaborate with translators and maintain different languages with just a single project Multiple regions. Not just language defines what to reply, so does the region Localized Tweak the experience for different locales and add specific regional content Language detection We automatically recognize the language of inbound messages and reply accordingly Restrict channels Limit messaging or voice channels to specific languages and or regions Prototype, test, and collaborate. We’ve made the process of testing out ideas painless and collaborate with translators a breeze Keep it flexible Stay flexible, and iterate the design on a primary language. Sync your changes to any additional language Work together Add translators to the team and allow them to translate and approve changes Full control Updating content for one language wont break any other languages
https://docs.flow.ai/multilingual
2021-06-12T21:07:29
CC-MAIN-2021-25
1623487586390.4
[array(['/assets/images/v3/multi-lang/multi-region.png', 'Multiple regions Multiple regions'], dtype=object) array(['/assets/images/v3/features/test-chatbots.png', 'Prototype, test, and collaborate Prototype, test, and collaborate'], dtype=object) ]
docs.flow.ai
Ore Documentation¶ Our custom built plugin hosting solution provides. Ore is currently still in beta and has only recently been introduced to a production environment. If you find an issue with Ore that you believe is a bug, please take the time to report it to our issue tracker. If you need help using Ore, create a new topic on our Ore Support Forum. If you’d like to submit a plugin to Ore, please read and follow the Ore plugin submission guidelines linked below.
https://docs.spongepowered.org/6.0.0/no/ore/index.html
2021-06-12T20:41:40
CC-MAIN-2021-25
1623487586390.4
[]
docs.spongepowered.org
Introduction This sample demonstrates the functionality of static and dynamic registry resources and the XSLT Mediator. It sends a message from a sample client to a back-end service through the ESB and uses the XSLT Mediator to perform transformations. The XSLT transformations are specified as registry resources. Prerequisites For a list of prerequisites, see the Prerequisites section in ESB Samples Setup. Building the Sample 1. Start the ESB with sample 8 configuration using the instructions given in Starting Sample ESB Configurations. 2. A message should appear in the command or text Linux console stating the server started successfully. 3. The synapse configuration in the ESB used for message mediation in this sample is provided in <ESB_HOME>/repository/samples/ synapse_sample_8.xml as shown below: "/> <in> <!-- transform the custom quote request into a standard quote requst expected by the service --> <xslt key="xslt-key-req"/> </in> <out> <!-- transform the standard response back into the custom format the client expects --> <!-- the key is looked up in the remote registry and loaded as a 'dynamic' registry resource --> <xslt key="transform/transform_back.xslt"/> </out> <send/> </definitions> 4. Deploy the back-end service 'SimpleStockQuoteService' and start the Axis2 server using the instructions given in section Starting Sample Back-End Services. 5. Now you have a running ESB instance and a back-end service deployed. In the next section, we will send a message to the back-end service through the ESB using a sample client. Executing the Sample According to higher preference, it will reload the meta information about the resource and reload its cached copy, if necessary,. 1. The sample client used here is 'Stock Quote Client' which can operate in several modes. For instructions on this sample client and its operation modes, refer to Stock Quote Client. Run the following ant command from <ESB_HOME>/samples/axis2Client directory and analyze the the ESB debug log output on the ESB console. ant stockquote -Daddurl= -Dtrpurl= -Dmode=customquote 2. The incoming message is now. The response from the SimpleStockQuoteService is converted back into the custom format as expected by the client during the out message processing. During the response processing, the SimpleURLRegistry fetches the resource. 3. Run the client again immediately (within 15 seconds of the first request). You will not see the resource being reloaded by the registry as the cached value would be still valid. 4. Leave the system idle for more than 15 seconds and retry the same request. The registry detects that the cached resource has expired 5. Now edit the <ESB_HOME>/repository/samples/resources/transform/transform_back.xslt file and add a blank line at the end and run the client again using ant. 6. If the cache is expired, the resource would be re-fetched from its URL by the registry. This can be seen by the following debug log messages. [HttpClientWorker-1] DEBUG AbstractRegistry - Cached object has expired for key : transform/transform_back.xslt [HttpClientWorker-1] DEBUG SimpleURLRegistry - Perform RegistryEntry lookup for key : transform/transform_back.xslt The SimpleURLRegistry allows resource to be cached and updates detected so that the changes can be reloaded without restarting the ESB instance.
https://docs.wso2.com/pages/viewpage.action?pageId=31885894
2021-06-12T20:26:48
CC-MAIN-2021-25
1623487586390.4
[]
docs.wso2.com
You are viewing documentation for Kubernetes version: v1.20 Kubernetes v1.20 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. Troubleshooting kubeadm.. kubeadm blocks waiting for control plane during installation add-on. - If you see Pods in the RunContainerError, CrashLoopBackOffor Errorstate after deploying the network add-on and nothing happens to coredns(or kube-dns), it's very likely that the Pod Network add-on (or kube-d certificate errors --decodecommandflag to flannel so that the second interface is chosen. Non-public IP used for containers. Warning: Disabling SELinux or setting allowPrivilegeEscalationto truecan compromise the security of your cluster. etcd pods restart continually node node.
https://v1-20.docs.kubernetes.io/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/
2021-06-12T20:49:36
CC-MAIN-2021-25
1623487586390.4
[]
v1-20.docs.kubernetes.io
# What is Pomerium

# Overview

Pomerium can be used to:
- provide a single-sign-on gateway to internal applications.
- enforce dynamic access policy based on context, identity, and device state.
- aggregate access logs and telemetry data.
https://0-13-0.docs.pomerium.io/docs/
2021-06-12T20:50:46
CC-MAIN-2021-25
1623487586390.4
[]
0-13-0.docs.pomerium.io
Azure Stream Analytics on IoT Edge
Azure Stream Analytics on IoT Edge lets you deploy control logic close to industrial operations and complement big data analytics done in the cloud. Azure Stream Analytics on IoT Edge runs within the Azure IoT Edge framework. Once the job is created in Stream Analytics, you can deploy and manage it using IoT Hub.
Common scenarios
This section describes the common scenarios for Stream Analytics on IoT Edge. The following diagram shows the flow of data between IoT devices and the Azure cloud.
Low-latency command and control
Manufacturing safety systems must respond to operational data with ultra-low latency. With Stream Analytics on IoT Edge, you can filter or aggregate the data that needs to be sent to the cloud.
Compliance
Regulatory compliance may require some data to be locally anonymized or aggregated before being sent to the cloud.
Edge jobs in Azure Stream Analytics
Stream Analytics Edge jobs run in containers deployed to Azure IoT Edge devices. Edge jobs are composed of two parts:
- A cloud part that is responsible for the job definition: users define inputs, output, query, and other settings, such as out of order events, in the cloud.
- A module running on your IoT devices. The module contains the Stream Analytics engine and receives the job definition from the cloud.
Stream Analytics uses IoT Hub to deploy edge jobs to device(s). For more information, see IoT Edge deployment.
Edge job limitations
The goal is to have parity between IoT Edge jobs and cloud jobs. Most SQL query language features are supported for both edge and cloud. However, the following features are not supported for edge jobs:
- User-defined functions (UDF) in JavaScript. UDF are available in C# for IoT Edge jobs (preview).
- User-defined aggregates (UDA).
- Azure ML functions.
- AVRO format for input/output. At this time, only CSV and JSON are supported.
- The following SQL operators:
  - PARTITION BY
  - GetMetadataPropertyValue
- Late arrival policy
Runtime and hardware requirements
To run Stream Analytics on IoT Edge, you need devices that can run Azure IoT Edge. Stream Analytics and Azure IoT Edge use Docker containers to provide a portable solution that runs on multiple host operating systems (Windows, Linux). Stream Analytics on IoT Edge is made available as Windows and Linux images, running on both x86-64 and ARM (Advanced RISC Machines) architectures.
Input and output
For each input and output stream defined in your Stream Analytics job, a corresponding endpoint is created on your deployed module. These endpoints can be used in the routes of your deployment.
Supported stream input types are:
- Edge Hub
- Event Hub
- IoT Hub
Supported stream output types are:
- Edge Hub
- SQL Database
- Event Hub
- Blob Storage/ADLS Gen2.
License and third-party notices
- Azure Stream Analytics on IoT Edge license.
- Third-party notice for Azure Stream Analytics on IoT Edge.
Azure Stream Analytics module image information This version information was last updated on 2020-09-21: Image: mcr.microsoft.com/azure-stream-analytics/azureiotedge:1.0.9-linux-amd64 - base image: mcr.microsoft.com/dotnet/core/runtime:2.1.13-alpine - platform: - architecture: amd64 - os: linux Image: mcr.microsoft.com/azure-stream-analytics/azureiotedge:1.0.9-linux-arm32v7 - base image: mcr.microsoft.com/dotnet/core/runtime:2.1.13-bionic-arm32v7 - platform: - architecture: arm - os: linux Image: mcr.microsoft.com/azure-stream-analytics/azureiotedge:1.0.9-linux-arm64 - base image: mcr.microsoft.com/dotnet/core/runtime:3.0-bionic-arm64v8 - platform: - architecture: arm64 - os: linux Get help For further assistance, try the Microsoft Q&A question page for Azure Stream Analytics.
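To make the edge-side filter/aggregate scenario concrete, an edge job uses the same SQL-like query language as a cloud job. The query below is a hypothetical sketch — the input/output aliases and field names are assumptions — showing a typical reduction performed on the device before data is forwarded to the cloud:

-- Average temperature per device over 30-second tumbling windows;
-- only the aggregated rows are sent to the cloud-bound output.
SELECT
    deviceId,
    AVG(temperature) AS avgTemperature,
    System.Timestamp() AS windowEnd
INTO
    cloudOutput                     -- maps to an Edge Hub / IoT Hub output endpoint
FROM
    sensorInput TIMESTAMP BY eventTime
GROUP BY
    deviceId,
    TumblingWindow(second, 30)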
https://docs.microsoft.com/en-us/azure/stream-analytics/stream-analytics-edge?WT.mc_id=thomasmaurer-blog-thmaure
2021-06-12T22:15:39
CC-MAIN-2021-25
1623487586390.4
[array(['media/stream-analytics-edge/edge-high-level-diagram.png', 'High level diagram of IoT Edge'], dtype=object) array(['media/stream-analytics-edge/stream-analytics-edge-job.png', 'Azure Stream Analytics Edge job'], dtype=object) ]
docs.microsoft.com
There are several ways to ask for help or get involved with Prebid. See below for more information.
For technical & feature requests or questions, it’s best to use the GitHub or Stack Overflow forums. Prebid is worked on full-time by engineering teams from AppNexus and Rubicon Project. There are also many publishers using and contributing to the project.
For questions about how an adapter works, it’s best to reach out to the company directly, or ask on GitHub. Each demand adapter should be maintained by the SSPs or exchange behind that adapter.
For Prebid news or general questions, we recommend the Ad Ops Slack Channel, Quora, or Twitter.
There are several Prebid.org members that will install & maintain Prebid on a publisher’s behalf. See the list of Managed Prebid Solutions.
Sometimes people have already gotten answers on the GitHub forums. See issues with the ‘question’ tag on the Prebid.js repo.
Submit a GitHub issue for Prebid.js, Prebid SDK iOS, Prebid Mobile Android or Prebid Server if:
For more information about how to contribute, see the Contribute section of the site.
If you ask questions on Stack Overflow, please use the following tags: prebid prebid.js
Join the Ad Ops Reddit Slack (specifically the #HeaderBidding channel) to connect with other publishers & developers using Prebid.
Post on Reddit (Please include the word “Prebid.js” for us to get notified) if:
Post on Quora (Please tag the question with “Prebid.js”) if:
https://docs.prebid.org/support/
2021-06-12T19:40:42
CC-MAIN-2021-25
1623487586390.4
[]
docs.prebid.org
Introduction
Answers to frequently asked questions. Our GitHub page can be found here.
There are five contracts included:
LsLMSR.sol - the contract to create a prediction market automated market maker using liquidity-sensitive LMSR.
ABDKMath64x64.sol - library enabling fixed-point arithmetic (written by ABDK Consulting)
ConditionalTokens.sol - contract to create conditional tokens (written by Gnosis)
CTHelpers.sol - helper contract for conditional tokens (written by Gnosis)
FakeDai.sol - sample ERC20 token to be used for testing
https://docs.just.win/docs/developer-guide/introduction/
2021-06-12T19:49:11
CC-MAIN-2021-25
1623487586390.4
[]
docs.just.win
<foreach>
Define a sequence of activities to be executed iteratively.
Syntax
<foreach property="P1" key="K1">
 ...
</foreach>
Details
Description
The <foreach> element defines a sequence of activities that are executed iteratively, once for every element within a specified collection property. For example:
<foreach property="callrequest.Location" key="context.K1">
 <assign property="total" value="context.total+context.prices.GetAt(context.K1)"/>
</foreach>
The <foreach> element can refer to the following variables and their properties. Do not use variables not listed here.
Note: There is more information about the business process execution context in the documentation of the <assign> element.
You can fine-tune loop execution by including <break> and <continue> elements within a <foreach> element. See the descriptions of these elements for details.
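As a small illustration of the <break> element mentioned above, the hypothetical fragment below stops iterating once a running total passes a threshold; the property names and threshold are assumptions, not part of the reference:

<foreach property="callrequest.Location" key="context.K1">
 <assign property="total" value="context.total+context.prices.GetAt(context.K1)"/>
 <if condition="context.total>1000">
  <true>
   <!-- leave the loop early once the threshold is reached -->
   <break/>
  </true>
 </if>
</foreach>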
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=EBPLR_FOREACH
2021-06-12T21:30:43
CC-MAIN-2021-25
1623487586390.4
[]
docs.intersystems.com
If you’re new to header bidding and Prebid, your implementation of Prebid for video demand will likely go much smoother if you first read the following:
See Prebid.js Video Overview for a general description and high-level overview of working with video demand in Prebid.js.
Start by reading AdOps Getting Started. This will give you a general overview of setting up your price buckets and line items on your ad server.
One thing to keep in mind as you set up your line items is price granularity. Be sure to communicate your price granularity requirements to your developers, as they might need to define custom configuration settings, depending on your granularity.
If you already have a Prebid integration for banner, you must create a separate set of ad server line items to enable Prebid to monetize instream video.
If you’re using Google Ad Manager as your ad server: Once you understand the general setup requirements, follow the instructions for video-specific line item setup in Setting Up Prebid Video in Google Ad Manager.
If you’re using another ad server: Follow the instructions for your ad server to create line items for instream video content. The primary points to keep in mind as you set up your line items include:
• Line items must target Prebid key-values.
• The VAST creative URL must be in the format {hb_cache_id}, where {hb_cache_id} is the value passed to the ad server from Prebid.js.
If you already have a Prebid integration for banner, you don’t need to do anything differently for outstream video. Outstream units use the same creative and line item targeting setup as banner creatives. See the Step by Step Guide to Google Ad Manager Setup for instructions. (If you’re not using Google Ad Manager as your ad server, follow your ad server’s guidelines for setting up your line items.)
Prebid Server
If you’ve decided to conduct your header bidding auctions server-side rather than on the client, you need to have a Prebid Server account or set up your own. See the Prebid Server Overview to begin your integration.
Your first step to implementing header bidding for video is to download Prebid.js. Before downloading, select the adapters you want to include. (You can add more adapters later.)
Setting up Prebid ad units is almost the same whether you’re working with instream video ads or outstream. The primary difference is specifying the type of video ad (instream or outstream), which you do in the mediaTypes.video.context field:
var adUnit1 = {
    code: 'videoAdUnit',
    mediaTypes: {
        video: {
            context: 'instream', // or 'outstream'
            playerSize: [640, 480]
        }
The mediaTypes.video.playerSize field is where you define the player size that will be passed to demand partners. If you’re using Prebid Server, you must also include the mediaTypes.video.mimes field, as this is required by OpenRTB.
mediaTypes: {
    video: {
        context: 'instream', // or 'outstream'
        playerSize: [640, 480],
        mimes: ['video/mp4']
In your ad unit you also need to define your list of bidders. For example, including AppNexus as a bidder would look something like this:
var adUnit1 = {
    ...
    bids: [{
        bidder: 'appnexus',
        params: {
            placementId: '123456789',
        }
    }]
The parameters differ depending on which bidder you’re including. For a list of parameters for each bidder, see Bidders’ Params.
For full details on creating instream video ad units, see Show Video Ads with Google Ad Manager – Create Ad Unit.
For full details on creating outstream video ad units, see Show Outstream Video Ads – Create Ad Unit.
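Putting the pieces above together, a complete instream ad unit with a single bidder looks roughly like the sketch below (the placement ID is simply the sample value used above, not a real placement):

var videoAdUnit = {
    code: 'videoAdUnit',
    mediaTypes: {
        video: {
            context: 'instream',        // use 'outstream' for out-of-player placements
            playerSize: [640, 480],
            mimes: ['video/mp4']        // required when using Prebid Server
        }
    },
    bids: [{
        bidder: 'appnexus',
        params: {
            placementId: '123456789'    // sample value from above
        }
    }]
};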
After you’ve defined your ad units, you can continue with the rest of your configuration. In most cases for video, the first step will be to define where the VAST XML coming back in the bids will be stored. Some bidders have you covered here – the VAST is stored on their servers. But many bidders don’t have their own server-side cache.
Video players expect that the response from the ad server will be a URL that points to somewhere on the internet that stores the video ad creative. This URL can’t point to the browser, so Prebid.js will send bid VAST XML out to a cache so it can be displayed if it wins in the ad server. Configuring the video cache is done with setConfig:
pbjs.setConfig({
    cache: {
        url: '' /* Or whatever your preferred video cache URL is */
    }
});
And this is where setups for instream and outstream diverge. Please follow one of these links:
Be sure to note the setting for price granularity. You might need to set up a custom price granularity. (See “Custom CPM Bucket Sizing” under Price Granularity.) Or, if you’re monetizing both banner and video inventory with Prebid, you might need to define format-specific price granularity settings through mediaTypePriceGranularity.
Prebid Server
If you’re using Prebid Server, you also need to configure your server-to-server bidder adapters. See Getting Started with Prebid Server for details and examples.
This section contains working examples of instream and outstream video ads for various players.
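For the format-specific price granularity mentioned above, the call is just another setConfig invocation. The snippet below is a sketch — the custom video buckets shown are illustrative values, not recommended settings:

pbjs.setConfig({
    priceGranularity: 'medium',          // default used for formats not overridden below
    mediaTypePriceGranularity: {
        video: {
            buckets: [{
                precision: 2,
                max: 20,
                increment: 0.10          // illustrative custom CPM bucket for video
            }]
        }
    }
});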
https://docs.prebid.org/prebid-video/video-getting-started.html
2021-06-12T20:47:34
CC-MAIN-2021-25
1623487586390.4
[]
docs.prebid.org
This deployment pattern is supported so that a high volume of data can be distributed among multiple SP instances instead of having it accumulate at a single point. Therefore, it is suitable for scenarios where the volume of data handled is too high to be managed in a single SP instance.
Creating a distributed Siddhi application
This section explains how to write distributed Siddhi applications by assigning executional elements to different execution groups.
Executional elements
A distributed Siddhi application can contain one or more of the following elements:
Annotations
The following annotations are used when writing a distributed Siddhi application.
Example
The following is a sample distributed Siddhi application.
@App:name('Energy-Alert-App')
@App:description('Energy consumption and anomaly detection')
@source(type = 'http', topic = 'device-power', @map(type = 'json'), @dist(parallel='2'))
define stream DevicePowerStream (type string, deviceID string, power int, roomID string);
@sink(type = 'email', to = '{{autorityContactEmail}}', username = 'john', address = '[email protected]', password = 'test', subject = 'High power consumption of {{deviceID}}', @map(type = 'text', @payload('Device ID: {{deviceID}} of room : {{roomID}} power is consuming {{finalPower}}kW/h. ')))
define stream AlertStream (deviceID string, roomID string, initialPower double, finalPower double, autorityContactEmail string);
@info(name = 'monitered-filter')@dist(execGroup='001')
from DevicePowerStream[type == 'monitored']
select deviceID, power, roomID
insert current events into MonitoredDevicesPowerStream;
@info(name = 'power-increase-pattern')@dist(parallel='2', execGroup='002')
partition with (deviceID of MonitoredDevicesPowerStream)
begin
    @info(name = 'avg-calculator')
    from MonitoredDevicesPowerStream#window.time(2 min)
    select deviceID, avg(power) as avgPower, roomID
    insert current events into #AvgPowerStream;
    @info(name = 'power-increase-detector')
    from every e1 = #AvgPowerStream -> e2 = #AvgPowerStream[(e1.avgPower + 5) <= avgPower] within 10 min
    select e1.deviceID as deviceID, e1.avgPower as initialPower, e2.avgPower as finalPower, e1.roomID
    insert current events into RisingPowerStream;
end;
@info(name = 'power-range-filter')@dist(parallel='2', execGroup='003')
from RisingPowerStream[finalPower > 100]
select deviceID, roomID, initialPower, finalPower, '[email protected]' as autorityContactEmail
insert current events into AlertStream;
@info(name = 'internal-filter')@dist(execGroup='004')
from DevicePowerStream[type == 'internal']
select deviceID, power
insert current events into InternaltDevicesPowerStream;
When the above Siddhi application is deployed, it creates a distributed processing chain as depicted in the image below. As annotated in the Siddhi application, two passthrough query groups are created to accept HTTP traffic and to send those events into the messaging layer. Other execution groups are created as per the given parallelism count. The execution group creation is summarized in the table below.
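As a reduced illustration of the @dist annotation used throughout the sample (the stream and group names here are hypothetical), a single query can be pinned to an execution group and given a parallelism hint like this:

define stream SensorStream (deviceId string, temperature double);

@info(name = 'threshold-filter')
@dist(parallel = '3', execGroup = 'filter-group')
from SensorStream[temperature > 75.0]
select deviceId, temperature
insert into HighTempStream;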
https://docs.wso2.com/display/SP440/Converting+to+a+Distributed+Streaming+Application
2021-06-12T19:54:25
CC-MAIN-2021-25
1623487586390.4
[]
docs.wso2.com
(WHM >> Home >> Service Configuration >> Apache Configuration)
This collection of features allows you to configure Apache. Apache functions as your web server software and handles HTTP requests.
- Global Configuration — This interface allows you to adjust several advanced features of the Apache web server.
- PHP and suEXEC Configuration — This interface allows you to change the configuration of Apache’s PHP handlers, change the PHP version, and enable or disable the suEXEC program.
- DirectoryIndex Priority — This interface allows you to specify filenames that Apache will recognize and display as index pages.
- Include Editor — This interface allows you to add other configuration files to your Apache configuration file (httpd.conf).
- Reserved IP Address Editor — This interface allows you to configure Apache to ignore HTTP requests on specific IP addresses. Use this feature if you wish to prevent the assignment of specific IP addresses to new accounts.
- Memory Usage Restrictions — This interface allows you to calculate and set a new Apache memory limit. Setting a process memory limit increases the stability of your server, but may reduce performance slightly. This limit applies to each Apache process, not to all Apache processes combined.
- Log Rotation — This interface allows you to specify which Apache log files cPanel’s cpanellogd daemon should manipulate.
- Piped Log Configuration — This interface allows you to pipe Apache access logs to a separate process, so that Apache does not need to restart every time that it processes the logs. This feature’s setting defaults to Enabled. To disable this feature, deselect the checkbox for Enable Piped Apache Logs and click Save.
Additional documentation
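As an example of the kind of directives that are typically added through the Include Editor, the fragment below is purely illustrative — the paths and header values are placeholders, not a cPanel-provided template:

# Example custom include — adjust module checks, paths, and values for your server
<IfModule mod_headers.c>
    # Send a basic clickjacking-protection header with every response
    Header always set X-Frame-Options "SAMEORIGIN"
</IfModule>

<Directory "/home/exampleuser/public_html/private">
    # Deny direct web access to this directory
    Require all denied
</Directory>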
https://docs.cpanel.net/display/74Docs/Apache+Configuration
2019-01-16T08:14:43
CC-MAIN-2019-04
1547583657097.39
[]
docs.cpanel.net
Synergy offers OAuth2 / OpenID-Connect based SSO for the connected applications, acting as a gateway between the ID provider and the application. See more about application authentication in the Synergy workshop guide, section 7 (Implement authentication).
Make sure the session timeout of your application isn't too large. In addition to the usual security concerns, this is also important because the user might remain logged in and able to use the application even after the Synergy subscription expires.
ID token contents
In addition, tokens issued by Synergy contain these fields:
authorities : Granted authorities for the user in Synergy. Array of roles.
group : Groups assigned to the user in Synergy. This array contains the names of the groups.
Sample token:
{
  "sub": "[email protected]",
  "scope": [ "read", "write", "openid" ],
  "exp": 1519416204,
  "authorities": [ "ROLE_TEAM_ADMIN" ],
  "jti": "112dcab7-594e-464d-8134-c5ac2d8c63f5",
  "client_id": "b85S-4vYQ6yq2q07g8gfvx3KzE8",
  "username": "[email protected]",
  "iss": "",
  "aud": "b85S-4vYQ6yq2q07g8gfvx3KzE8",
  "azp": "b85S-4vYQ6yq2q07g8gfvx3KzE8",
  "iat": 1519373004,
  "group": [ "TEAM_ADMIN", "group_1", "group_2" ]
}
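When a connected application receives such a token, it typically only needs to read the authorities and group claims after its OAuth2/OIDC library has validated the token. The Node.js snippet below is a minimal sketch that decodes the payload without verifying it — in a real integration, signature, expiry, and audience checks belong to the OIDC library:

// Decode the base64url-encoded payload segment of a JWT (no signature verification).
function decodeJwtPayload(idToken) {
    const payloadSegment = idToken.split('.')[1]
        .replace(/-/g, '+')
        .replace(/_/g, '/');                      // base64url -> base64
    return JSON.parse(Buffer.from(payloadSegment, 'base64').toString('utf8'));
}

const idToken = process.env.ID_TOKEN;             // placeholder: token obtained from the OIDC flow
const claims = decodeJwtPayload(idToken);
const roles = claims.authorities || [];           // e.g. ["ROLE_TEAM_ADMIN"]
const groups = claims.group || [];                // e.g. ["TEAM_ADMIN", "group_1", "group_2"]
const isTeamAdmin = roles.includes('ROLE_TEAM_ADMIN');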
https://docs.chemaxon.com/display/lts-europium/application-authentication.md
2022-06-25T01:55:37
CC-MAIN-2022-27
1656103033925.2
[]
docs.chemaxon.com