Frame Gateway 9.12.0
Fixed:
Issue with AD domain-joined Accounts where a Publish was marked as successful even if one or more instances were unable to be joined to the domain. This resulted in a subset of VMs that were unable to be brought to a Running state in order to host user sessions. With this fix, if any instances are unable to join the domain, the entire Publish will be correctly marked as failed and automatically rolled back.
Issue where, when cloning a Sandbox or Utility Server from a source Account to a destination Account, the VM disk size would automatically provision with the larger disk size. This fix ensures that the disk size of the source Account will always be retained.
(Azure) If a VM fails to be provisioned, associated Disk and NIC resources are sometimes reserved for an extended period of time which could result in these resources not being properly deleted.
Issue where the Max Instances capacity (Dashboard > Capacity > Max Instances) for an Account is automatically reduced when a subset of the instances are unable to be brought to a Running state. This issue caused the Max Instances capacity limit to not revert back to the originally configured value during the next Publish.
Issue when initiating Backup All Volumes, if one or more volumes fails to backup, the entire operation would be marked as failed and successful backups would be deleted. With this fix, successful backups will be retained and customers will be notified of any backups that have failed.
Additional reliability fixes.
Frame Terminal 6.18.0
Added:
Improved bandwidth estimation for Bandwidth Indicator within the Frame Status Bar.
Fixed:
Issue with FRP8 where, when preferredIceCandidateProtocol=tcp is configured as an Advanced Terminal Argument, all UDP candidates are filtered out, including the network path between the Streaming Gateway Appliance (SGA) and the Frame Workload VM. The argument should only filter out UDP candidates for the network path between the end-user device and the SGA.
Additional reliability fixes.
Phone Numbers
By default, Kazoo includes appropriate configurations for running the system in the United States. Nothing, however, stops folks from reconfiguring the system to support other countries' numbering systems.
2600Hz encourages you to consider sticking with the E.164 format for globally routable numbers.
Determine if a number is "global"
The first thing to configure is how to tell when a number is "globally routable" versus an internal extension. This is managed in the
system_config/number_manager configuration document, under the
reconcile_regex key.
"reconcile_regex": "^\\+?1?\\d{10}$|^\\+[2-9]\\d{7,}$|^011\\d{5,}$|^00\\d{5,}$"
Here is the default. In case reading regexes isn't second nature: it optionally matches a '+' and a '1' (the country code for the US) followed by any 10 digits; or an 8-or-more digit number (not starting with 0 or 1) prefixed by a '+'; or a number beginning with one of the international dialing prefixes used in the US (011 or 00) followed by at least 5 digits.
This regex must be able to match number formats your carrier(s) will send you. In the US, it is normal to see the 10-digit number (NPA-NXX-XXXX), optionally with a '1' prepended (NPANXXXXXX), or the full E.164 version (+1NPANXXXXXX). The default
reconcile_regex matches all of those. Internal extensions, like 100, 2504, or *97, will obviously fail to be matched with the
reconcile_regex and thus be routable only by authorized devices within an account.
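As an illustration only (this is not part of Kazoo), the default reconcile_regex can be exercised from Python to confirm which dialed strings count as globally routable; the JSON double backslashes become single backslashes in a raw Python string:

import re

# The default reconcile_regex quoted above.
RECONCILE = re.compile(r"^\+?1?\d{10}$|^\+[2-9]\d{7,}$|^011\d{5,}$|^00\d{5,}$")

for number in ("4158867900", "+14158867900", "01133123456789", "100", "*97"):
    print(number, "globally routable:", bool(RECONCILE.match(number)))

# The first three match; the extensions 100 and *97 do not, so they remain
# routable only within an account, as described above.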
Country samples
France +33
Calls within France are 10-digit numbers with a leading 0; from outside of France, only the last 9 digits (omitting the 0) are dialed after the '+33' country code. Armed with this knowledge, a regex might look like:
"reconcile_regex":"^(?:\\+33\\d{9})|(?:0\\d{9})$"
Note:
(?:) is a non-capturing regex group
This should match calls dialed within France (using the 0 followed by a 9 digit number) as well as calls coming from outside of France (+33 followed by a 9 digit number).
Convertors
This is a set of normalization regular expressions used on every number that Kazoo processes. The job of these expressions is to format numbers the same way regardless of where they originated. For example, most US carriers send numbers in E.164 format (+14158867900) yet users do not dial +1. One use case is to ensure any US number begins with +1.
"e164_converters":{ "^\\+?1?([2-9][0-9]{2}[2-9][0-9]{6})$":{ "prefix":"+1" }, "^011(\\d{5,})$|^00(\\d{5,})$":{ "prefix":"+" }, "^[2-9]\\d{7,}$": { "prefix": "+" } }
The first regex,
"^\\+?1?([2-9][0-9]{2}[2-9][0-9]{6})$" will capture the 10 digit number (ignoring a + or a 1 if present on the front of the dialed number), and adds a "+1" prefix to the captured number. So a dialed number of
4158867900,
14158867900, or
+14158867900 will all result in
+14158867900. This would cover the main ways users and carriers will send numbers to Kazoo.
The second regex,
"^011(\\d{5,})$|^00(\\d{5,})$", matches how US customers would dial international numbers.
\\d{5,} indicates there must be at least 5 digits following 011 or 00 (to allow people who want 001, 002, etc as extensions within accounts). The result is the number captured being prefixed by '+'. So if
01133123456789 was dialed, the second regex would match it, resulting in
+33123456789 being the number used for Kazoo's internal routing.
The third regex matches international numbers added to the system and prefixes them with a '+'. This can be further delineated (or removed) if you're not adding numbers to the system from multiple countries.
The final version of converted numbers becomes the format for the numbers databases (which controls how globally-routable numbers are assigned to accounts).
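For illustration, a rough Python sketch of how these convertors behave, based on the descriptions above (the ordering and the capture handling are assumptions, not Kazoo's actual implementation):

import re

# The three convertors above, in listed order; the prefix is prepended to the capture.
E164_CONVERTERS = [
    (r"^\+?1?([2-9][0-9]{2}[2-9][0-9]{6})$", "+1"),
    (r"^011(\d{5,})$|^00(\d{5,})$", "+"),
    (r"^[2-9]\d{7,}$", "+"),
]

def to_e164(number):
    for pattern, prefix in E164_CONVERTERS:
        m = re.match(pattern, number)
        if m:
            # Use the first non-empty capture group, or the whole match if
            # the pattern has no groups (the third convertor).
            captured = next((g for g in m.groups() if g), m.group(0))
            return prefix + captured
    return number  # extensions and unmatched numbers are left alone

for n in ("4158867900", "14158867900", "+14158867900", "01133123456789"):
    print(n, "->", to_e164(n))
# All three US forms normalize to +14158867900; 01133123456789 becomes +33123456789.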
Warning!!!
Change these carefully if you have an active system; when numbers are added to the datastore they are first normalized. If you change these settings such that a number that used to be normalized in one way now results in a different format, it will fail to route until it is resaved (causing it to be duplicated in the datastore in the new format).
Country Samples
France +33
Since within France one needs only dial the 10-digit number (0 + 9 digit subscriber number), the convertor regex will look similar to the
reconcile_regex:
"^0(\\d{9})$":{
    "prefix":"+33"
},
"^\\+33(\\d{9})$":{
    "prefix":"+33"
}
Only capturing the 9-digit subscriber number, "+33" is prepended to form the E164-formatted version of the number. This checks either internally-dialed French numbers (the first regex) or externally-dialed French numbers (the second regex).
Examples
See the examples for user-contributed samples (and create pull requests of your own!).
Classifiers
This is a set of regexes that group numbers by type; they are not used for routing. Classifiers are used to create groups of numbers that can be restricted, to pretty print numbers in emails (like voicemail to email), and to provide user friendly names in the UI.
"classifiers":{ "tollfree_us":{ "regex":"^\\+1((?:800|888|877|866|855)\\d{7})$", "friendly_name":"US TollFree" }, "toll_us":{ "regex":"^\\+1(900\\d{7})$", "friendly_name":"US Toll" }, "emergency":{ "regex":"^(911)$", "friendly_name":"Emergency Dispatcher" }, "caribbean":{ "regex":"^\\+?1((?:684|264|268|242|246|441|284|345|767|809|829|849|473|671|876|664|670|787|939|869|758|784|721|868|649|340)\\d{7})$", "friendly_name":"Caribbean" }, "did_us":{ "regex":"^\\+?1?([2-9][0-9]{2}[2-9][0-9]{6})$", "friendly_name":"US DID", "pretty_print":"SS(###) ##### - ####" }, "international":{ "regex":"^(011\\d*)$|^(00\\d*)$", "friendly_name":"International", "pretty_print":"SSS011*" }, "unknown":{ "regex":"^(.*)$", "friendly_name":"Unknown" } }
The key is the name of the group of numbers and is arbitrary. Within that sub-object, define a regex pattern that would classify a dialed number as a member of that group (Groups are evaluated in order, so the first group to match a number is the group associated).
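To illustrate that first-match behaviour, here is a small Python sketch using a subset of the classifiers above (caribbean and international are omitted for brevity, and the evaluation order is assumed to follow the listing):

import re

CLASSIFIERS = [
    ("tollfree_us", r"^\+1((?:800|888|877|866|855)\d{7})$"),
    ("toll_us",     r"^\+1(900\d{7})$"),
    ("emergency",   r"^(911)$"),
    ("did_us",      r"^\+?1?([2-9][0-9]{2}[2-9][0-9]{6})$"),
    ("unknown",     r"^(.*)$"),
]

def classify(number):
    # Return the first group whose regex matches the number.
    for name, pattern in CLASSIFIERS:
        if re.match(pattern, number):
            return name

print(classify("+18005551234"))  # tollfree_us
print(classify("+14158867900"))  # did_us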
Optionally define "friendly_name" which could be used for display purposes in a UI.
Optionally define "pretty_print", allowing the dialed number to be formatted in a more "readable" fashion.
The following characters can be used in a pretty print string to manipulate the number:
- # - A pound sign will be replaced by the number at the same position
- S - A capital 'S' will skip a number at the same position
- * - An asterisk will add any remaining numbers from that position to the end of the number
If you want a literal '#', 'S', or '*', prefix it with a '\' (so '\#', '\S', and '\*')
SS(###) ### - * : this sample will convert numbers in the format of +14158867900 to (415) 886 - 7900
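A small Python sketch of the formatting rules above (the backslash-escape case is ignored):

def pretty_print(pattern, number):
    out, i = [], 0
    for ch in pattern:
        if ch == "#":            # print the digit at this position
            out.append(number[i])
            i += 1
        elif ch == "S":          # skip the character at this position
            i += 1
        elif ch == "*":          # append everything that is left
            out.append(number[i:])
            i = len(number)
        else:                    # any other character is literal
            out.append(ch)
    return "".join(out)

print(pretty_print("SS(###) ### - *", "+14158867900"))  # (415) 886 - 7900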
Per-Account dial plans
Users can dial local numbers, just as they do with the PSTN, by providing Kazoo with
dial_plan regular expressions. These regexes will be used on the dialed numbers to correct them to properly routable numbers.
It is possible to set these regexes on an account, user, or device basis. All that needs doing is adding a
dial_plan key at the root level of the account, user, or device document. Kazoo will then apply the regexes in order, preferring the calling device's, then user's (if the calling device has an
owner_id set), and finally the account's dialplan. Failing any of those, the system
e164_convertors will be employed.
Warning: It is possible that these
dial_plan rules will interfere with extension dialing within an account. Please take common extension length into consideration when creating these
dial_plan rules.
See the examples for user-contributed samples (and create pull requests of your own!).
Example
dial_plan object
"dial_plan" : { "^(\\d{9})$": { "description": "Portugal", "prefix": "+351" } ,"^(\\d{10})$": { "description": "USA", "prefix": "+1" } ,"^(\\d{7})$":{ "description": "USA/CA/SF", "prefix": "+1415" }, "^0(\\d{9,})$": { "description": "UK", "prefix": "+44" } }
The
dial_plan key is a regex to match against the dialed number, with
prefix and
suffix rules to prepend and append to the capture group in the regex. Regexes are evaluated in order and the first regex to match is the one used.
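For illustration, here is roughly how that first-match evaluation plays out in Python against the example dial_plan above; suffix rules and the device/user/account precedence described earlier are left out of this sketch:

import re

# The example dial_plan, evaluated in order with the first match winning.
DIAL_PLAN = [
    (r"^(\d{9})$",   "+351"),   # Portugal
    (r"^(\d{10})$",  "+1"),     # USA
    (r"^(\d{7})$",   "+1415"),  # USA/CA/SF
    (r"^0(\d{9,})$", "+44"),    # UK
]

def apply_dial_plan(dialed):
    for pattern, prefix in DIAL_PLAN:
        m = re.match(pattern, dialed)
        if m:
            return prefix + m.group(1)
    return dialed

print(apply_dial_plan("8867900"))      # +14158867900
print(apply_dial_plan("02079460000"))  # +442079460000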
Scenarios
One locale for all devices in an account
If all of the users/devices in an account are located in the same city, it would be most convenient to place a
dial_plan at the account level, allowing them to dial as they are used to and converting it for Kazoo processing. For instance, we can poach the "USA/CA/SF" regex from above for an account who's users are all in San Francisco. Then, when a user dials a 7-digit number, it is prepended with the 415 area code (as well as +1).
Globally distributed users
Users within an account may be located anywhere in the world. An account-level
dial_plan may not make sense for them. Instead, place
dial_plan objects on the users' documents to ensure their local dialing preferences are honored.
Adding
dial_plan example
Using the PATCH HTTP verb, you can add the
dial_plan object to an existing document:
curl -X PATCH -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}"{ACCOUNT_ID}/users/{USER_ID} -d '{"data":{"dial_plan":{"^(\\d7)$":{"prefix":"+1415","description":"USA/CA/SF"}}}}'
You can, of course, POST the full document with the added
dial_plan object.
System dial plans
It is possible to add dial plans to the system config. An account, user, or device
dial_plan can refer to them by adding an array of system dial plan names under the
system key.
Adding system
dialplan example
Create the dialplans doc, in case it is still absent in the system_config database:
curl -X PUT -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}" -d '{"data":{"id":"dialplans"}}'
Then create your dialplan:
curl -X POST -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}" -d '{"data":{"^(2\\d{6})$":{"prefix":"+7383","name":"Novosibirsk"}}}'
or dialplans:
curl -X POST -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}" -d '{"data":{"^(\\d{7})$":[{"prefix":"+7495","name":"Moscow"},{"prefix":"+7812","name":"Saint Petersburg"}]}}'
Using system
dialplan example
curl -X PATCH -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}"{ACCOUNT_ID}/users/{USER_ID} -d '{"data":{"dial_plan":{"system":["Novosibirsk"]}}}'
Available system dial plans
All users can view available system dial plans.
curl -X GET -H "Content-Type: application/json" -H "X-Auth-Token: {AUTH_TOKEN}"
Caches to flush
Changes made via Crossbar should flush the appropriate caches automatically. If you make changes to the database directly, or aren't seeing your changes via Crossbar reflected, the following
sup commands should flush the appropriate caches.
Execute on VMs running:
- Crossbar
sup kazoo_couch_maintenance flush [{ACCOUNT_ID} [{DOCUMENT_ID}]]
- Callflow
sup callflow_maintenance flush
If you make a change to
system_config, execute
sup kapps_config flush [{CONFIG_DOC}]
OfferSummary
The
OfferSummary response group returns the number of offer listings and the lowest
price for each condition type for each item in the response. Condition types are New,
Used,
Collectible, and Refurbished. For example, this response group returns the lowest
price for each
Condition:
New item
Used item
Collectible item
Refurbished item
Individual offer listings are not returned. The
OfferSummary is dependent only on the ASIN
parameter and is not affected by the MerchantId or Condition parameters (i.e. the
OfferSummary will always be the same for a given ASIN independent of other parameters).
Note
This response group is not returned for Amazon Kindle digital books. An Amazon
Kindle ASIN can be verified through the
Binding,
Format, and
ProductTypeName response elements.
Relevant Operations
Operations that can use this response group include:
Response Elements
The following table describes the elements returned by
OfferSummary.
OfferSummary also returns the elements that all response groups return, as described in Elements Common to All Response Groups.
Parent Response Group
The following response groups are parent response groups of
OfferSummary.
Child Response Group
The following response groups are child response groups of
OfferSummary.
None
Sample REST Use Case
The following request uses the
OfferSummary response group.
Service=AWSECommerceService&
AWSAccessKeyId=[AWS Access Key ID]&
AssociateTag=[Associate ID]&
Operation=ItemLookup&
ItemId=B000A3UB2O&
ResponseGroup=OfferSummary&
Version=2013-08-01&
Timestamp=[YYYY-MM-DDThh:mm:ssZ]&
Signature=[Request Signature]
Sample Response Snippet
The following response snippet shows the elements returned by
OfferSummary.
<OfferSummary>
  <LowestNewPrice>
    <Amount>801</Amount>
    <CurrencyCode>USD</CurrencyCode>
    <FormattedPrice>$8.01</FormattedPrice>
  </LowestNewPrice>
  <LowestUsedPrice>
    <Amount>799</Amount>
    <CurrencyCode>USD</CurrencyCode>
    <FormattedPrice>$7.99</FormattedPrice>
  </LowestUsedPrice>
  <TotalNew>45</TotalNew>
  <TotalUsed>20</TotalUsed>
  <TotalCollectible>0</TotalCollectible>
  <TotalRefurbished>0</TotalRefurbished>
</OfferSummary>
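As an illustration of consuming this data, the following Python sketch parses a trimmed copy of the snippet above. Note that a full Product Advertising API response nests OfferSummary under Item and uses an XML namespace, so the paths would need adjusting for real responses:

import xml.etree.ElementTree as ET

snippet = """
<OfferSummary>
  <LowestNewPrice><FormattedPrice>$8.01</FormattedPrice></LowestNewPrice>
  <LowestUsedPrice><FormattedPrice>$7.99</FormattedPrice></LowestUsedPrice>
  <TotalNew>45</TotalNew>
  <TotalUsed>20</TotalUsed>
</OfferSummary>
"""

summary = ET.fromstring(snippet)
print(summary.findtext("LowestNewPrice/FormattedPrice"))   # $8.01
print(summary.findtext("LowestUsedPrice/FormattedPrice"))  # $7.99
print(int(summary.findtext("TotalNew")))                   # 45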
$isoWeek has the following operator expression syntax:
{ $isoWeek: <date expression> }
The argument can be any valid expression that resolves to a BSON ISODate object, a BSON Timestamp object, or a Date object.
Example¶
A collection called
deliveries contains the following documents:
{ "_id" : 1, "date" : ISODate("2006-10-24T00:00:00Z"), "city" : "Boston" } { "_id" : 2, "date" : ISODate("2011-08-18T00:00:00Z"), "city" : "Detroit" }
The following operation returns the week number for each
date field.
db.deliveries.aggregate( [ { $project: { _id: 0, city: "$city", weekNumber: { $isoWeek: "$date" } } } ] )
The operation returns the following results:
{ "city" : "Boston", "weekNumber" : 43 } { "city" : "Detroit", "weekNumber" : 33 } | https://docs.mongodb.com/v3.4/reference/operator/aggregation/isoWeek/ | 2017-06-22T14:06:03 | CC-MAIN-2017-26 | 1498128319575.19 | [] | docs.mongodb.com |
ABlog v0.4 released¶
ABlog v0.4 is released. This version comes with the following improvements and bug fixes:
- Added blog_feed_titles, blog_feed_length, and blog_archive_titles configuration options (see issue 24).
- Set the default for blog_feed_archives to False, which was set to True although documented to be otherwise.
- Fixed issues with
- Fixed issue 2, relative size of tags being the minimum size when all tags have the same number of posts. Now, mean size is used, and max/min size can be controlled from template.
- Fixed issue 19. Yearly archives are ordered by recency.
- Fixed duplicated post title in feeds, issue 21.
- Fixed issue 22, postlist directive listing more than specified number of posts.
- postlist directive accepts arguments to format list items (issue 20).
public class GroovyRecognizer extends LLkParser
JSR-241 Groovy Recognizer. Run 'java Main [-showtree] directory-full-of-groovy-files' [The -showtree option pops up a Swing frame that shows the AST constructed from the parser.] Contributing authors: John Mitchell johnm
An appended block follows any expression. If the expression is not a method call, it is given an empty argument list.
A single argument in (...) or [...]. Corresponds to a method or closure parameter. May be labeled. May be modified by the spread operator '*' ('*:' for keywords).
For lookahead only. Fast approximate parse of an argumentLabel followed by a colon.
Fast lookahead across balanced brackets of all sorts.
A block body is a parade of zero or more statements or expressions.
If two statements are separated by newline (not SEMI), the second had better not look like the latter half of an expression. If it does, issue a warning.
Also, if the expression starts with a closure, it needs to have an explicit parameter list, in order to avoid the appearance of a compound statement. This is a hard error.
These rules are different from Java's "dumb expression" restriction. Unlike Java, Groovy blocks can end with arbitrary (even dumb) expressions, as a consequence of optional 'return' and 'continue' tokens.
To make the programmer's intention clear, a leading closure must have an explicit parameter list, and must not follow a previous statement separated only by newlines.
Clones the token.
Simple names, as in {x|...}, are completely equivalent to {(def x)|...}. Build the right AST.
Closure parameters are exactly like method parameters, except that they are not enclosed in parentheses, but rather are prepended to the front of a block, just after the brace. They are separated from the closure body by a CLOSABLE_BLOCK_OP token '->'.
Lookahead to check whether a block begins with explicit closure arguments.
A member name (x.y) or element name (x[y]) can serve as a command name, which may be followed by a list of arguments. Unlike parenthesized arguments, these must be plain expressions, without labels or spread operators.
In Java, "if", "while", and "for" statements can take random, non-braced statements as their bodies. Support this practice, even though it isn't very Groovy.
Numeric, string, regexp, boolean, or null constant.
Numeric constant.
I've split out constructors separately; we could maybe integrate back into variableDefinitions later on if we maybe simplified 'def' to be a type declaration?
Used to look ahead for a constructor.
AST effect: Create a separate Type/Var tree for each var in the var list. Must be guarded, as in (declarationStart) => declaration.
After some type names, where zero or more empty bracket pairs are allowed. We use ARRAY_DECLARATOR to represent this. TODO: Is there some more Groovy way to view this in terms of the indexed property syntax?
If a dot is followed by a parenthesized or quoted expression, the member is computed dynamically, and the member selection is done only at runtime. This forces a statically unchecked member access.
Comma-separated list of one or more enum constant definitions.
Guard for enumConstants.
Catch obvious constructor calls, but not the expr.super(...) calls
An expression statement can be any general expression.
An expression statement can also be a command, which is a simple method call in which the outermost parentheses are omitted.
Certain "suspicious" looking forms are flagged for the user to disambiguate.
lookahead predicate for usage of generics in methods
as parameter for the method. Example:
static
A block known to be a closure, but which omits its arguments, is given this placeholder. A subsequent pass is responsible for deciding if there is an implicit 'it' parameter, or if the parameter list should be empty.
An expression may be followed by [...]. Unlike Java, these brackets may contain a general argument list, which is passed to the array element operator, which can make of it what it wants. The brackets may also be empty, as in T[]. This is how Groovy names array types.
Returned AST is [INDEX_OP, indexee, ELIST].
Some malformed constructor expressions are not detected in the parser, but in a post-pass. Bad examples: [1,b:2], [a:1,2], [:1]. (Note that method call arguments, by contrast, can be a mix of keyworded and non-keyworded arguments.)
This factory is the correct way to wire together a Groovy parser and lexer..
If the methodCallArgs are absent, it is a property reference. If there is no property, it is treated as a field reference, but never a method reference.
A list of one or more modifier, annotation, or "def".
A list of zero or more modifiers, annotations, or "def".
This is the grammar for what can follow a dot: x.a, x.@a, x.&a, x.'a', etc.
Note:
typeArguments is handled by the caller of
namePart.
object instantiation. Trees are built as illustrated by the following input/tree pairs:
new T()
    new | T -- ELIST | arg1 -- arg2 -- .. -- argn
new int[]
    new | int -- ARRAY_DECLARATOR
new int[] {1,2}
    new | int -- ARRAY_DECLARATOR -- ARRAY_INIT | EXPR -- EXPR | | 1 2
new int[3]
    new | int -- ARRAY_DECLARATOR | EXPR | 3
new int[1][2]
    new | int -- ARRAY_DECLARATOR | ARRAY_DECLARATOR -- EXPR | | EXPR 1 | 2
Zero or more insignificant newlines, all gobbled up and thrown away.
Zero or more insignificant newlines, all gobbled up and thrown away, but a warning message is left for the user, if there was a newline.
An open block is not allowed to have closure arguments.
A sub-block of a block can be either open or closable. It is closable if and only if there are explicit closure arguments. Compare this to a block which is appended to a method call, which is given closure arguments, even if they are not explicit in the code.
A formal parameter for a method or closure. '='.
A statement separator is either a semicolon or a significant newline. Any number of additional (insignificant) newlines may accompany it.
A declaration with one declarator and optional initialization, like a parameterDeclaration.
Used to parse declarations used for both binding and effect, in places like argument
lists and
while statements.
A declaration with one declarator and no initialization, like a parameterDeclaration.
Used to parse loops like
for (int x in y) (up to the
in keyword).
Used in cases where a declaration cannot have commas, or ends with the "in" operator instead of '='.
A Groovy script or simple expression. Can be anything legal inside {...}.
A statement is an element of a block. Typical statements are declarations (which are scoped to the block) and expressions.
A labeled statement, consisting of a vanilla identifier followed by a colon.
Things that can show up as expressions, but only in strict contexts like inside parentheses, argument lists, and list constructors.
Lookahead for suspicious statement warnings and errors.
Used only as a lookahead predicate for nested type declarations.
An IDENT token whose spelling is required to start with an uppercase letter. In the case of a simple statement {UpperID name} the identifier is taken to be a type name, not a command name.
An assignment operator '=' followed by an expression. (Never empty.)
Declaration of a variable. This can be a class/instance variable, or a local variable in a method. It can also include possible initialization.
Between the Lines festival, which took place back in March, have released a number of videos from this years festival including the whole storytelling session with Caspar Sonnen, Alexandre Brachet, William Uricchio and our very own Mandy Rose. The festival, a partnership between DocHouse and the Frontline Club, explored the challenges facing documentary makers, investigative journalists and citizen reporters in the new media landscape. As well as this video, there’s a whole host of other sessions and talks available in the DocHouse archive which is definitely worth a visit.
This session in particular looked at the relationship between technology and storytelling, its function within the creation of non-linear narratives and the continued role of "old" storytelling methods. Enjoy!
Mining
Mining is a nice profession as it provides you with X.P. during the mining process and with resources you can sell later. There is some initial cost involved as you will need to purchase some basic equipment, though this cost is relatively cheap in comparison to other professions.
How to get started
Mining is pretty straightforward. For a pictorial walk through, please see How to mine ore.
Equipment
A mining laser is required to mine asteroids. However, with only a mining laser, you won't find anything but Ore; a Remote Mining Drone can do that too. No, you'll want to go after more valuable goods. For this, you need to increase your scan rating. So while you can get away with only a mining laser at the first few levels, pretty soon you'll want to get yourself an Asteroid Scanner. (Asteroid Scanner, mind you, not Asteroid Data Scanner!) Once you reach level 25, you might even decide to get yourself some Babylon Scan Units, to raise your scan rating even more.
If you want to see what the asteroid you're about to mine has to offer, you can get yourself a Asteroid Data Scanner. By equipping this system, you can see (using the blue button) which ores are located in that asteroid. However, it will show you all ores, regardless of the scan rating you have. If you're not getting any of the ore you know is there, you'll need to raise your scan rating.
Note that your scan rating only determines which ores you find, while your level determines how much. The amount will increase to a maximum of 25 ore per slot at around level 75.
Skills involved
You'll need a bit of mining skill to equip your mining laser. Just enough to equip it; spending more IP won't help you. If you want to equip an asteroid scanner, you'll also need Scan Tech. Again, only enough to equip it; spending more won't get you anything.
After getting up in levels a bit you will also equip Mining scanners, using the Scan Tech skill to equip the different scanners.
A miner can get a maximum basic scan rate of 74 at level 180, using scanners and a Babylon pod.
DOs and DON'Ts
DO:
- Federation of Mining (F.O.M.) offices are located in the Sphere, VP and Furnace galaxies. These offices offer a reward for mining an unidentified ore in a specific location. If you plan on mining, always get a F.O.M. mission before you start. The farther or the more ore it needs you to collect, the greater the reward.
DON'T:
- Mine plain old Ore beyond level 15. That's when you can equip the first scanner. Get Remote Mining Drones instead. (This is disputable: I (Sukayo) and other miners pick up everything.)
- Upgrade scanners before mining lasers. Mining lasers give a % bonus to scan rating, and a billion percent of zero is still zero!
Development Guide
Create a debug package manually
You can create a debug package from the Windows version of a BlackBerry® Smartphone Simulator. You can use the debug files to debug your BlackBerry device application on a BlackBerry device that is connected to your Mac computer by a USB cable.
After you finish: To learn how to use the debug files, see Debug a BlackBerry device application by using a debug package.
Before you begin: Visit to download the Windows version of the BlackBerry Smartphone Simulator that matches the version of the BlackBerry Device Software that is installed on your device. You must install the BlackBerry Smartphone Simulator in a Windows environment, for example by using VMware on a Mac or on a computer that is running Windows.
- In a Windows environment, extract the simulator bundle to a folder on your hard disk.
- In the <device>.xml file, verify that the <platformVersion> attribute matches the version of the BlackBerry Device Software on the device.
- In the folder where you extracted the simulator bundle, add the following files to a zip file:
- Name the .zip file debug_x.x.x.xxx_yyyy.zip where x.x.x.xxx is the version of the BlackBerry Device Software and yyyy is the model number of the device, for example debug_6.0.0.246_9800.zip.
- Extract the debug_x.x.x.xxx_yyyy.zip to a folder on your Mac computer.
").
Pros:
- Able to be used on any DataStore (would recommend property DataStore as it supports multi geometry)
Cons:
-..
Pros:
- It is implemented and all existing tests pass!
Cons:
-).).
Pros:
- and JDBCDataStore
- Can use package visibility to avoid exposing too much to subclasses
Cons:
-. | http://docs.codehaus.org/pages/viewpage.action?pageId=227051135 | 2014-11-21T02:29:53 | CC-MAIN-2014-49 | 1416400372542.20 | [array(['/download/attachments/227051003/IJDBCDataStore.png?version=1&modificationDate=1317107776113&api=v2',
None], dtype=object)
array(['/download/attachments/227051003/SQLDataStore.png?version=1&modificationDate=1317107792115&api=v2',
None], dtype=object) ] | docs.codehaus.org |
The Neo4j plugin enables lightweight access to database functionality using Neo4j. This plugin does NOT provide domain classes nor dynamic finders like GORM does.
The current version of griffon-neo4j is 0.4
To install just issue the following command
griffon install-plugin neo4j
Upon installation the plugin will generate the following artifacts at
A new dynamic method named withNeo4j will be injected into all controllers, giving you access to a org.neo4j.graphdb.GraphDatabaseService object, with which you'll be able to make calls to the database. Remember to make all calls to the database off the EDT otherwise your application may appear unresponsive when doing long computations inside the EDT.
This method is aware of multiple datasources. If no datasourceName is specified when calling it then the default dataSource will be selected. Here are two example usages, the first queries against the default datasource while the second queries a datasource whose name has been configured as 'internal'
package sample

class SampleController {
    def queryAllDataSources = {
        withNeo4j { dsName, graphdb -> ... }
        withNeo4j('internal') { dsName, graphdb -> ... }
    }
}
This method is also accessible to any component through the singleton Neo4jConnector.enhance(metaClassInstance).
RelationshipType at src/main.
The withNeo4j() dynamic method will be added to controllers by default. You can change this setting by adding a configuration flag in Config.groovy
griffon.neo4j.injectInto = ["controller", "service"]
The following events will be triggered by this addon
The config file Neo4jConfig.groovy defines a default dataSource block. As the name implies this is the dataSource used by default, however you can configure named dataSources by adding a new config block. For example connecting to a dataSource whose name is 'internal' can be done in this way
databases {
    internal {
        params = [:]
        storeDir = 'neo4j/internal'
    }
}
This block can be used inside the environments() block in the same way as the default database block is used.
A trivial sample application can be found at s/tree/master/persistence/neo4j
For 1.1 onwards we're implementing a range of Group Organisation protocols to help arrange nodes into groups. To get an idea of the progress we're making, check out the javadocs.. | http://docs.codehaus.org/pages/viewpage.action?pageId=19464 | 2014-11-21T02:49:51 | CC-MAIN-2014-49 | 1416400372542.20 | [] | docs.codehaus.org |
Backup Cassandra on Kubernetes
You can use the instructions on this page to create pre and post backup rules with PX-Backup, which take application-consistent backups for Cassandra on Kubernetes in production.
On its own, Cassandra is resilient to node failures. However, you still need Cassandra backups to recover from the following scenarios:
- Unauthorized deletions
- Major failure’s that require a rebuild your entire cluster
- Corrupt data
- Point in time rollbacks
- Disk failure
Cassandra provides an internal snapshot mechanism to take backups with a tool called
nodetool. You can configure this to provide incremental or full snapshot-based backups of the data on the node.
nodetool will flush data from
memtables to disk and create a
hardlink to the SSTables file on the node.
However, disadvantages of this include the fact that you must run
nodetool on each and every Cassandra node, and it keeps data locally, increasing the overall storage footprint. Portworx, Inc.
suggests taking a backup of the Cassandra PVs at a block level and storing them in a space-efficient object storage target. Portworx allows you to combine techniques that are recommended by Cassandra, such as flushing data to disk with pre and post backup rules for the application to give users Kubernetes-native and efficient backups of Cassandra data.
PX-Backup allows you to set up pre and post backup rules that will be applied before and or after a backup occurs. For Cassandra, users can create a custom flush, compaction, or verify rule to ensure a healthy and consistent dataset before and after a backup occurs. Rules can run on one or all pods associated with Cassandra which is often a requirement for nodetool commands.
For more information on how to run Cassandra on Kubernetes, refer to the Cassandra on Kubernetes on Portworx documentation.
- Cassandra pods must also be using the app=cassandra label.
- This example uses the Cassandra keyspace newkeyspace as an example. If you wish to use this rule for another keyspace, simply replace the keyspace within this document with your own.
Create rules for Cassandra
Create rules for Cassandra that will run both before and after the backup operation runs:
Create a pre-exec backup rule for Cassandra
Create a rule that will run
nodetool flush for our
newkeyspace before the backup. This is essential as Portworx will take a snapshot of the backing volume before it places that data in the backup target.
- Navigate to Settings → Rules → Add New.
- Add a name for your Rule.
Add the following app label:
app=cassandra
Add the following action:
nodetool flush -- <your-cassandra-keyspace>;
Create a post-exec backup rule for Cassandra
A post-exec backup rule for Cassandra isn’t as necessary as the pre-exec backup rules above. However, for completeness in production and to verify a keyspace is not corrupt after the backup occurs, create a rule that runs
nodetool verify. The verify command will verify (check data checksums for) one or more tables.
- Navigate to Settings → Rules → Add New.
- Add a name for your Rule.
Add the following app label:
app=cassandra
Add the following action:
nodetool verify -- <your-cassandra-keyspace>;
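To sanity-check the two rule actions above outside of PX-Backup, a sketch like the following runs the same nodetool commands on every pod carrying the app=cassandra label. It assumes kubectl is configured for the cluster; the namespace and keyspace values are placeholders:

import json
import subprocess

def run_nodetool(action, keyspace, namespace="default"):
    # Find all pods labeled app=cassandra, mirroring how PX-Backup targets rules.
    pods = json.loads(subprocess.check_output(
        ["kubectl", "get", "pods", "-n", namespace,
         "-l", "app=cassandra", "-o", "json"]))
    for pod in pods["items"]:
        name = pod["metadata"]["name"]
        print(f"nodetool {action} on {name}")
        subprocess.check_call(
            ["kubectl", "exec", "-n", namespace, name, "--",
             "nodetool", action, "--", keyspace])

run_nodetool("flush", "newkeyspace")   # what the pre-exec rule does
run_nodetool("verify", "newkeyspace")  # what the post-exec rule does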
Use the rules during Cassandra backup operations
During the backup creation process, select the rules in the pre-exec and post-exec dropdowns:
Once you’ve filled out the backup form, click Create
Demo
Watch this short demo of the above information.
Select Something
Selection Hints:
As you move the cursor around the Transformer page, the cursor changes when it is over a selectable data element.
In the data grid:
- You may select categories of values in a column's data quality bar: Valid, Mismatched, and Missing.
- You may select one or more values in a column's histogram. Use SHIFT or CTRL to select multiple values.
- Click a column for column-based operations. Click additional columns to add to your selection. Click a selected column to deselect.
Select a whole or partial cell value to prompt suggestions for managing that specific string of data.
Tip: If you
CTRL-select multiple partial values in a column of numeric data, the suggestion cards apply to the pattern that matches your selected strings. This does not apply to string data.
NOTE: In the data grid, selection of multiple values in a column is not supported for prompting of suggestions. However, through the Column Details panel, you can review and select patterns to trigger suggestions for sets of multiple values in your column. See Column Details Panel.
In the Column Browser or Column Details:
- Select categories of values in the data quality bar.
- Select one or more values in a column's histogram.
Transformation types without suggestions
The following types of transforms are not available through the suggestion cards. In most cases, these operations have too many parameters for a single set of selections to properly suggest the transformation.
- Lookup. See Lookup Wizard.
- Join. See Join Window.
- Pivot/Unpivot can be suggested when you select a column, instead of a value. See Pivot Data.
Suggestion Cards
Based on your selections, relevant suggestions appear in suggestion cards:
Figure: Suggestion Cards
In the suggestion cards, the label at the top identifies the transformation type that is being recommended, followed by a brief preview of how the selection might transform the data.
Tip: A suggestion card may contain multiple variants for each suggestion. For example, in the previous image the
extract suggestion has many variants, which can be selected and reviewed by selecting the dots at the bottom of the card.
Additional suggestions may be available. Try horizontal scrolling the set of cards to reveal new suggestions.
For more information, see Selection Details Panel.
Decide on the Suggestion
Before you decide on the suggestion to follow, you can do one of the following:
- Select the suggestion to use. After a suggestion is selected, the changes to the data are previewed in the Transformer page immediately. If there are multiple variants for the suggestion, verify that you are selecting the most appropriate one.
- Select additional columns or values in the Transformer page. A different pattern-based set of suggestions is presented to you. Make your transformation selection.
- Modify the suggestion. You may need to customize the suggestion to meet more specific requirements.
- Start over. If you discover that you have selected the wrong example data, click Cancel. Start again.
Modify Suggestion
To make the suggestion work for your specific use, you might need to modify the step. For example, for the selected text, you might need to define a replacement value, which Trifacta SaaS may not be able to guess. Click Edit.
For more information on how to modify, see Transform Builder.
Previews
As soon as you select a suggestion card, the changes are previewed in the Data Grid:
Figure: Previewed suggestion
In this manner, you can review the change before it is applied to the sample.
Tip: You can use the checkboxes in the status bar to display only the rows, columns, or both that are affected by the previewed transformation.
Iterate
Experiment away! Things to keep in mind:
- If you select the wrong thing, you can always cancel the recipe step. Start again.
- To delete a step that has already been added, select the step in the Recipe panel and click the Trash icon to delete it. See Recipe Panel.
- To step back a number of steps in the recipe, select the recipe to which you want to revert and start adding steps. Note that any added steps may invalidate the subsequent steps in your recipe.
- You can always undo and redo your most recent actions. Use the buttons on the top of the Recipe panel.
- An executed recipe does not change the source, so you can always step back to your recipe in the Transformer page and revert or modify recipe steps.
Configuring GeckoView for Automation¶
How to set environment variables, Gecko arguments, and Gecko preferences for automation and debugging.
Configuring GeckoView¶
GeckoView and the underlying Gecko engine have many, many options, switches, and toggles “under the hood”. Automation (and to a lesser extent, debugging) can require configuring the Gecko engine to allow (or disallow) specific actions or features.
Some such actions and features are controlled by the GeckoRuntimeSettings instance you configure in your consuming project. For example, remote debugging web content via the Firefox Developer Tools is configured by GeckoRuntimeSettings.Builder#remoteDebuggingEnabled
Not all actions and features have GeckoView API interfaces. Generally, actions and features that do not have GeckoView API interfaces are not intended for broad usage. Configuration for these types of things is controlled by:
environment variables in GeckoView’s runtime environment
command line arguments to the Gecko process
internal Gecko preferences
Automation-specific configuration is generally in this category.
Running GeckoView with environment variables¶
After a successful
./mach build,
./mach run --setenv can be used to run GeckoView with
the given environment variables.
For example, to wait for attaching a debugger when starting a content tab process, use
./mach run --setenv MOZ_DEBUG_CHILD_WAIT_FOR_JAVA_DEBUGGER=:tab and then attach to that
process within Android Studio.
Reading configuration from a file¶
When GeckoView is embedded into a debugabble application (i.e., when your manifest includes
android:debuggable="true"), by default GeckoView reads configuration from a file named
/data/local/tmp/$PACKAGE-geckoview-config.yaml. For example, if your Android package name is
com.yourcompany.yourapp, GeckoView will read configuration from:
/data/local/tmp/com.yourcompany.yourapp-geckoview-config.yaml
Configuration file format¶
The configuration file format is YAML. The following keys are recognized:
env is a map from string environment variable name to string value to set in GeckoView's runtime environment
args is a list of string command line arguments to pass to the Gecko process
prefs is a map from string Gecko preference name to boolean, string, or integer value to set in the Gecko profile
# Contents of /data/local/tmp/com.yourcompany.yourapp-geckoview-config.yaml env: MOZ_LOG: nsHttp:5 args: - --marionette - --profile - "/path/to/gecko-profile" prefs: foo.bar.boolean: true foo.bar.string: "string" foo.bar.int: 500
Verifying configuration from a file¶
When configuration from a file is read, GeckoView logs to
adb logcat, like:
GeckoRuntime I Adding debug configuration from: /data/local/tmp/org.mozilla.geckoview_example-geckoview-config.yaml GeckoDebugConfig D Adding environment variables from debug config: {MOZ_LOG=nsHttp:5} GeckoDebugConfig D Adding arguments from debug config: [--marionette] GeckoDebugConfig D Adding prefs from debug config: {foo.bar.baz=true}
When a configuration file is found but cannot be parsed, an error is logged and the file is ignored entirely. When a configuration file is not found, nothing is logged.
Controlling configuration from a file¶
By default, GeckoView provides a secure web rendering engine. Custom configuration can compromise security in many ways: by storing sensitive data in insecure locations on the device, by trusting websites with incorrect security configurations, by not validating HTTP Public Key Pinning configurations; the list goes on.
You should only allow such configuration if your end-user opts-in to the configuration!
GeckoView will always read configuration from a file if the consuming Android package is set as the current Android “debug app” (see
set-debug-app and
clear-debug-app in the adb documentation). An Android package can be set as the “debug app” without regard to the
android:debuggable flag. There can only be one “debug app” set at a time. To disable the “debug app” check, disable reading configuration from a file entirely. Setting an Android package as the “debug app” requires privileged shell access to the device (generally via
adb shell am ..., which is only possible on devices which have ADB debugging enabled) and therefore it is safe to act on the “debug app” flag.
To enable reading configuration from a file:
adb shell am set-debug-app --persistent com.yourcompany.yourapp
To disable reading configuration from a file:
adb shell am clear-debug-app
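Putting the pieces above together, here is a minimal sketch that writes a config YAML like the earlier example, pushes it to the path GeckoView reads, and marks the app as the debug app. The package name is a placeholder, and adb is assumed to be on PATH with a device connected:

import subprocess
import textwrap

package = "com.yourcompany.yourapp"  # placeholder: use your app's package name

config = textwrap.dedent("""\
    env:
      MOZ_LOG: nsHttp:5
    args:
      - --marionette
    prefs:
      foo.bar.boolean: true
""")

local_path = "geckoview-config.yaml"
with open(local_path, "w") as f:
    f.write(config)

# Push to the location GeckoView reads for this package, then opt in via
# the Android "debug app" flag as described above.
remote_path = f"/data/local/tmp/{package}-geckoview-config.yaml"
subprocess.check_call(["adb", "push", local_path, remote_path])
subprocess.check_call(["adb", "shell", "am", "set-debug-app", "--persistent", package])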
Enabling reading configuration from a file unconditionally¶
Some applications (for example, web browsers) may want to allow configuration for automation unconditionally, i.e., even when the application is not debuggable, like release builds that have
android:debuggable="false". In such cases, you can use GeckoRuntimeSettings.Builder#configFilePath to force GeckoView to read configuration from the given file path, like:
new GeckoRuntimeSettings.Builder() .configFilePath("/your/app/specific/location") .build();
Disabling reading configuration from a file entirely¶
To force GeckoView to never read configuration from a file, even when the embedding application is debuggable, invoke GeckoRuntimeSettings.Builder#configFilePath with an empty path, like:
new GeckoRuntimeSettings.Builder() .configFilePath("") .build();
The empty path is recognized and no file I/O is performed. | https://firefox-source-docs.mozilla.org/mobile/android/geckoview/consumer/automation.html | 2021-06-12T16:52:47 | CC-MAIN-2021-25 | 1623487586239.2 | [] | firefox-source-docs.mozilla.org |
PUBLIC UTILITIES COMMISSION OF THE STATE OF CALIFORNIA
I.D.# 6363
ENERGY DIVISION RESOLUTION G-3396
March 1, 2007
RESOLUTION
Resolution G-3396: In compliance with Decision 06-09-039, Southern California Gas Company submits its proposal to offer tradeable capacity rights on its local transmission system as well as revisions to certain terms related to local transmission open season commitments and expansions. SoCalGas' advice letter is approved with modification.
By Advice Letter 3684 filed on November 22, 2006
__________________________________________________________
As required by Decision (D.) 06-09-039, Southern California Gas Company filed Advice Letter (AL) 3684 to set forth its proposal for tradeable capacity rights on its local transmission system, and to make revisions to its tariff and forms related to the open season commitments for local transmission service. This Resolution approves AL 3684 with modification. This resolution requires SoCalGas to file a supplemental advice letter to state that, for applicable rate schedules, five-year firm service contracts will be converted to two-year contracts with the same use-or-pay commitments in the event that the Commission agrees that no capacity expansion is needed as a result of an open season.
The protest filed by Southern California Generation Coalition is denied.
SoCalGas AL 3684 was submitted in compliance with D.06-09-039 to revise certain terms and implement new terms for firm service on SoCalGas's local transmission system.
On November 22, 2006 Southern California Gas Company (SoCalGas) filed compliance Advice Letter (AL) 3684. The filing was submitted in response to Ordering Paragraph (OP) 9 of D.06-09-039 (also known as the "Phase 2 Decision") in Rulemaking (R.) 04-01-025 (also known as the "Gas OIR"). That OP stated:
SoCalGas and SDG&E should file an advice letter within 90 days of the adoption of this decision to implement its proposal to offer tradable capacity rights on its local transmission system, as well as revisions to its open season commitment period as described herein.
The scope of R.04-01-025 was to "Establish Policies and Rules to Ensure Reliable, Long-Term Supplies of Natural Gas to California". Capacity rights on the SoCalGas local transmission system was one of many issues addressed in R.04-01-025 and D.06-09-039.
D.06-09-039 modified SoCalGas's proposed revisions to its rules affecting open seasons for local transmission capacity in congested areas.
The decision continued the practice of requiring no more than two-year commitments for firm service for smaller customers. For the service of larger customers, the decision allowed the utility to require take-or-pay commitments lasting for either five years from sign-up or two years from in-service date of new facilities, whichever occurs first. The decision required SoCalGas and SDG&E to upgrade the system when nominations for firm capacity exceed capacity, or explain its reason if the utility chooses not to. It required that tradable rights be implemented for local transmission capacity. Finally, the decision required that the utility base its usage forecasts and expansion plans on traditional forecasting tools, in addition to open seasons.
In AL 3684, in response to OP 9 of D.06-09-039, SoCalGas proposes to:
· Eliminate one pro forma contract (the "Constrained Area Amendment (Form 6597-14)");
· Create two new pro forma contracts, (the "Scheduled Quantity Addendum (Form 6900)" and the "Constrained Area Firm Capacity Trading Agreement (Form 6910)");
· Add a definition to Rule No.1 (Definitions); and
· Make numerous additions to various rate schedules (G-10, G-AC, GT-AC, G-EN, GT-EN, GT-F, and GW-SD).
In conceptual terms, AL 3684 proposes to eliminate the existing "Constrained Area Amendment (Form 6597-14)" to the Master Services Contract, Schedule A, Intrastate Transmission Service. This form currently allows core and noncore customers to indicate their firm service transportation commitments, and specifies special terms and conditions for constrained area transportation. In the AL, SoCalGas proposes to strip away the terms and conditions and move them to the appropriate rate schedule and to Rule No.1. SoCalGas also proposes to augment the schedule section of the new form, allowing for more detail, and to rename the resulting document the "Scheduled Quantity Addendum (Form 6900)".
The AL also proposes a new "Constrained Area Firm Capacity Trading Agreement (Form 6910)" to facilitate trading of capacity and associated use-or-pay commitments between customers who wish to trade and are located in the same constrained area.
The AL proposes to add new special condition clauses to rate schedules G-10, G-AC, GT-AC, G-EN, GT-EN dealing with Open Season and Non-bidding Customers. The content of these clauses formerly was contained in the "Constrained Area Amendment (Form 6597-14)". And finally, the AL proposes to add numerous Special Condition clauses to the GT-F (firm transmission) and GW-SD (intrastate transmission service for San Diego Gas & Electric) rate schedules to accept language jettisoned from the eliminated "Constrained Area Amendment (Form 6597-14)" as well as language necessary to effectuate the changes to contract terms and the establishment of transmission rights trading ordered by D.06-09-039.
Notice of AL 3684 was made by publication in the Commission's Daily Calendar. SoCalGas states that a copy of the Advice Letter was mailed and distributed in accordance with Section III-G of General Order 96-A.
Advice Letter 3684 was timely protested by Southern California Generation Coalition (SCGC) on December 12, 2006. The arguments are numbered below to facilitate the discussion in this resolution.
1) Regarding the proposed GT-F Special Condition 34, describing the "Term" of the contract, SCGC protests that the language does not take into account a possible circumstance that is in fact unfolding in Application (A.) 06-10-034. In that proceeding, SoCalGas is seeking to mitigate congestion of a section of its local transmission grid by acquiring capacity on interstate and/or foreign pipelines. SCGC proposes that language be inserted into the clause in question to accommodate this circumstance.
2) Also regarding Special Condition 34, SCGC argues that language should be added to reflect the scenario in which an open season, when customers make their respective two- and five-year commitments, fails to demonstrate that the local system is in fact constrained. SCGC argues that "If there is no need for expansion and, accordingly, no expansion is undertaken, there is no need to have a five-year term to `ensure that the noncore customers whose demand has caused the need to expand will actually use the expansion facilities...'" (D.06-09-039 p.63) SCGC argues that holding customers to use-or-pay commitments in the situation where no expansion takes places serves no purpose and is unnecessarily punitive. In this case, SCGC asks that customers with five-year commitments be switched to two-year contracts.
3) The proposed GT-F Special Condition 39 makes the customer liable for outstanding use-or-pay charges in the event of "Early Termination". SCGC calls for eliminating this clause, arguing that if a customer goes out of business due to bankruptcy or some other reason, the customer should not be burdened with these obligations.
4) The proposed GT-F Special Condition 46 gives the utility the right to reject any bid. SCGC argues that this provision should be expanded so as to provide that any rejection shall be based on reasonable grounds.
5) SCGC notes that D.06-09-039 provided that "If, even in the event that nominations exceed capacity, the utility declines to upgrade the system, it shall file a publicly available advice letter with the Commission explaining its decision." (pp.63-64) SCGC argues that the GT-F tariff should be expanded to include this provision.
6) SCGC also points out that D.06-09-039 noted that SoCalGas had described a mechanism for trading local transmission rights and concomitant use-or-pay obligations making use of the utility's Envoy electronic bulletin board. SCGC notes that no such mention is made in SoCalGas's filing, and asks that it be added.
7) Finally, "given the overlap between Advice 3684 and the Otay Mesa proceeding pending in A.06-10-034, SCGC recommends that either Advice 3684 be consolidated for consideration with A.06-10-034 or, in the alternative, held in abeyance until completion of the proceeding in A.06-10-034."
In its December 19, 2006 response, SoCalGas claims that SCGC's protest has no merit and should be denied, and addresses each of SCGC's arguments.
1) SoCalGas argues that SCGC's request to add language to the "Term" clause of rate schedule GT-F, to account for the situation in which the utility mitigates congestion by means of obtaining "service on interstate or foreign pipelines", amounts to a modification of D.06-09-039, and is not appropriate for a compliance filing.
2) Likewise, SoCalGas argues that SCGC's request, to reduce customers' firm commitments in the event that the open season reveals no need for expansion, goes beyond the scope of D.06-09-039. Nevertheless, SoCalGas states that in the event that an open season or other planning tool shows that congestion is not present and therefore the utility does not plan to expand the local system within the five year contract period, "it will inform the Commission. Upon Commission agreement that there is no need to construct additional facilities, SoCalGas would agree to amend the five-year contracts to two-year contracts with the same use-or-pay commitments as the small customers."
3) SoCalGas argues that SCGC's request to remove use-or-pay commitments in the event of contract termination, as described in proposed Special Condition 39, would involve changing contract commitments and is not authorized by D.06-09-039. SoCalGas notes that the "Early Termination" clause is currently already contained in the utility tariffs, in the "Constrained Area Amendment" (Form 6597-14).
4) Likewise, SoCalGas argues that SCGC's request to proposed Special Condition 46 amounts to an unauthorized tariff modification, since, says SoCalGas, this provision was simply transplanted from the "Constrained Area Amendment" (Form 6597-14).
5) SoCalGas agrees with SCGC that in the event that the utility decides not to expand its local system despite congestion indicated by an open season, it is required to file an advice letter explaining its decision. But "SoCalGas believes that it is not necessary to express this requirement in the tariff since it is not a customer issue and unnecessarily complicates the tariffs." (p. 3)
6) Responding to SCGC's request for tariff language referring to the posting and soliciting of capacity rights trades on the SoCalGas electronic bulletin board, SoCalGas argues that it has in fact included this language in the proposed GT-F Special Condition 48.g, in which "Customers desiring a Trade may use a Utility-hosted platform or other lawful means to solicit a Trade."
7) SoCalGas opposes SCGC's call to either fold this AL into the Otay Mesa proceeding (A.06-10-034) or hold it until that proceeding is completed. SoCalGas argues that doing either would unnecessarily delay implementation of D.06-09-039. SoCalGas notes that "The current open season periods for the potentially capacity-constrained areas of the Imperial Valley and San Joaquin Valley are set to expire on April 1, 2007..." SoCalGas argues that SCGC should make its arguments in the Otay Mesa proceeding.
The Commission has reviewed the Advice Letter, SCGC's protest, and SoCalGas's response and reached the following conclusions:
1) The language proposed by SCGC for GT-F Special Condition 34 dealing with the Otay Mesa proceeding does in fact go beyond what was considered or ordered by D.06-09-039. Furthermore, this issue can very readily be raised in the Otay Mesa proceeding (A. 06-10-034). In disposing of that application, the Commission may agree with SCGC and order corresponding changes to the tariff. This AL is not the place for it.
2) Likewise, we agree with SoCalGas that SCGC's proposed language change to GT-F Special Condition, dealing with open seasons which uncover no congestion, goes beyond what was considered or ordered by D.06-09-039. We believe that SoCal's counter-proposal, to switch five-year contracts to two-year contracts once the Commission has agreed that no new construction will be undertaken during the five-year contract period, is in the spirit of D.06-09-039 and will allow it. We will order SoCalGas to make this modification to the GT-F rate schedule as well as to other applicable rate schedules.
3) Regarding the proposed language in the GT-F "Early Termination" Special Condition 39, we agree with SoCalGas that this same language was already contained in the tariff (in the existing/old "Constrained Area Amendment (Form 6597-14)") and will allow it.
4) Regarding the proposed language in the GT-F "Right of Refusal" Special Condition 46, we agree with SoCalGas that this same language was already contained in the tariff (in the existing/old "Constrained Area Amendment (Form 6597-14)"). It should also be noted that the language contained in the current and proposed tariffs does require the utility to explain to the customer the reason why the bid was rejected. We will allow the language to remain as proposed by SoCalGas.
5) Regarding SoCalGas's obligation to explain its decision to the Commission in the event that the utility chooses not to expand despite the finding of congestion by an open season, we agree with SoCalGas that this obligation is already stated in D.06-09-039, is not directly relevant to the the terms and conditions for firm service or open seasons, and therefore need not be included here.
6) We find that the language proposed by SoCalGas regarding the public solicitation of transmission capacity trades is compliant with D.06-09-039.
7) As we noted earlier, any tariff changes prompted by events being addressed in A.06-10-034 should properly be raised in that proceeding.
We also find all other aspects of AL 3684 to be reasonable, and find they should be adopted.
The Comment Period will NOT be waived or reduced:. In OP 9 of D. 06-09-039, the Commission directed SoCalGas to file within 90 days an Advice Letter which would implement tariff changes related to the local transmission policies embodied in that decision.
2. On November 22, 2006 SoCalGas timely filed its compliance AL 3684.
3. On December 12, 2006 SCGC timely filed its protest to AL 3684.
4. On December 19, 2006 SoCalGas timely filed its response to the SCGC protest.
5. The tariff language as proposed by SoCalGas in AL 3684 is reasonable.
6. SoCalGas should add language, in Special Condition 34 of Schedule GT-F and to other applicable rate schedules, that states: In the event an open season or other planning tool shows that congestion is not present and therefore the utility does not plan to expand the local system within the five year contract period, SoCalGas will inform the Commission. Upon Commission agreement that there is no need to construct additional facilities, SoCalGas will amend any five-year contracts to two-year contracts with the same use-or-pay commitments as required for small customers.
1. The request of SoCalGas to implement tariff changes as requested in AL 3684 is approved with one modification.
2. SoCalGas shall file a supplemental advice letter within 5 days to insert the following language in Special Condition 34 of proposed Schedule GT-F: "In the event all requests for firm noncore capacity can be awarded without proration and the Utility does not plan to expand the local transmission system within the five-year contract period, the Utility will inform the Commission. Upon Commission agreement that there is no need to construct additional facilities within the five-year contract period, the Utility shall amend the five-year contracts to expire after two-years, consistent with the term for small customers." This language shall also be inserted into other rate schedules, as applicable.
3. SoCalGas's supplement advice letter shall be effective today.
4. This resolution is effective today.
This Resolution is effective today.
I certify that the foregoing resolution was duly introduced, passed and adopted at a conference of the Public Utilities Commission of the State of California held on March 1, 2007; the following Commissioners voting favorably thereon:
_______________
STEVE LARSON
Executive Director
STATE OF CALIFORNIA ARNOLD SCHWARZENEGGER, Governor
PUBLIC UTILITIES COMMISSION
505 VAN NESS AVENUE
SAN FRANCISCO, CA 94102-3298
I.D.# 6363
January 29, 2007 RESOLUTION G-3396
Commission Meeting March 1, 2007
TO: PARTIES TO SOUTHERN CALIFORNIA GAS COMPANY ADVICE LETTER NO 3684
Enclosed is draft Resolution G-3396 of the Energy Division. It will be on the
agenda at the March 1, 2007 Commission meeting. The 30-day comment period is in effect. The Commission may then vote on this Resolution or it may postpone a vote until later.
[email protected]
A hard copy and an electronic copy of the comments should be submitted to:
James Loewen
Energy Division
California Public Utilities Commission
320 West 4th Street, Suite 500
Los Angeles, CA 90013
[email protected]
Any comments on the draft Resolution must be received by the Energy Division by February 16, 2007. Those submitting comments must serve, by mail and by email, a copy of their comments on 1) the entire service list attached to the draft Resolution, 2) all Commissioners, and 3) the Director of the Energy Division, on the same date that the comments are submitted to the Energy Division.
Comments shall be limited to five pages in length plus a subject index listing the recommended changes to the draft Resolution, a table of authorities, and an appendix setting forth the proposed findings and ordering paragraphs.
Comments shall focus on factual, legal or technical errors in the proposed draft Resolution. Comments that merely reargue positions taken in the advice letter or protests will be accorded no weight and are not to be submitted.
Replies to comments on the draft resolution may be filed (i.e., received by the Energy Division) on February 23, 2007, five business days after comments are filed, and shall be limited to identifying misrepresentations of law or fact contained in the comments of other parties. Replies shall not exceed five pages in length, and shall be filed and served as set forth above for comments.
Late submitted comments or replies will not be considered.
Richard Myers
Program and Project Supervisor
Energy Division
Enclosure: Service List
Certificate of Service
CERTIFICATE OF SERVICE
I certify that I have by mail this day served a true copy of Draft Resolution G-3396 on all parties in these filings or their attorneys as shown on the attached list.
Dated January 29, 2007 at San Francisco, California.
____________________.
Service List for Resolution G-3396 | http://docs.cpuc.ca.gov/published/Comment_resolution/64159.htm | 2009-07-04T15:12:49 | crawl-002 | crawl-002-026 | [] | docs.cpuc.ca.gov |
sql:bucket( bucketEdgesParam as xs:anyAtomicType*, srchParam as xs:anyAtomicType, [collationLiteral as xs:string] ) as xs:integer*
Returns an unsignedLong specifying the index of the bucket the second parameter belongs to in buckets formed by the first parameter. Values that lie on the edge of a bucket fall to the greater index.
for $i in (1,5,10) return sql:bucket((2,4,6,7,8), $i) => 0 2 5
sql:bucket(('Aขเ','Aเ', 'B'), 'Aเข', '') => 0
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/sql:bucket | 2022-08-07T22:23:49 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.marklogic.com |
Using a Component INF File
If you want to include user-mode software for use with a device on Windows 10, you have the following options to create a DCH-compliant driver:
A software component is a separate, standalone driver package that can install one or more software modules. The installed software enhances the value of the device, but is not necessary for basic device functionality and does not require an associated function driver service.
This page provides guidelines for the use of software components.
Getting started
To create components, an extension INF file specifies the INF AddComponent directive one or more times in the INF DDInstall.Components section. For each software component referenced in an extension INF file, the system creates a virtual software-enumerated child device. More than one driver package can reference the same software component.
Virtual device children can be updated independently just like any other device, as long as the parent device is started. We recommend separating functionality into as many different groupings as makes sense from a servicing perspective, and then creating one software component for each grouping.
You'll provide an INF file for each software component.
If your software component INF specifies the AddSoftware directive, the component INF:
- Must be a universal INF file.
- Must specify the SoftwareComponent setup class.
You can specify the AddSoftware directive one or more times.
Note
When using Type 2 of the AddSoftware directive, it is not required to utilize a Component INF. The directive can be used in any INF successfully. An AddSoftware directive of Type 1, however, must be used from a Component INF.
Additionally, any INF (component or not) matching on a software component device:
- Can specify Win32 user services using the AddService directive.
- Can install software using the INF AddReg directive and the INF CopyFiles directive.
- Does not require a function driver service.
- Can be uninstalled by the user independently from the parent device.
You can find an example of a component INF in the Driver package installation toolkit for universal drivers.
Note: In order for a software-enumerated component device to function, its parent must be started. If there is no driver available for the parent device, driver developers can create their own and optionally leverage the pass-through driver "umpass.sys". This driver is included in Windows and, effectively, does nothing other than start the device. In order to use umpass.sys, developers should use the Include/Needs INF directives in the DDInstall section for each possible [DDInstall.*] section to the corresponding [UmPass.*] sections as shown below, regardless of whether the INF specifies any directives for that section or not:
[DDInstall] Include=umpass.inf Needs=UmPass ; also include any existing DDInstall directives [DDInstall.HW] Include=umpass.inf Needs=UmPass.HW ; also include any existing DDInstall.HW directives [DDInstall.Interfaces] Include=umpass.inf Needs=UmPass.Interfaces ; also include any existing DDInstall.Interfaces directives [DDInstall.Services] Include=umpass.inf Needs=UmPass.Services ; also include any existing any DDInstall.Services directives
Accessing a device from a software component
To retrieve the device instance ID of a device that is associated with a software component, use the SoftwareArguments value in the INF AddSoftware directive section with the
<<DeviceInstanceID>> runtime context variable.
The executable can then retrieve the device instance ID of the software component from its incoming argument list.
Next, if the software component is targeting the Universal target platform, use the following procedure:
- Call CM_Locate_DevNode with the device instance ID of the software component to retrieve a device handle.
- Call CM_Get_Parent to retrieve a handle to that device’s parent. This parent is the device that added the software component using the INF AddComponent Directive.
- Then, to retrieve the device instance ID of the parent, call CM_Get_Device_ID on the handle from CM_Get_Parent.
If the software component is targeting the Desktop target platform only, use the following procedure:
- Call SetupDiCreateDeviceInfoList to create an empty device information set.
- Call SetupDiOpenDeviceInfo with the software component device's device instance ID.
- Call SetupDiGetDeviceProperty with
DEVPKEY_Device_Parentto retrieve the device instance ID of the parent.
Example
The following example shows how you might use a software component to install a control panel using an executable for a graphics card.
Driver package INF file
[Version] Signature = "$WINDOWS NT$" Class = Extension ClassGuid = {e2f84ce7-8efa-411c-aa69-97454ca4cb57} ExtensionId = {zzzzzzzz-zzzz-zzzz-zzzz-zzzzzzzzzzzz} ; replace with your own GUID Provider = %CONTOSO% DriverVer = 06/21/2006,1.0.0.0 CatalogFile = ContosoGrfx.cat [Manufacturer] %CONTOSO%=Contoso,NTx86 [Contoso.NTx86] %ContosoGrfx.DeviceDesc%=ContosoGrfx, PCI\VEN0001&DEV0001 [ContosoGrfx.NT] ;empty [ContosoGrfx.NT.Components] AddComponent = ContosoControlPanel,, Component_Inst [Component_Inst] ComponentIDs = VID0001&PID0001&SID0001 [Strings] CONTOSO = "Contoso Inc." ContosoGrfx.DeviceDesc = "Contoso Graphics Card Extension"
Software component INF file
[Version] Signature = "$WINDOWS NT$" Class = SoftwareComponent ClassGuid = {5c4c3332-344d-483c-8739-259e934c9cc8} Provider = %CONTOSO% DriverVer = 06/21/2006,1.0.0.0 CatalogFile = ContosoCtrlPnl.cat [SourceDisksNames] 1 = %Disk%,,,"" [SourceDisksFiles] ContosoCtrlPnl.exe = 1 [DestinationDirs] DefaultDestDir = 13 [Manufacturer] %CONTOSO%=Contoso,NTx86 [Contoso.NTx86] %ContosoCtrlPnl.DeviceDesc%=ContosoCtrlPnl, SWC\VID0001&PID0001&SID0001 [ContosoCtrlPnl.NT] CopyFiles=ContosoCtrlPnl.NT.Copy [ContosoCtrlPnl.NT.Copy] ContosoCtrlPnl.exe [ContosoCtrlPNl.NT.Services] AddService = , %SPSVCINST_ASSOCSERVICE% [ContosoCtrlPnl.NT.Software] AddSoftware = ContosoGrfx1CtrlPnl,, Software_Inst [Software_Inst] SoftwareType = 1 SoftwareBinary = %13%\ContosoCtrlPnl.exe SoftwareArguments = <<DeviceInstanceID>> SoftwareVersion = 1.0.0.0 [Strings] SPSVCINST_ASSOCSERVICE = 0x00000002 CONTOSO = "Contoso" ContosoCtrlPnl.DeviceDesc = "Contoso Control Panel"
The driver validation and submission process is the same for component INFs as for regular INFs. For more info, see Windows HLK Getting Started.
For more info on setup classes, see System-Defined Device Setup Classes Available to Vendors.
See Also
INF AddComponent Directive
INF AddSoftware directive
INF DDInstall.Components Section
INF DDInstall.Software Section
Feedback
Submit and view feedback for | https://docs.microsoft.com/en-us/windows-hardware/drivers/install/using-a-component-inf-file | 2022-08-08T00:01:51 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
ESP USB Bridge¶
The ESP USB Bridge is an ESP-IDF project utilizing an ESP32-S2 (or optionally, an ESP32-S3) chip to create a bridge between a computer (PC) and a target microcontroller (MCU). It can serve as a replacement for USB-to-UART chips (e.g. CP210x). Official reference can be found here.
Contents
Configuration¶
You can configure debugging tool using debug_tool option in “platformio.ini” (Project Configuration File):
[env:myenv] platform = ... board = ... debug_tool = esp-usb-bridge
If you would like to use this tool for firmware uploading, please change upload protocol:
[env:myenv] platform = ... board = ... debug_tool = esp-usb-bridge upload_protocol = esp-usb-bridge
More options: | https://docs.platformio.org/en/stable/plus/debug-tools/esp-usb-bridge.html | 2022-08-07T21:41:30 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../_images/esp-usb-bridge.png',
'../../_images/esp-usb-bridge.png'], dtype=object)] | docs.platformio.org |
How Devengo paginates information for large collections
Pagination
All paginated collections return extra information to let API consumers know important API links to navigate the collection and the
total_items in case the client needs to calculate the number of pages.
"links": { "self": "", "first": "", "prev": "", "next": "", "last": "" }, "meta": { "pagination": { "total_items": 10000 } } | https://docs.devengo.com/reference/basics-pagination | 2022-08-07T22:21:13 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.devengo.com |
Job Processor Scripting
A Job Processor script is a PHP code, which runs in the context of a job being processed. The script runs after a job has been parsed and all user/queue policies have been applied. It runs before the job is made
ready or
paused.
The typical usage of the script is the modification of the job based on its properties. For example:
if ($this->pageCount > 10) { $this->duplex = "longEdge"; }
In this example, if the job has more than 10 pages, it's released as duplex.
$this is the job being processed. It's an instance of the
Job class.
Basic PHP functions can be used. For MyQ specific classes and functions, see Job Scripting Reference.
To set up processing via the PHP script, go to the properties panel of the queue where you want to use it, open the Job processing tab, and go to the Scripting (PHP) section. Under Actions after processing, enter the script in the box.
| https://docs.myq-solution.com/print-server/8.2/job-processor-scripting | 2022-08-07T23:04:06 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../print-server/8.2/993001477/ActionsAfterProcessing.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'Actions after processing properties sub-tab'], dtype=object) ] | docs.myq-solution.com |
All public logs
Combined display of all available logs of UABgrid Documentation. You can narrow down the view by selecting a log type, the username (case-sensitive), or the affected page (also case-sensitive).
- 11:56, 27 August 2010 [email protected] (Talk | contribs) moved page Trust UABgrid CA to Using UABgrid Web Sites (More descriptive of the content.) | https://docs.uabgrid.uab.edu/w/index.php?title=Special:Log&page=Trust+UABgrid+CA | 2022-08-07T22:30:17 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.uabgrid.uab.edu |
Disabled (application usage component)
FlexNet Manager Suite 2022 R1 (On-Premises)
Command line | Registry
Disabledspecifies whether the application usage component is inactive on this managed device. When set to
True, the FlexNet inventory agent does not record application usage data. When set to
False, the FlexNet inventory agent records application usage data.
Note: The schedule component has a preference of the same name, used for a slightly different purpose.Background: Application usage tracking (sometimes called "application metering") by the FlexNet inventory agent). This preference can be modified in three ways:
- At adoption or installation time for the FlexNet inventory agent on a target Windows device, the installation configuration file can establish your preferred default (but only for Windows):
- On Windows, the
USAGEAGENT_DISABLEsetting from the mgssetup.ini file is written to the registry location shown below
- On UNIX-like platforms, the mgsft_rollout_response file does not support a setting for application usage tracking; but manual editing of the preference is possible (described next).
- On a target device, you can edit this preference setting manually:
- On Windows, edit the registry setting shown below, or deploy a registry change using your preferred deployment tool
- For UNIX-like platforms, you can edit the UNIX preference file (config.ini) and deploy the update separately.
- The simplest method is that you can update the setting for selected target devices through the web interface. Navigate to tab, and scroll down to the Application usage options section. Changes recorded here are deployed automatically to the target inventory devices through the next policy update, where they adjust this setting appropriately on each device.
Values
Command line
Registry
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/topics/PMD-Disabled-AppUsg.html | 2022-08-07T23:04:28 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
Note
The following information is about Microsoft Stream (Classic) which will eventually be retired and replaced by Stream (on SharePoint). To start using the newer video solution today, just upload your videos to SharePoint, Teams, Yammer, or OneDrive. Videos stored in Microsoft 365 the way you'd store any other file is the basis for Microsoft Stream (on SharePoint). Learn more...
Search content in Microsoft Stream (Classic)
You can search for content in Microsoft Stream (Classic) from the top of any page with the Search box in the application bar. You can search for videos, channels, people, and browse groups.
If you don't have access to a video, channel, or group it won't show up in Stream (Classic) or your search results.
Search across Stream (Classic)
Type in a word or phrase into the Search box at the top of Microsoft Stream. Press enter or click the magnifying glass.
Click Videos, Channels, or People to narrow your search results.
For videos and channels, use Sort by to sort the results to further make it easier to find what you are looking for.
We currently don't support searching for groups but you can browse groups and sort them to find the group you are looking for.
Deep search on what's said in the video
When searching for videos, Stream (Classic) finds videos based not only on the title and description, but also based on what's being said in the video. From the search results based on the transcript of the video, you can jump to the exact point in the video that has the information you are looking for.
From Browse > Videos > Search for videos or the Search box at the top of Stream, type a word or phrase to search for.
When you see video search results with a time code (for example, @00:10), click. | https://docs.microsoft.com/en-ca/stream/portal-search-browse-filter | 2022-08-07T22:50:55 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
Pinch-Hitting Across the Pond
Just when I thought I was done with Flight Sim it turns out I need to jet to Europe next week to present to some journalists. Everyone else who could go is otherwise occupied with the AvSim convention or just getting FSX wrapped up. It should be fun. I visited two years back to talk about plans to the local marketing teams. I'm looking forward to returning to show off the fruits of those discussions. Plus, I'm flying Airbus all the way--how appropriate! | https://docs.microsoft.com/en-us/archive/blogs/tdragger/pinch-hitting-across-the-pond | 2022-08-07T22:23:28 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
NSUrl
Session Handler. Credentials Property
Definition
Important
Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
Gets or sets authentication information used by this handler.
The authentication credentials associated with the handler. The default is
null. | https://docs.microsoft.com/en-us/dotnet/api/system.net.http.nsurlsessionhandler.credentials?view=xamarinios-10.8 | 2022-08-07T23:30:59 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
Please note that this article is only for users of a self-hosted shop. It is not relevant for users of a Shopware 6 cloud environment.
Shopware 6 offers you the option of extending the range of functions. To do this, go to Extensions > My Extensions and manage the extensions that are already available for your shop. You can purchase new extensions in the store.
The area my extensions is divided into several sub-sections.
Apps (1): Shows you an overview of the apps available in the shop and in your account.
Theme (2): Shows you an overview of the themes available in the shop and in your account.
Recommendations (3): Here you will receive recommendations for extensions, divided into regions and categories.
Shopware Account (4): Here you can link your shop with your Shopware account.
Upload extension (5): If you have an extension as a zip file, you can upload it here.
Detailed information on the respective section can be found further down in this documentation.
The apps section offers you an overview of all apps added to your shop.
The app overview is divided into several columns
Overview of apps (1): Here you can see an overview of all apps and the most important information about each app.
Hide inactive extensions (2): Use this button to hide all apps that are currently not activated in order to obtain a better overview of the active apps.
Sorting (3): Here you can specify the criterion according to which the overview should be sorted.
Active button (4): With this button you can activate or deactivate installed apps. In contrast to uninstalling, deactivating has the advantage that settings you have made in the app are not lost.
Install app (5): If you upload an app or have bought one in your Shopware account, it is not yet installed in your shop. You can install the app using this button. Please note that the app is initially deactivated after installation. If you want to use it straight away, it is necessary to activate it right away.
"..." button (6): Here you can call up the context menu for the respective apps. Different functions are then available in the menu, depending on the status of the app.
Uninstall: Uninstalls the app but does not remove it completely from the system so that it remains in the overview. This option is available if the app is installed.
Remove: Removes the app from the system. This option is available if the app is not yet installed or has been uninstalled.
Configuration: Opens the configuration menu. This option is available for apps that use their own configuration menu (e.g. the PayPal app).
Upload extension (7): Here you can manually add an extension to the shop. You can download the zip file required for this in the extension management of your Shopware account, for example.
Notes on the active status
The active slider has three states, which are differently displayed.
Active
A white button upon a light blue background indicates that the app is active.
Inactive
If a white button is displayed upon a grey background, the app is not active.
Uninstalled
An uninstalled app is indicated by a dark grey button on a light grey background. In addition, Install app is now displayed on the right-hand side.
The licence for the extension is therefore still available and the app can be reinstalled at any time.
The Themes section offers you an overview of all themes added to your shop.
The overview is divided into several columns
Overview of themes (1): Here you can see an overview of all themes and the most important information about each theme.
Hide inactive extensions (2): Use this button to hide all themes that are currently not activated in order to obtain a better overview of the active themes.
Sorting (3): Here you can specify the criterion according to which the overview should be sorted.
Active button (4): With this button you can activate or deactivate installed themes. In contrast to uninstalling, deactivating has the advantage that settings you have made in the theme are not lost. Themes that are active here are not automatically assigned to a sales channel. The assignment only takes place in the sales channel.
Open app (5): This link takes you directly to the configuration page of the theme."..."
button (6): Here you can call up the context menu for the respective theme. You can uninstall installed themes here. You can completely remove already uninstalled themes from the administration here.
The Recommendations section provides you with extension recommendations for certain areas of your shop.
First, you select the region and category for which you would like to receive recommendations. These are then displayed below the categories. Click on the Install button to add the app directly to the app overview and install it.
Here you can connect to your Shopware account to get access to your purchases.
In order for the login to work without problems, it is necessary that you have registered and verified your domain, information on how to do this can be found here.
If the desired extension is not yet listed under apps or themes, it is first required that you add it. This is possible in two ways.
Now that the extension is available under apps or themes, you can install it. To do this, open the context menu by clicking on the "..." button. Click on Install in the menu. You can then activate the extension using the button in the Status column.
Some active extensions have their own menu item under Settings > Extensions, which you can use to open the configuration of the extension. Information on the function and configuration of the individual extensions can be found in the respective extension documentation.
If updates are available for extensions/apps, this will be displayed in the corresponding line of the extension/app in the app overview, next to the current version of the extension/app. The update will be triggered by clicking on Update (1). It will be checked whether you are authorised to update and the update will then be carried on or rejected with a corresponding message. | https://docs.shopware.com/en/shopware-6-en/extensions/myextensions | 2022-08-07T23:00:38 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.shopware.com |
A direct agent can be converted into a proxy agent, and a proxy agent can be converted into a direct agent. A proxy agent can be moved from one proxy server to another.
You can find the migration scripts in the following location in the OpsRamp console. Automation > Scripts > OpsRamp Agent
There are separate scripts for migrating Linux and Windows agents. The scripts are self-explanatory and you can use them to migrate agents. | https://jpdemopod2.docs.opsramp.com/platform-features/agents/deploy-on-windows/migrate-agent/ | 2022-08-07T23:18:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | jpdemopod2.docs.opsramp.com |
public abstract class SubAssembly extends Pipe
Pipes so they my be reused in the same manner a Pipe is used. That is, a typical SubAssembly subclass will accept a 'previous' Pipe instance, and a few arguments for configuring the resulting sub-assembly. The previous pipe (or pipes) must be passed on the super constructor, or set via
setPrevious(Pipe...). This allows the current SubAssembly to become the parent of any Pipe instances between the previous and the tails, exclusive of the previous, and inclusive of the tails. Subsequently all tail Pipes must be set via the
setTails(Pipe...)method. Note if the SubAssembly represents a split in the pipeline process, all the 'tails' of the assembly must be passed to
setTails(Pipe...). It is up the the developer to provide any other access to the tails so they may be chained into any subsequent Pipes. Any
ConfigDefvalues on this SubAssembly will be honored by child Pipe instances via the
Pipe.getParent()back link described above.
configDef, name, nodeConfigDef, parent, stepConfigDef
equals, getConfigDef, getHeads, getNodeConfigDef, getParent, getStepConfigDef, getTrace, hasConfigDef, hashCode, hasNodeConfigDef, hasStepConfigDef, id, named, names, outgoingScopeFor, pipes, print, printInternal, resolveIncomingOperationArgumentFields, resolveIncomingOperationPassThroughFields, setParent, toString
clone, finalize, getClass, notify, notifyAll, wait, wait, wait
protected SubAssembly()
protected SubAssembly(Pipe... previous)
protected SubAssembly(java.lang.String name, Pipe[] previous)
public static Pipe[] unwind(Pipe... tails)
tails- of type Pipe[]
protected void setPrevious(Pipe... previous)
previous- of type Pipe
protected void setTails(Pipe... tails)
tails- of type Pipe
public Pipe[] getTails()
setTails(Pipe...).
public java.lang.String[] getTailNames()
public java.lang.String getName()
getNamein class
Pipe
public Pipe[] getPrevious()
getPreviousin class
Pipe | http://docs.concurrentinc.com/cascading/3.3/javadoc/cascading-core/cascading/pipe/SubAssembly.html | 2022-08-07T21:20:23 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.concurrentinc.com |
Date: Sun, 7 Aug 2022 22:39:09 +0000 (UTC) Message-ID: <[email protected]> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_12802_1693478556.1659911949843" ------=_Part_12802_1693478556.1659911949843 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
frevvo palette offers a rich variety = of controls that let you create virtually any form. All controls provide fu= nctionality as soon as you drop them into your form but need to be customiz= ed (edited) to suit the form you are designing.
The purpose of each control is described below.
On this page:. frevvo. See Setting Properties for the details.<= /p>es=
Test ic=
on in the frevvo designer. The=
form URL will have the &_formTz=3D<tz> paramete=
r with the value of the browser's timezone appended to it. The URL for=
a form being tested in the Eastern DayLight Time (EDT) timezone would be:<=
at= ion. Business Rules execute in the form time zone.
Requires users to enter a valid EMail address. The address must conform = to the following syntax: <name>@<name>.<string>
Allows users to enter U.S. currency. Users may type commas and a decimal= point but frevvo will not add them to t= he data automatically. The form also will round all entries to two decimal = places. For example, if the user enters 4000, it will display as 4000.00 wh=.
frevvo supports 5 types of Selec= tion controls. Selection controls let users choose from a list of opti= ons instead of having to enter text.
See the Item Width property for styling options. option in the selection control is chosen.
frevvo LDAP customers may have many u= sers/roles configured on their LDAP server. What if the designer wants to p= ull a list of users or roles from their LDAP server into a form or flow for= user selection? Using a traditional dropdown control is not ideal especial= ly when you have thousands of users to pick from. Dropdown control options = are confined to a predefined list and maybe limited by the default max resu= lt size of an LDAP query. The ComboBo= x control allows users to pick from a predefined list of values or type a v= alue that is not in the list.
The Combobox is used = internally for frevvo functions such as = creating the Access Control List.
Here's how it works:
Users can typeahead to narrow the choices based on the letters entered. = User ids and roles are case sensitive so remember to use the correct case w= hen typing.
The combobox supports the ability = to specify single or multiple values. Simply check the single value checkbo= x if you want to limit the choice to one value.bs= p; provide a text box for additional information when the last op= tion provide= a text box for additional information when the last option in the sel=.= URL endpoint (ex= : servlet) that can save the attachments locally within your company. For m= ore details see handling attachments.
The File Name property allows designers to speci= fy a naming convention for the attachment in the form/flow submission. in-house by mo= difying the context parameter: frevvo.upload.file.types in the web.xml file= . See Installation Tasks for th=. frevvo subm= ission repository does not save the uploaded zipfile as a zipfile. Instead,= it will save it as a file named Upload91. As a workaround, you can open th= e file using winrar.exe and then save it or simply use right click and and select the 'save as' option.:
&n= bsp;/f= low. The internal upper limit is controlled by a configuration parameter - = frevvo.attachment.maxsize. In the frevvo= span> cloud. It is set to 10 mb. If you enter a value into the control= 's max attachment size property greater than frevvo internal upper limit, you will see an error message d= isplayed on the upload control.
In-house, customers can control the fre= vvo internal upper limit via the frevvo.attachment.= maxsize parameter in the web.xml or frevvo.xml files. Initially, this v= alue is set to 10485760 bytes. You can also set the m= ax attachment size per user by editing the user's profile as the admin user= and editing the Configuration field shown below:
The value in the user profile takes precedence over the configuration pa= rameter.
The File Name property allows designers to set up an at= tachment name that applies to all files uploaded with an upload control. At= tachments with configured file names will be seen when viewing the attachment section of a submissi= on. contro= l template value. frevvo will try to= replace the template with a matching value. If no match is found then the = template will not be included in the filename. For example, if the = File Name property is set up as Full {Name} but there is not a Nam= e control in the form with a value in it. The resulting filename of the att= achment will be Full.docx Since frevvo cannot resolve the template, the {Name} is removed. if = not supported. Non image file types uploaded are ignored. The d= esigner.<= /span>
Message controls are most often used to add static text to your form or = flow, Form titles and helpful instructions for your users are just a few wa= ys to use the Message control, It can also be used in the following not so = obvious situations:
The message control supports rich text input in design mode via the Rich= Text Editor. The editor simplifies the creation of your message content.
When you drag a Message control from the palette and drop it into your f= orm/flow, then click on it, the mes= sage control transforms to an in-place rich text editor. The editor consists of an editing area for entering text and an icon in the top left th=. The Source button is worthy of mention:
By default, text is wrapped in an HTML paragraph tag (<p>) wheneve=
r more than one line is entered. This can lead to undesired vertical space.=
To remove the extra space - for example - if you only have one line =
of text, click the Source button and remove the surroundin=
g html tags.
There are two ways to apply text styling features;
If you are using the Chrome browser, you must type the text, select it a= nd then click the menu in the designer, click on any other control= or somewhere else in the designer canvas. When you click away form the Mes= sage control you can get an idea of how it will look to your users. Of cour= se, you can always click thePreview = form icon to see how your form displays.
You can chose a message type to display different background colors, dec= orators or a border from the Message Type dropdown on the Setting tab of the property pane= l. You can still select style properties on the Style Ta= b.
Whwn you drop a Message Control in a Table, the rich text editor is rend= ered much smaller when you click on it,This is illustrated below. Notice th= at none of the rich text editor's menus are displayed.
Clicking theicon to toggle the = menus on expands the height of the in-place editor.
The Rich Text editor will be vertical=
ly expanded when you show the menus in Message Controls or any group contro=
l that you drop a Message control into that is less than 4 columns wide.
Let's say you wanted to modify a sect= ion of your form to reflect a horizontal layout as shown in the image.
One approach would be to use the&n= bsp;Table and Radio Controls. = ;Another alternative is to use a Message control and a Radio control= . Follow these steps:. T=
he New Line
property is checked by default. =
;.. This will add a blank area on the= left of that image and move it into the center. Click the Style t= ab if you need to modify the width of the message control.
This lets you include video files in your form and works the same way as= the image control. The control supports .SWF files and other video file fo= rmats that are supported by your browser. This control does not allow the d= esigner to resize the video area. If your video is already hosted on anothe= r web service it is often best to embed your video into your form using a <= a href=3D"#PaletteControls-MessageControl">message control and adding t= he following html code to that control as shown below:
<ifr= ame width=3D"100%" height=3D"500" src=3D"// c" frameborder=3D"0"></iframe>=20
If you are using MySql and yo= u are uploading a large video, you may see this message:
The d= efault value of max_allowed_packet configuration parameter may not be large enou= gh. See this documentation for the solution. = p>
The trigger control adds a button to your form and is used in conjunctio= n with rules. If your form does not have rules you will not need the trigge= r control. If your form does have rules, see Triggers &a= mp; palet= te and drop it into your form the default width is 3 columns. You can chang= yo== ts, a=. co= lumn width to fill the remaining space in the section. Since panels are gro= up controls, you drag other controls inside them. Below are three pan= els that have been dragged in to the palette for use in a three-column layo= ut. Inside the first panel is a text control (city), a dropdown control (st= ate) and another text control (zip). prop= erty or allow only a certain number of items based on user i= nput using a business rule. Yo= u can also use business rules for computed values, enabling/disabling fie= lds, showing/hiding fields etc.de= lete icon to eliminate deletion of a row in a table rows using the
When tabbing through a form, use Shif= t th=
ree rows and three columns. The columns will display the default names of&n=
bsp;col 0, col 1 and col 2 respectively. The&nb=
sp;
icons will display for each row in the ta=
ble, allowing addition and deletion of rows. Deleting all the rows wil=
l result in a table with one row. Notice the name of each tables column is unique.<=
th=
e table. The minus icon will become a w=
hen the Min # value is reached or there is only one row in the tab=
le.72/Setting+Properties#SettingProperties-Min#andM= ax#">Setting Properties for more Information. bus= iness rule.
If you have a dropdown, radio or checkbox control in your table the requ= ired and comment properties are selectable on the cell property p= anel. for= more information.
Error messages for invalid data in a table will display o= neat= ion as shown in the image.
You can use a Table control with Message and Radio controls to accomplis= h this. Follow these u= ser input.
Repeat controls do not have a label. You can identify a repeat con= trol in the designer by the palette icon for a repeat seen on top-left header.more disap= pear from the form and only the icons will remain. special template syntax. See templates for repeat controls for the details..
<= /p><= /p>
Tablet and Smartphone forms are gener= ated using HTML5 controls so features like Custom Keyboards, Date pickers e= tc. can be used. For example: not be visible by d= efault when you edit your form in the form designer. You can toggle the vis= ibility of these controls via the= icon in the toolbar., frevvo will generate a Smartphone layout and a Tablet l= ayout with the page-breaks specified. If page-breaks are not spec= ified, there will be just one page. The Employee on Boarding form shown bel= ow, che= cked. dro=
pped BETWEEN 2 controls or ABOVE any control. For example, say you had a fo=
rm with two panels side by side and you wanted to drop a PageBreak control =
in between them. Drag the PageBreak control from the palette to the left of=
the second panel in the form until you see the
up can= noti= c Signatures for more information.
The Form Viewer Control is used to allow a generated PDF to be view= ed as part of a form/flow. This control is visible in th= e Forms designer palette to be used when generating "Pixel Perfect" = PDFs in the Forms designer. Refer to this = documentation for the details.
.
Testing Your Mobile Form= h3>
Currently, the best way to test mobile forms/flows is to click the
Test=
icon on the Forms Home Pag=
e in the designer. You can click on tablet and phone icons at the top to see an appro=
ximate rendering of your form on the selected device. . Sometimes, the mobi=
le form/flow does not look the same in these test form views as it does on =
the mobile device. It is recommended that you design/debug your form/flow c=
ompletely on the desktop first. Then you can add your form/flow to a space and look at it on the mobile dev=
ice. | https://docs.frevvo.com/d/exportword?pageId=19252020 | 2022-08-07T22:39:09 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.frevvo.com |
T- The event type
public interface ApplicationEventPublisher<T>
Interface for classes that publish events received by
ApplicationEventListener instances.
Note that this interface is designed for application level, non-blocking synchronous events for decoupling code and is not a replacement for a messaging system
static final ApplicationEventPublisher<?> NO_OP
static <K> ApplicationEventPublisher<K> noOp()
ApplicationEventPublisher.
K- The event type
ApplicationEventPublisher
void publishEvent(@NonNull T event)
event- The event to publish
@NonNull default Future<Void> publishEventAsync(@NonNull T event)
event- The event to publish | https://docs.micronaut.io/3.4.0/api/io/micronaut/context/event/ApplicationEventPublisher.html | 2022-08-07T22:02:14 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.micronaut.io |
Licenses
The total number of embedded terminals that can run at the same time is equal to the number allowed by the embedded terminal licenses. If the number of embedded licenses at the server is exhausted, the terminal is deactivated. As a result, users cannot log in to this terminal and the User session not loaded message appears on the terminal.
To regain access to the terminal, you can add a new license or deactivate one of the currently activated terminals and then, reactivate the printing device on the MyQ Web administrator interface.
For information on how to add embedded terminal licenses, activate them, and extend the software assurance period, see Licenses in the MyQ Print Server guide.
Additional information about special license editions (Education, Government, NFR, Trial, etc.) is displayed on the terminals login screen. It is not displayed on devices with 4.3” display because of lack of space.
Education license example
Government license example
NFR license example
| https://docs.myq-solution.com/hp-emb/8.2/licenses | 2022-08-07T22:59:42 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../hp-emb/8.2/244383773/NoLicenseLoginFail.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'User session not loaded error due to no license'], dtype=object)
array(['../../hp-emb/8.2/244383773/EduLicense.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'Education edition login screen'], dtype=object)
array(['../../hp-emb/8.2/244383773/GovLicense.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'Government edition login screen'], dtype=object)
array(['../../hp-emb/8.2/244383773/NFRLicense.png?inst-v=0c2ae7d1-025f-4de5-814c-3cd0f5cd6f8b',
'NFR license login screen'], dtype=object) ] | docs.myq-solution.com |
Overview¶
The different tenants in OpenStack is divided into domains and projects. One domain usually has the form “company.com” and may have several projects underneath. The projects usually has a name ending on the same thing as the domain it belongs to, so typically “project1.company.com” and “project2.company.com”. A project is an administrative entity in OpenStack and can be seen as a separate environment. A very common setup is to have different projects for the test and production environments, like “test.company.com” and “prod.company.com”. This way common resources can be shared between the different instances (virtual machines) but kept apart between the different environments.
When you log into the platform you will be greated with the “Overview”-screen were you can see how much resources you currently is using in the project.
The pie charts show how much of the resources compared to the set quota.
Instances¶
Instances means virtual machine in OpenStack. As you can see two instances out of 10 possible (with the current quota settings) is running in the project.
VCPUs¶
VCPU stands for Virtual CPU and translates to processor cores in Safesprings platform. In this example 8 out of 20 are running in the project.
RAM¶
RAM is exactly what you would expect it to be: memory allocated to the instances in the project.
Volumes¶
Volumes corresponds to the number of volumes created in the project. Storage in OpenStack comes in two types: ephemeral and persistent. Ephemeral storage is created with the instance and has the same lifetime as the instance. This means that ephemeral storage is removed automatically when the instance is removed. Persistent storage can be created independently of instances and attached and detached to instances. This type of storage is not tied to a specific instance and is created and deleted separately. Volumes is the notion for persistent storage in OpenStack and hence can be created in a dialogue separate from the instance creation.
Volume Snapshots¶
It is possible to take snapshots of a volume. This pie chart shows how many such snapshots that are stored currently.
Volumes Storage¶
Here you can see how much of the current storage quota that is used in the project.
Security Groups and Security Group Rules¶
Security Groups is the built-in firewall functionality in OpenStack. One Security Groups is a set of Security Group Rules with each of them corresponding to a specific port and allowed source IP, IP network of other Security Group.
Networks¶
With the new networking engine in Safesprings setup of OpenStack, Calico, there is not possible to create networks which is why this is not applicable in the platform. | https://docs.safespring.com/new/overview/ | 2022-08-07T21:30:40 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../images/np-overview.png', 'image'], dtype=object)] | docs.safespring.com |
5.1. Lesson: Creating a New Vector Dataset.
5.1.1. Follow Along: The Layer Creation Dialog
Before you can add new vector data, you need a vector dataset to add it to. In our case, you’ll begin by creating new data entirely, rather than editing an existing dataset. Therefore, you’ll need to define your own new dataset first.
Open QGIS and create a new blank project.
Navigate to and click on the menu entry Layer ► Create Layer ► New Shapefile Layer…. You'll be presented with the New Shapefile Layer dialog, which will allow you to define a new layer.
Click … for the File name field. A save dialog will appear.
Navigate to the exercise_data directory.
Save your new layer as school_property.shp.
It's important to decide which kind of dataset you want at this stage. Each different vector layer type is "built differently" in the background, so once you've created the layer, you can't change its type.
For the next exercise, we’re going to create new features which describe areas. For such features, you’ll need to create a polygon dataset.
For Geometry Type, select Polygon from the drop down menu:
This has no impact on the rest of the dialog, but it will cause the correct type of geometry to be used when the vector dataset is created.
The next field allows you to specify the Coordinate Reference System, or CRS. CRS is a method of associating numerical coordinates with a position on the surface of the Earth. See the User Manual on Working with Projections to learn more.
For this example we will use the default CRS associated with this project, which is WGS84.
Next there is a collection of fields grouped under New Field. By default, a new layer has only one attribute, the id field (which you should see in the Fields list below). However, in order for the data you create to be useful, you actually need to say something about the features you'll be creating in this new layer. For our current purposes, it will be enough to add one field called name that will hold Text data and will be limited to a text length of 80 characters.
Replicate the setup below, then click the Add to Fields List button:
Check that your dialog now looks like this:
Click OK
The new layer should appear in your Layers panel.
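If you prefer to script this step, an equivalent layer can be defined from the QGIS Python console. The sketch below is a hedged, minimal example: instead of a shapefile on disk it creates a temporary scratch (memory) layer with the same geometry type, CRS and fields, assuming QGIS 3.x; you can export it to school_property.shp afterwards via the layer's Export ► Save Features As… menu.

```python
# Minimal sketch (QGIS 3.x Python console), not the steps the lesson uses:
# build an equivalent polygon layer in memory with an integer "id" field and
# an 80-character "name" text field, in WGS 84 (EPSG:4326).
from qgis.core import QgsVectorLayer, QgsProject

uri = "Polygon?crs=EPSG:4326&field=id:integer&field=name:string(80)"
layer = QgsVectorLayer(uri, "school_property", "memory")

if not layer.isValid():
    raise RuntimeError("Could not create the scratch layer")

QgsProject.instance().addMapLayer(layer)
```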
5.1.2. Follow Along: Data Sources
Open the Data Source Manager dialog, for example by clicking the Open Data Source Manager button.
Enable Raster on the left side.
In the Source panel, click on the … button:
Navigate to exercise_data/raster/.
Select the file 3420C_2010_327_RGB_LATLNG.tif.
Click Open to close the dialog window.
Click Add and Close. An image will load into your map.
If you don’t see an aerial image appear, select the new layer, right click, and choose Zoom to Layer in the context menu.
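As a hedged alternative to the Data Source Manager, the same raster can be loaded from the Python console. The path below is an assumption: it expects QGIS to be started from the folder that contains exercise_data, so adjust it to your own layout.

```python
# Sketch: load the aerial photograph programmatically (QGIS Python console).
# Adjust the path if exercise_data lives somewhere else on your machine.
rlayer = iface.addRasterLayer(
    "exercise_data/raster/3420C_2010_327_RGB_LATLNG.tif",
    "3420C_2010_327_RGB_LATLNG",
)
if rlayer is None or not rlayer.isValid():
    print("Raster failed to load - check the file path")
```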
Click on the Zoom In button, and zoom to the area highlighted in blue below:
Now you are ready to digitize these three fields:
Before starting to digitize, let's move the school_property layer above the aerial image.
Select the school_property layer in the Layers panel and drag it to the top.
school_propertylayer in the Layers panel to select it.
Click on the Toggle Editing button.
If you can't find this button, check that the Digitizing toolbar is enabled. There should be a check mark next to the View ► Toolbars ► Digitizing menu entry.
As soon as you are in edit mode, you’ll see that some digitizing tools have become active:
Other relevant buttons are still inactive, but will become active when we start interacting with our new data.
Notice that the layer school_property in the Layers panel now has the pencil icon, indicating that it is in edit mode.
Click on the
Capture Polygon button to begin digitizing our school fields.
You’ll notice that your mouse cursor has become a crosshair. This allows you to more accurately place the points you’ll be digitizing. Remember that even when:
Click OK, and you have created a new feature!
In the Layers panel select the
school_propertylayer.
Right click and choose Open Attribute Table in the context menu.
In the table you will see the feature you just added. While in edit mode you can update the attributes data by double click on the cell you want to update.
Close the attribute table.
To save the new feature we just created, click on
Save Edits button.
Remember, if you’ve made a mistake while digitizing a feature, you can always edit it after you’re done creating it. If you’ve made a mistake, continue digitizing until you’re done creating the feature as above. Then:
Click on
Vertex Tool button.
Hover the mouse over a vertex you want to move and left click on the vertex.
Move the mouse to the correct location of the vertex, and left click. This will move the vertex to the new location.
The same procedure can be used to move a line segment, but you will need to hover over the midpoint of the line segment.
If you want to undo a change, you can press the
Undo button or Ctrl+Z.
Remember to save your changes by clicking the
Save Edits button.
When done editing, click the
Toggle Editing button to get out of edit mode.
5.1.3.
Try Yourself Digitizing Polygons
Digitize the school itself and the upper field. Use this image to assist you:
Remember that each new feature needs to have a unique
id value!
Poznámka
When you’re done adding features to a layer, remember to save your edits and then exit edit mode.
Poznámka
You can style the fill, outline and label placement and formatting
of the
school_property using techniques learnt in earlier
lessons.
5.1.4.
Follow Along: Using Vertex Editor Table
Another way to edit a feature is to manually enter the actual coordinate values for each vertex using the Vertex Editor table.
Make sure you are in edit mode on layer
school_property.
If not already activated, click on
Vertex Tool button.
Move the mouse over one of the polygon features you created in the
school_propertylayer and right click on it. This will select the feature and a Vertex Editor pane will appear.
Poznámka
This table contains the coordinates for the vertices of the feature. Notice there are seven vertices for this feature, but only six are visually identified in the map area. Upon closer inspection, one will notice that row 0 and 6 have identical coordinates. These are the start and end vertices of the feature geometry, and are required in order to create a closed polygon feature.
Click and drag a box over a vertex, or multiple vertices, of the selected feature.
The selected vertices will change to a color blue and the Vertex Editor table will have the corresponding rows highlighted, which contain the coordinates of the vertices.
To update a coordinate, double left click on the cell in the table that you want to edit and enter the updated value. In this example, the x coordinate of row
4is updated from
20.4450to
20.4444.
After entering the updated value, hit the enter key to apply the change. You will see the vertex move to the new location in the map window.
When done editing, click the
Toggle Editing button to get out of edit mode, and save your edits.
5.1.5.
Try Yourself Digitizing Lines
We:
If the roads layer is not yet in your map, then add the
roadslayer from the GeoPackage file
training-data.gpkgincluded in the
exercise_datafolder of the training data you downloaded. You can read Follow Along: Loading vector data from a GeoPackage Database for a how-to.
Create a new ESRI Shapefile line dataset called
routes.shpin the
exercise_datadirectory, with attributes
idand
type(use the approach above to guide you.)
Activate edit mode on the routes layer.
Since you are working with a line feature, click on the
Add Line button to initiate line digitizing mode.
One at a time, digitize the path and the track on the
routeslayer. Try to follow the routes as accurately as possible, adding additional points along corners or turns.
Set the
typeattribute value to
pathor
track.
Use the Layer Properties dialog to add styling to your routes. Feel free to use different styles for paths and tracks.
Save your edits and toggle off editing mode by pressing the
Toggle Editing button.
Answer
The symbology doesn’t matter, but the results should look more or less like this:
5.1.6. In Conclusion.
5.1.7. What’s Next?
Features in a GIS layer aren’t just pictures, but objects in space. For example, adjacent polygons know where they are in relation to one another. This is called topology. In the next lesson you’ll see an example of why this can be useful. | https://docs.qgis.org/3.22/cs/docs/training_manual/create_vector_data/create_new_vector.html | 2022-08-07T23:11:07 | CC-MAIN-2022-33 | 1659882570730.59 | [array(['../../../_images/move_school_layer.png',
'../../../_images/move_school_layer.png'], dtype=object)
array(['../../../_images/field_outlines1.png',
'../../../_images/field_outlines1.png'], dtype=object)
array(['../../../_images/path_start_end.png',
'../../../_images/path_start_end.png'], dtype=object)
array(['../../../_images/track_start_end.png',
'../../../_images/track_start_end.png'], dtype=object)
array(['../../../_images/routes_layer_result.png',
'../../../_images/routes_layer_result.png'], dtype=object)] | docs.qgis.org |
DeleteMarkerEntry
Information about the delete marker.
Contents
- IsLatest
Specifies whether the object is (true) or is not (false) the latest version of an object.
Type: Boolean
Required: No
- Key
The object key.
Type: String
Length Constraints: Minimum length of 1.
Required: No
- LastModified
Date and time the object was last modified.
Type: Timestamp
Required: No
- Owner
The account that created the delete marker.>
Required: No
- VersionId
Version ID of an object.
Type: String
Required: No
See Also
For more information about using this API in one of the language-specific Amazon SDKs, see the following: | https://docs.amazonaws.cn/en_us/AmazonS3/latest/API/API_DeleteMarkerEntry.html | 2022-08-07T21:56:15 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.amazonaws.cn |
Common Configuration Options¶
Master Port¶
By default, the master listens on TCP port 8080. This can be configured via the
port option.
Security¶
The master can secure all incoming connections using TLS. That ability requires a TLS private key
and certificate to be provided; set the options
security.tls.cert and
security.tls.key to
paths to a PEM-encoded TLS certificate and private key, respectively, to do so. If TLS is enabled,
the default port becomes 8443 rather than 8080. See Transport Layer Security for more information.
Configuring Trial Runner Networking¶
The master is capable of selecting the network interface that trial runners will use to communicate
when performing distributed (multi-machine) training. The network interface can be configured by
editing
task_container_defaults.dtrain_network_interface. If left unspecified, which is the
default setting, Determined will auto-discover a common network interface shared by the trial
runners.
Note
For Introduction to Distributed Training, Determined automatically detects a common network interface shared by the agent machines. If your cluster has multiple common network interfaces, please specify the fastest one.
Additionally, the ports used by the GLOO and NCCL libraries, which are used during distributed (multi-machine) training can be configured to fall within user-defined ranges. If left unspecified, ports will be chosen randomly from the unprivileged port range (1024-65535).
Default Checkpoint Storage¶
See Checkpoint Storage for details.
Telemetry¶
By default, the master and WebUI collect anonymous information about how Determined is being used. This usage information is collected so that we can improve the design of the product. Determined does not report information that can be used to identify individual users of the product, nor does it include model source code, model architecture/checkpoints, training datasets, training and validation metrics, logs, or hyperparameter values.
The information we collect from the master periodically includes:
a unique, randomly generated ID for the current database and for the current instance of the master
the version of Determined
the version of Go that was used to compile the master
the number of registered users
the number of experiments that have been created
the total number of trials across all experiments
the number of active, paused, completed, and canceled experiments
whether tasks are scheduled using Kubernetes or the built-in Determined scheduler
the total number of slots (e.g., GPUs)
the number of slots currently being utilized
the type of each configured resource pool
We also record when the following events happen:
an experiment is created
an experiment changes state
an agent connects or disconnects
a user is created (the username is not transmitted)
When an experiment is created, we report:
the name of the hyperparameter search method
the total number of hyperparameters
the number of slots (e.g., GPUs) used by each trial in the experiment
the name of the container image used
When a task terminates, we report:
the start and end time of the task
the number of slots (e.g., GPUs) used
for experiments, we also report:
the number of trials in the experiment
the total number of training workloads across all trials in the experiment
the total elapsed time for all workloads across all trials in the experiment
The information we collect from the WebUI includes:
pages that are visited
errors that occur (both network errors and uncaught exceptions)
user-triggered actions
To disable telemetry reporting in both the master and the WebUI, start the master with the
--telemetry-enabled=false flag (this can also be done by editing the master config file or
setting an environment variable, as with any other configuration option). Disabling telemetry
reporting will not affect the functionality of Determined in any way.
OpenTelemetry¶
Separate from the telemetry reporting mentioned above, Determined also supports OpenTelemetry to collect traces. This is disabled by default; to enable it, use the
master configuration setting
telemetry.otel-enabled. When enabled, the master will send
OpenTelemetry traces to a collector running at
localhost:4317. A different endpoint can be set
via the
telemetry.otel-endpoint configuration setting. | https://docs.determined.ai/latest/reference/reference-deploy/config/common-config-options.html | 2022-08-07T21:26:32 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.determined.ai |
Backing Up the Persistence Store
You can trigger backups of your persistence store, using the Java API, REST API, or Management Center. Backing up the persistence store is useful if you want to copy the data onto other clusters without having to shut down your cluster.
Before you Begin
To back up data in the persistence store, the cluster must be configured with a directory in the
persistence.backup-dir configuration. See Configuring Persistence.
How Members Create Backups
When a member receives a backup request, it becomes the coordinating member and sends a new backup sequence ID to all members.
If all members respond that no other backup is currently in progress and that no other backup request has already been made, then the coordinating member commands the cluster to start the backup process nearly instantaneously on all members.
During this process, each member creates a sequenced backup subdirectory in
the configured
backup-dir directory with the name
backup-<backupSeq>.
To make the backup process more performant, the contents of files in the persistence store are not duplicated. Instead, members create a new file name for the same persisted contents on disk, using hard links. If the hard link fails for any reason, members continue by copying the data, but future backups will still try to use hard links.
Triggering a Backup
To trigger a new backup, you can use one of the following options:
Triggering a Backup in Java
Put the cluster in a
PASSIVEstate.
Backups may be initiated during membership changes, partition table changes, or during normal data updates. As a result, some members can have outdated versions of data before they start the backup process and copy the stale persisted data. By putting your cluster in a
PASSIVEstate, you can make data more consistent on all members.
Trigger a backup.
PersistenceService service = member.getCluster().getPersistenceService(); service.backup();
The sequence number in sequenced backup subdirectories is generated by the backup process, but you can define your own sequence numbers as shown below:
PersistenceService service = member.getCluster().getPersistenceService(); long backupSeq = ... service.backup(backupSeq);
Put your cluster back in an
ACTIVEstate.
Once the backup method has returned, all cluster metadata is copied and the exact partition data which needs to be copied is marked. After that, the backup process continues asynchronously and you can return the cluster to the
ACTIVEstate and resume operations.
Monitoring the Backup Process
Only cluster and distributed object metadata is copied synchronously during the invocation of the backup method. The rest of the persistence store is copied asynchronously after the method call has ended. You can track the progress of the backup process, using one of the following options:
Java API
-
An example of how to track the progress via the Java API is shown below:
PersistenceService service = member.getCluster().getPersistenceService(); persistence stores
(defined by
PersistenceConfig.setParallelism())
but this can change at a later point to provide greater resolution.
Besides tracking the Persistence status by API, you can view the status in the
Management Center and you can inspect the on-disk files for each member.
Each member creates an
inprogress file which is created in each of the copied persistence stores.
This means that the backup is currently in progress. When the backup task completes
the backup operation, this file is removed. If an error occurs during the backup task,
the
inprogress file is renamed to
failure which contains a stack trace of the exception.
Interrupting and Canceling a Backup
Once the backup method call has returned and asynchronous copying of the partition data has started, the backup task can be interrupted. This is helpful in situations where the backup task has started at an inconvenient time. For instance, the backup task could be automated and it could be accidentally triggered during high load on the Hazelcast instances, causing the performance of the Hazelcast instances to drop.
The backup task mainly uses disk I/O,:
PersistenceService service = member.getCluster().getPersistenceService();:
PersistenceService service = member.getCluster().getPersistenceService(); service.interruptLocalBackupTask(); ...
The backup task stops as soon as possible and it does not remove the disk contents of the backup directory meaning that you must remove it manually.
Restoring from a Backup
To restore a cluster with data from a specific backup, do the following:
Remove the files in your
base-dirdirectory.
Copy the contents of a sequenced subdirectory in your
backup-dirdirectory to your
base-dirdirectory.
Restart the cluster.
To start a new cluster from the backups of an existing cluster, do the following for each existing member before starting the cluster: Copy the contents of an existing member’s backup subdirectory to the directory that’s configured in a new member’s
base-dir directory. | https://docs.hazelcast.com/hazelcast/5.1/storage/backing-up-persistence | 2022-08-07T22:14:06 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.hazelcast.com |
xdmp:rethrow() as empty-sequence()
Within the catch section of a try-catch expression, re-throw the currently caught error.
try { xdmp:document-delete($uri) } catch ($ex) { (: ignore documents that aren't there :) if ($ex/error:code eq 'XDMP-DOCNOTFOUND') then () else xdmp:rethrow() }
Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question. | https://docs.marklogic.com/9.0/xdmp:rethrow | 2022-08-07T23:01:42 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.marklogic.com |
Configure Character Settings for Outlook Web App
Applies to: Exchange Server 2010 SP3, Exchange Server 2010 SP2.
Use the Shell to configure character settings for Outlook Web App.
Set-OwaVirtualDirectory -identity "Owa (Default Web Site)" -OutboundCharset AlwaysUTF8
Note
The AlwaysUTF8 character setting on the Outlook Web App virtual directory takes precedence over user-defined settings. Outlook Web App sets the UTF-8 character on all outgoing e-mail messages, regardless of the user's language choice in Outlook Web App.
For more information about syntax and parameters, see Set-OwaVirtualDirectory.
Other Tasks
After you configure character settings for Outlook Web App, you may also want to Configure Language Settings for Outlook Web App. | https://docs.microsoft.com/en-us/previous-versions/office/exchange-server-2010/bb124898(v=exchg.141) | 2022-08-07T22:29:04 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.microsoft.com |
Editing a Relay Team Entry¶
If you want to edit an entry, then the process is much the same as entering one. On the homepage the green button will now read EDIT MY RELAY ENTRY FOR: . . .
Click this and now you can go through the entry journey and edit the number of teams you have selected for each event.
Editing Entries
Make sure you go through to the end of the entry journey to pay for any additional teams. The example below is for a “free” entry, but you can see that the changes made still require confirming, hence the Due.
Confirm Changes | https://docs.opentrack.run/cms/editrelayentry/ | 2022-08-07T22:07:28 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.opentrack.run |
Kafka - Getting started
Find out how to set up and manage your Public Cloud Databases for Kafka
Find out how to set up and manage your Public Cloud Databases for Kafka
Last updated 5th January 2022
Apache Kafka is an open-source and highly resilient event streaming platform based on 3 main capabilities:
You can get more information on Kafka from the official Kafka website{.external).
This guide explains how to successfully configure Public Cloud Databases for Kafka via the OVHcloud Control Panel.
Log in to your OVHcloud Control Panel and switch to
Public Cloud in the top navigation bar. After selecting your Public Cloud project, click on
Databases in the left-hand navigation bar under Storage.
Click the
Create a database instance button. (
Create a service if your project already contains databases.)
Click on the type of database you want to use and then select the version to install from the respective drop-down menu. Click
Next to continue.
In this step, choose an appropriate service plan. If needed, you will be able to upgrade the plan after creation.
Please visit the capabilities page of your selected database type for detailed information on each plan's properties.
Click
Next to continue.
Choose the geographical region of the datacenter where your service will be hosted.
Click
Next to continue.
You can increase the number of nodes and choose the node template in this step. The minimum and maximum amount of nodes depends on the solution chosen in step 2.
Please visit the capabilities page of your selected database type for detailed information on hardware resources and other properties of the database installation.
Take note of the pricing information and click
Next to continue.
You can name your database in this step and decide to attach a public or private network. Please note that attaching a private network is a feature not yet available at this time.
The final section will display a summary of your order as well as the API equivalent of creating this database instance with the OVHcloud API.
In a matter of minutes, your new Apache Kafka service will be deployed. Messages in the OVHcloud Control Panel will inform you when the streaming tool is ready to use.
Once the Public Cloud Databases for Kafka service is up and running, you will have to define at least one user and one authorised IP in order to fully connect to the service (as producer or consumer).
The
General information tab should inform you to create users and authorized IPs.
Switch to the
Users tab. An admin user is preconfigured during the service installation. You can add more users by clicking the
Add user button.
Enter a username, then click
Create User.
Once the user is created, the password is generated. Please keep it securely as it will not be shown again.
Passwords can be reset for the admin user or changed afterwards for other users in the
Users tab.
For security reasons the default network configuration doesn't allow any incoming connections. It is thus critical to authorize the suitable IP addresses in order to successfully access your Kafka cluster.
Switch to the
Authorized IPs tab. At least one IP address must be authorised here before you can connect to your database.
It can be your laptop IP for example.
Clicking on
Add an IP address or IP address block (CIDR) opens a new window in which you can add single IP addresses or blocks to allow access to the database.
You can edit and remove database access via the
... button in the IP table.
If you don't know how to get your IP, please visit a website like. Copy the IP address shown on this website and keep it for later.
Your Apache Kafka service is now fully accessible!
Optionally, you can configure access control lists (ACL) for granular permissions and create something called topics, as shown below.
Topics can be seen as categories, allowing you to organize your Kafka records. Producers write to topics, and consumers read from topics.
To create Kafka topics, click on the
Add a topic button:
In advanced configuration you can change the default value for the following parameters:
Public Cloud Databases for Kafka supports access control lists (ACLs) to manage permissions on topics. This approach allows you to limit the operations that are available to specific connections and to restrict access to certain data sets, which improves the security of your data.
By default the admin user has access to all topics with admin privileges. You can define some additional ACLs for all users / topics, click on
Add a new entry button:
For a particular user, and one topic (or all with '*'), define the ACL with the the following permissions:
Note: Write permission allows the service user to create new indexes that match the pattern, but it does not allow deletion of those indexes.
When multiple rules match, they are applied in the order listed above. If no rules match, access is denied.
Verify that the IP address visible from your browser application is part of the "Authorised IPs" defined for this Kafka service.
Check also that the user has granted ACLs for the target topics.
In order to connect to the Apache Kafka service, it is required to use server and user certificates.
The server CA (Certificate Authority) certificate can be downloaded from the General information tab:
The user certificate can be downloaded from the Users tab:
Also download the user access key.
As part of the Apache Kafka official installation, you will get different scripts that will also allow you to connect to Kafka in a Java 8+ environment: Apache Kafka Official Quickstart.
We propose to use a generic producer and consumer client instead: Kcat (formerly known as kafkacat). Kcat is more lightweight since it does not require a JVM.
For this client installation, please follow the instructions available at: Kafkacat Official Github.
Let's create a configuration file to simplify the CLI commands to act as Kafka Producer and Consumer:
kafkacat.conf :
bootstrap.servers=kafka-f411d2ae-f411d2ae.database.cloud.ovh.net:20186 enable.ssl.certificate.verification=false ssl.ca.location=/home/user/kafkacat/ca.pem security.protocol=ssl ssl.key.location=/home/user/kafkacat/service.key ssl.certificate.location=/home/user/kafkacat/service.cert
In our example, the cluster address and port are kafka-f411d2ae-f411d2ae.database.cloud.ovh.net:20186 and the previously downloaded CA certificates are in the /home/user/kafkacat/ folder.
Change theses values according to your own configuration.
For this first example let's push the "test-message-key" and its "test-message-content" to the "my-topic" topic.
echo test-message-content | kcat -F kafkacat.conf -P -t my-topic -k test-message-key
Note: depending on the installed binary, the CLI command can be either kcat or kafkacat.
The data can be retrieved from "my-topic".
kcat -F kafkacat.conf -C -t my-topic -o -1 -e
Note: depending on the installed binary, the CLI command can be either kcat or kafkacat.
Congratulations, you now have an up and running Apache Kafka cluster, fully managed and secured. You are able to push and retrieve data easily via CLI.
Kafka Official documentation
Some UI tools for Kafka are also available:
Visit our dedicated Discord channel:. Ask questions, provide feedback and interact directly with the team that builds our databases services. | https://docs.ovh.com/gb/en/publiccloud/databases/kafka/getting-started/ | 2022-08-07T22:23:51 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.ovh.com |
Zero-Footprint: Implementation
The Zero-footprint case does not require any software deployment, since the FlexNet inventory core components are installed on every inventory beacon as part of the FlexNet Beacon code installation. However, there are preliminary configuration steps needed to allow remote execution to proceed.
To configure remote execution:
- Ensure that you have the appropriate inventory beacons fully operational.
To do this, navigate to Network group), and check the following properties for an existing inventory beacon: (in the
To deploy and configure a new inventory beacon, click Deploy a beacon. Consult the online help for these pages for more information.
- Ensure that at least one inventory beacon is configured to cover the subnet containing the target inventory devices.While all inventory beacons receive all rules declared in the web interface of FlexNet Manager Suite (when they download the BeaconPolicy.xml file), each one enacts only those rules that apply to target devices that fall within their assigned subnet(s). This setting is available through the web interface for FlexNet Manager Suite at . See the online help there for more information.Tip: It is best practice to deploy an inventory beacon into each subnet that contains target inventory devices..
- If yours is a highly secure, locked down environment, you may need to open network ports on the target computer devices to allow for remote execution.Since the inventory beacons use standard ports to access target devices and remotely gather inventory, the required ports are already available in many environments. (The ports are documented in the online help, under FlexNet Manager Suite Help > Inventory Beacons > Inventory Beacon Reference > Ports and URLs for Inventory Beacons. The default requirements for remote execution are ports 445 for SMB on Windows and 22 for SSH on Unix.)
- Ensure adequate credentials are available for the remote execution process to run. There are two possible approaches for Windows devices:
- You can register a domain administrator account that has installation privileges on all the target computer devices within the domain. This approach minimizes entries in the Password Manager.
- You can record appropriate (potentially unique) credentials for each device in the Password Manager. With this approach, you should also add filters to limit the number of password attempts on each target device, so that the remote execution attempt is not terminated because it attempted too many credentials without success.
These credentials must be recorded in the secure Password Manager available on each inventory beacon (for details, see the online help, under FlexNet Manager Suite Help > Inventory Beacons > Password Management Page).For UNIX-like devices, the
sshdaemon must be installed, and you must either:
- Record
rootcredentials for the target device in the Password Manager on the applicable inventory beacon
- Record non-root credentials for the target device in the Password Manager on the applicable inventory beacon, and additionally ensure that a tool to allow privilege escalation (such as
sudoor
priv) is installed on target devices and either:
- The use of that tool is configured in the Password Manager (in the extra fields exposed when you specify and SSH account type), or
- Target devices are configured to allow escalation of privileges without requiring an interactive password.
- Navigate to , and create one or more rules to take inventory from target computing devices within your enterprise, and then to collect inventory from them.Rules consist of:
- Targets that identify sets of devices, and (for all the devices identified within a single target) specify policy about how to connect, whether to collect CAL evidence, whether to track application usage, and whether to adopt — for the Zero-footprint case, it is critical that either
- The target device is not included in any target that has Allow these targets to be adopted selected; or
- The target device is included in an active target for which Do not allow these targets to be adopted is selected (as a 'deny' always over-rides an 'allow'). This may be the easier condition to set and maintain over time.
- Actions that declare what to do to the targeted devices — to ensure discovery and inventory collection, the relevant action must ensure that, in the General devices discovery and inventory section (click the title bar to expand the section), one or both of the Discover ... check boxes is selected, and Gather hardware and software inventory is selected. In addition, other specialized kinds of inventory may be selected, depending on the target inventory device. By specifying multiple rules, you can customize actions to gather all required inventory types while minimizing activity on any individual target device.
- A schedule for implementing the action on the targeted devices.
FlexNet Manager Suite (On-Premises)
2022 R1 | https://docs.flexera.com/FlexNetManagerSuite2022R1/EN/GatherFNInv/SysRef/FlexNetInventoryAgent/tasks/ZFA-Implentation.html | 2022-08-07T22:34:28 | CC-MAIN-2022-33 | 1659882570730.59 | [] | docs.flexera.com |
Settings
Here are all available settings with their defaults, you can override them in your project settings
DRF_STANDARDIZED_ERRORS = { # class responsible for handling the exceptions. Can be subclassed to change # which exceptions are handled by default, to update which exceptions are # reported to error monitoring tools (like Sentry), ... "EXCEPTION_HANDLER_CLASS": "drf_standardized_errors.handler.ExceptionHandler", # class responsible for generating error response output. Can be subclassed # to change the format of the error response. "EXCEPTION_FORMATTER_CLASS": "drf_standardized_errors.formatter.ExceptionFormatter", # enable the standardized errors when DEBUG=True for unhandled exceptions. # By default, this is set to False so you're able to view the traceback in # the terminal and get more information about the exception. "ENABLE_IN_DEBUG_FOR_UNHANDLED_EXCEPTIONS": False, # When a validation error is raised in a nested serializer, the 'attr' key # of the error response will look like: # {field}{NESTED_FIELD_SEPARATOR}{nested_field} # for example: 'shipping_address.zipcode' "NESTED_FIELD_SEPARATOR": ".", } | https://drf-standardized-errors.readthedocs.io/en/latest/settings.html | 2022-08-07T21:53:40 | CC-MAIN-2022-33 | 1659882570730.59 | [] | drf-standardized-errors.readthedocs.io |
For class, read:
Roadmap for next few weeks.
Week 4: forms - submitting basic user input data to the web server; HTML.
Week 5: WSGI - building a fully functional Web server component; templating.
Week 6: More interesting Web apps; header processing & cookies.
Structure of HTTP, revisited. See presentation.
Payload of request, abstractly
Payload of response, abstractly
String whacking.
Read Strings and Lists and try to solve these problems generically, using only those string manipulation commands:
-
Pick out the 3rd value, e.g.f("a,b,c,d,e,f") == "c"
-
Extract everything after the 4th comma in a string, e.g.f("a,b,c,d,e,f,g") == "e,f,g"
-
Return the fourth and fifth lines of a multiline string, e.g.f("a\nb\nc\nd\ne\nf\n") = ["d", "e"]
-
Pick out the third and fourth values, removing leading underscores, e.g.:f("_a,_b,_c,_d,_e,_f") = ["d", "e"]
See also String Methods, and Strings: Part I, Part II, and Part III.
Testing.
Create a new directory & download two files to arctic by doing:
mkdir cse491-day5 cd cse491-day5 wget wget
Activate your virtualenv:
source ~/cse491.env/bin/activate.csh
and then run nosetests:
nosetests
You should see 8 errors from the code in ‘day5.py’. Fix the code in ‘day5.py’ so that the tests all pass!
Solutions here:
This file can be edited directly through the Web. Anyone can update and fix errors in this document with few clicks -- no downloads needed.
For an introduction to the documentation format please see the reST primer. | http://msu-web-dev.readthedocs.io/en/latest/day5.html | 2018-02-17T21:41:32 | CC-MAIN-2018-09 | 1518891807825.38 | [] | msu-web-dev.readthedocs.io |
Appendix B: Collection of Anonymous Data
This solution includes an option to send anonymous usage data to AWS. We use this data to better understand how customers use this solution and related services and products. When enabled, the following information is collected and sent to AWS during initial stack creation:
Solution ID: The AWS solution identifier
Unique ID (UUID): Randomly generated, unique identifier for each Deployment Pipeline for Go Applications deployment
Timestamp: Data-collection timestamp
Code Repository Data:The AWS CloudFormation template (AWS CodeCommit or GitHub) version launched
Note that AWS will own the data gathered via this survey. Data collection will be subject to the AWS Privacy Policy. To opt out of this feature, modify the AWS CloudFormation template mapping section as follows:
Send: AnonymousUsage: Data: Yes
to
Send: AnonymousUsage: Data: No | https://docs.aws.amazon.com/solutions/latest/deployment-pipeline-go-applications/appendix-b.html | 2018-12-10T00:49:20 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.aws.amazon.com |
To create a new Mobile Forms account, click the "Sign Up" button on the home page and enter your details in the form that follows. A valid email address is required to activate the account, so make sure you enter this correctly.
All users are automatically issued with a 14-day trial account with full Enterprise functionality. After 14 days you will be contacted by email to allow you to sign up for a paid account or migrate to a free account. Free accounts only support a single device, and have limited functionality.
Activating Your Account
Once an account has been created you need to activate it before it can be used. An activation email will be sent to the email address you entered at sign up. Click the link in this email to activate your account.
Note: If your email client does not let you click on links within emails, copy and paste the link into your browser address bar and press enter.
Logging in to Your Account
Following the activation link will automatically log you into your account and take you to your organization's home page. However, you can always log in by entering your credentials on the page that follows after you click "Sign In" at the top right hand corner of the DeviceMagic.com.
That's it, browse the rest of our Help Center to learn more about how to use your new Device Magic account. | https://docs.devicemagic.com/getting-started-with-device-magic/the-basics/creating-an-account | 2018-12-09T23:40:04 | CC-MAIN-2018-51 | 1544376823228.36 | [array(['https://downloads.intercomcdn.com/i/o/56916438/051a8c829652ae589c39d991/Device-Magic-Account-SignUp.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56916515/fe5c752f6e8584da6d315504/Device-Magic-Account-StartTrial.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/56916761/7faadef8cf1e22fcb331e123/Device-Magic-Account-Login.png',
None], dtype=object) ] | docs.devicemagic.com |
Windows PowerShell: Defining Parameters
There are simple and complex ways to define parameters in Windows PowerShell, and both ways have their benefits.
Don Jones
You’ll often write a script or function that needs to accept some kind of input. This could be a computer name, a file path or anything like that. You can tell Windows PowerShell to expect these parameters, collect them from the command line, and put their values into variables within your script or function. That makes dealing with input easy and efficient.
You just have to know how to declare your parameters. The simplest means of doing so is the param block:
Param( [string]$computerName, [string]$filePath )
You don’t have to break that down into separate lines like I’ve done. It’s legal to run it all together on a single line. I prefer to break it down for easier reading, though. When used as the first lines of code in a script file or within a function, Windows PowerShell reads this and will even tab-complete parameter names when someone runs the script or function. I’ve been careful to use parameter names: they’re –computerName and –filePath here. These are similar to the ones other Windows PowerShell cmdlets use for this kind of information. That way, my parameters are consistent with what’s already in the shell.
If I put this into a script named Get-Something.ps1, I’d use the parameters like this:
./Get-Something –computerName SERVER1 –filePath C:\Whatever
I could also truncate the parameter names. This lets me type fewer characters and they still work:
./Get-Something –comp SERVER1 –file C:\Whatever
I could even omit the names entirely. Windows PowerShell will automatically and positionally accept values. Here I need to be careful to provide values in the same order in which the parameters are listed in my file:
./Get-Something SERVER1 C:\Whatever
Of course, by using parameter names, the command becomes a bit easier for a person to figure out. I then get the luxury of putting the parameters in any order I want:
./Get-Something –filePath C:\Whatever –computerName SERVER1
Windows PowerShell also provides a more complex way of declaring parameters. This more full-fledged syntax lets you define parameters as mandatory, specify a position (if you don’t do so, then the parameter can only be used by name) and more. This expanded syntax is also legal in both scripts and functions:
[CmdletBinding()] Param( [Parameter(Mandatory=$True,Position=1)] [string]$computerName, [Parameter(Mandatory=$True)] [string]$filePath )
Again, you can run all that together on a single line, but breaking it down makes it a bit easier to read. I’ve given both of my parameters a [Parameter()] decorator, and defined them both as mandatory.
If someone tries to run my script and forgets one or both of these parameters, the shell will prompt for them automatically. There’s no extra work on my part needed to make that happen. I’ve also defined –computerName as being in the first position, but –filePath needs to be provided by name.
There are some other advantages to using the [CmdletBinding()] directive. For one, it ensures my script or function will have all the Windows PowerShell common parameters, including –Verbose and –Debug. Now, I can use Write-Verbose and Write-Debug within my script or function, and their output will be suppressed automatically.
Run the script or function with –Verbose or –Debug, and Write-Verbose or Write-Debug (respectively) are magically activated. That’s a great way to produce step-by-step progress information (Write-Verbose) or add debugging breakpoints (Write-Debug) in your scripts.
As they’re currently written, both parameters will accept only a single value. Declaring them as [string[]] would let them accept an entire collection of values. You’d then enumerate this using a Foreach loop, so you could work with one value at a time.
Another neat parameter type is [switch]:
Param([switch]$DoSomething)
Now, I can run my script or function with no –DoSomething parameter and internally the $DoSomething variable will be $False. If I run the script with the –DoSomething parameter, $DoSomething gets set to $True. There’s no need to pass a value to the parameter. Windows PowerShell sets it to $True if you simply include it. This is how switch parameters operate, such as the –recurse parameter of Get-ChildItem.
Keep in mind that each parameter is its own entity, and it’s separated from the next parameter by a comma. You’ll notice that in a previous example:
[CmdletBinding()] Param( [Parameter(Mandatory=$True,Position=1)] [string]$computerName, [Parameter(Mandatory=$True)] [string]$filePath )
There the entire –computerName parameter, including its [Parameter()] decorator, appears before the comma. The comma indicates I’m done explaining the first parameter and I’m ready to move on to the next. Everything associated with –filePath follows the comma. If I needed a third parameter, I’d put another comma:
[CmdletBinding()] Param( [Parameter(Mandatory=$True,Position=1)] [string]$computerName, [Parameter(Mandatory=$True)] [string]$filePath, [switch]$DoSomething )
All of that is contained within the Param() block. Note that you don’t have to use the [Parameter()] decorator on every parameter. Only use it on the ones where you need to declare something, such as the parameter being mandatory, accepting pipeline input, being in a certain position and so on. Run help about_functions_advanced_parameters in Windows PowerShell for more information on other attributes you can declare that way.
Writing functions and scripts that accept input only via parameters is a best practice. It makes them more self-contained, easier to document and more consistent with the way the rest of the shell works.
. | https://docs.microsoft.com/en-us/previous-versions/technet-magazine/jj554301(v=msdn.10) | 2018-12-10T00:30:07 | CC-MAIN-2018-51 | 1544376823228.36 | [array(['images/ff404193.don_jones%28en-us%2cmsdn.10%29.jpg',
'Don Jones Don Jones'], dtype=object) ] | docs.microsoft.com |
Create a Seating Chart
You can create seating charts that contain information to help you take attendance in each of your classes. These seating charts can include names, pictures, APIDs, database fields, grades and averages, and specific assignment columns. When creating these charts you can determine the seating chart name, the teacher's position, the seat arrangement grid's size and order, and the arrangement of students within the grid.
Click Attendance > Seating Chart on the side navigation menu.
You can also click
at the top of the startup screen.
The most recently saved seating chart for the current class gradebook opens in the Attendance tab. If you haven't created any seating charts yet, the tab is blank.
- Click the Seating Chart tab, and then, next to the Seating Chart drop-down list, click + Add New.
- In the Add New Seating Chart panel, fill out the Seating Chart Name box.
Expand Arrange Seats, and set the grid size by filling out the Choose number of seats per row box and the Choose number of rows box.
If you leave Arrange Students set to Random the seating chart generated is blank. You can add students by editing the seating chart. For more information see Edit or Delete a Seating Chart.
Optional: To create a customized seating chart, do any of the following:
- Arrange Seats: Expand Arrange Seats, and set the Teacher position relative to the grid and/or determine How {Assign All} places students in the grid.
- Label Seats: Expand What fields to show on seat, select any combination of the following: Picture, APID, Name (Full, First, or Last), Database Field, Show Averages and Grades, and Assignment Columns.
- Arrange Students: Expand Arrange Students, and click one of the following: Random, By Rank, By data field (Alphabetically), Alphabetically, or By ID.
If you skip customizing the options above, your seating chart will follow the default settings.
- At the top of the Add New Seating Chart panel, click Save. | https://docs.rediker.com/guides/teacherplus-gradebook/html5/take-attendance/create-a-seating-chart.htm | 2018-12-09T23:22:17 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.rediker.com |
Contents Now Platform Administration Previous Topic Next Topic Set LDAP connection properties ... SAVE AS PDF Selected Topic Topic & Subtopics All Topics in Contents Other Share Set LDAP connection properties Configure your LDAP server connection properties. Before you beginRole required: admin Procedure Navigate to System LDAP > LDAP Servers. Select the LDAP server to configure. Set the connection property fields (see table). Click Update. Table 1. LDAP connection properties. scripts for LDAPScheduled data imports for LDAPRelated ReferenceLDAP communication channels On this page Send Feedback Previous Topic Next Topic | https://docs.servicenow.com/bundle/helsinki-platform-administration/page/integrate/ldap/task/t_SetLDAPConnectionProperties.html | 2018-12-10T00:38:37 | CC-MAIN-2018-51 | 1544376823228.36 | [] | docs.servicenow.com |
The linking framework¶
One of the strengths of Glue is the ability to be able to link different datasets together. The How Data Linking Works page describes how to set up links graphically from the Glue application, but in this page, we look at how links are set up programmatically.
Creating component links programmatically¶
As described in Working with Data objects, components are identified by
ComponentID instances. We can then use these
to create links across datasets. Note that links are not defined between
Data or
Component
objects, but between
ComponentID instances.
The basic linking object is
ComponentLink.
This describes how two
ComponentID instances
are linked. The following example demonstrates how to set up a
ComponentLink programmatically:
>>> from glue.core import Data, DataCollection >>> d1 = Data(x1=[1, 2, 3]) >>> d2 = Data(x2=[2, 3, 4, 5]) >>> dc = DataCollection([d1, d2]) >>> from glue.core.component_link import ComponentLink >>> link = ComponentLink([d1.id['x1']], d2.id['x2'])
Note that the first
argument of
ComponentLink should be a list of
ComponentID
instances.
Since no linking function was specified in the above example,
ComponentLink defaults to the simplest kind
of link,
identity. For the link to be useful, we need to add it to the data
collection, and we’ll be able to see what it changes:
>>> dc.add_link(link)
If we look at the list of components on the
Data
objects, we see that the
x2 component in
d2 has been replaced by
x1:
>>> print(d1.components) [Pixel Axis 0, World 0, x1] >>> print(d2.components) [Pixel Axis 0, World 0, x1]
This is because we used the identify transform, so since the
ComponentID objects
x1 and
x2 are
interchangeable, Glue decided to use
x1 instead of
x2 in
d2 for
simplicity.
The benefit of this is now that if we create a
SubsetState based on the
x1
ComponentID, this
SubsetState will be applicable to both datasets:
>>> subset_state = d2.id['x1'] > 2.5 >>> subset_group = dc.new_subset_group('x1 > 2.5', subset_state)
This has now created subsets in both
d1 and
d2:
>>> d1.subsets[0].to_mask() array([False, False, True], dtype=bool) >>> d2.subsets[0].to_mask() array([False, True, True, True], dtype=bool)
Let’s now try and use a custom linking function that is not simply identity:
>>> link = ComponentLink([d1.id['x1']], d2.id['x2'], ... using=lambda x: 2*x) >>> dc.add_link(link)
This time, if we look at the list of components on the
Data
objects, we see that
d1 now has an additional component,
x2:
>>> print(d1.components) [Pixel Axis 0, World 0, x1, x2] >>> print(d2.components) [Pixel Axis 0, World 0, x2]
We can take a look at the values of all the components:
>>> print(d1['x1']) [1 2 3] >>> print(d1['x2']) [2 4 6] >>> print(d2['x2']) [2 3 4 5]
In this case, both datasets have kept their original components, but
d1 now
also includes an
x2
DerivedComponent which
was computed as being twice the values of
d1['x1'].
Creating simple component links can also be done using arithmetic operations on
ComponentID instances:
>>> d3 = Data(xa=[1, 2, 3], xb=[1, 3, 5]) >>> dc = DataCollection([d3]) >>> diff = d3.id['xa'] - d3.id['xb'] >>> diff <BinaryComponentLink: (xa - xb)> >>> dc.add_link(diff) >>> d3['diff'] array([ 0, -1, -2])
Note
This is different from using comparison operators such as
> or
<= on
ComponentID instances,
which produces
SubsetState objects.
It is also possible to add a component link to just one particular
Data object, in which case this is equivalent to creating a
DerivedComponent. The following:
>>> from glue.core import Data >>> d4 = Data(xa=[1, 2, 3], xb=[1, 3, 5]) >>> link = d4.id['xa'] * 2 >>> d4.add_component_link(link, 'xa_double_1') <glue.core.component.DerivedComponent object at 0x107b2c828> >>> print(d4['xa_double_1']) [2 4 6]
is equivalent to creating a derived component:
>>> d4['xa_double_2'] = d4.id['xa'] * 2 >>> print(d4['xa_double_2']) [2 4 6]
When adding a component link via the
DataCollection
add_link() method, new
component IDs are only added to
Data objects for which
the set of
ComponentID required for the link
already exist. For instance, in the following example,
xu is only added to
d6:
>>> d5 = Data(xs=[5, 5, 6]) >>> d6 = Data(xt=[3, 2, 3]) >>> dc = DataCollection([d5, d6]) >>> new_component = ComponentID('xu') >>> link = ComponentLink([d6.id['xt']], new_component, ... using=lambda x: x + 3) >>> dc.add_link(link) >>> print(d5.components) [Pixel Axis 0, World 0, xs] >>> print(d6.components) [Pixel Axis 0, World 0, xt, xu]
Built-in link functions¶
Glue includes a number of built-in link functions that are collected in the
link_function registry object from
glue.config. You can easily create new link functions as described in Custom Link Functions, and these will then be available through the user interface, as shown in How Data Linking Works in the User guide. | https://glueviz.readthedocs.io/en/v0.7.0/developer_guide/linking.html | 2018-12-09T23:34:49 | CC-MAIN-2018-51 | 1544376823228.36 | [] | glueviz.readthedocs.io |
Reporting Bugs¶
Python is a mature programming language which has established a reputation for stability. In order to maintain this reputation, the developers would like to know of any deficiencies you find in Python.
Documentation bugs¶ the Python issue tracker
Using the Python issue tracker¶. | https://docs.python.org/3.4/bugs.html | 2016-10-21T11:08:19 | CC-MAIN-2016-44 | 1476988717963.49 | [] | docs.python.org |
JTableUsergroup::delete::delete
Description
Delete this object and it's dependancies.
Description:JTableUsergroup::delete [Edit Descripton]
public function delete ($oid=null)
- Returns mixed Boolean or Exception.
- Defined on line 148 of libraries/joomla/database/table/usergroup.php
- Since
See also
JTableUsergroup::delete source code on BitBucket
Class JTableUsergroup
Subpackage Database
- Other versions of JTableUsergroup::delete
SeeAlso:JTableUsergroup::delete [Edit See Also]
User contributed notes
<CodeExamplesForm /> | https://docs.joomla.org/API17:JTableUsergroup::delete | 2016-10-21T11:33:42 | CC-MAIN-2016-44 | 1476988717963.49 | [] | docs.joomla.org |
...
- borderColor: defines the color of the shape's outline. If false, the outline will not be drawn.
- borderWidth: defines the thickness of the shape's outline.
- fill: defines the color, paint or gradient to fill the shape's content.
- opacity: controls how much of the shape is visible, value must be in the range [0..1], default is 1.
- asShape: creates the shape but does not render to the screen, useful for mixing complex shapes.
- asImage: creates the shape but does not render to the screen, useful for drawing images or applying textures.
... | http://docs.codehaus.org/pages/diffpages.action?pageId=47611923&originalId=43057179 | 2013-12-05T07:21:58 | CC-MAIN-2013-48 | 1386163041301 | [] | docs.codehaus.org |
One long standing request of the GeoTools code base is to offer an operations api for working on Features (similar to what is available for grid coverage).
The idea here is to have a low level interface to handle a very simple kind of operations on data.
Here are examples of what I call "usual operations on.
Eclesia has three main reasons:
Jody has two reasons:
Relationship to Web Processing Service:
Acceptance tests are the best definition of scope, the final API we present here will meet the following requirements.
Controlling scope:
Controlling the scope of "process":
Not quite in scope:
While our design will address these concerns, we will only implement what we need at this time. It is way better to wait until someone has a real live problem in hand in order to test the solution.
The API is currently being defined using code - the final API will meet several acceptance tests.
Goals:
Feedback:
Single thread example (say from a main method):
Multiple threads example (say from a Swing Button):
Goals:
Feedback:
Goals:
Feedback:
Goals:
Feedback: | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=70418447 | 2013-12-05T07:10:38 | CC-MAIN-2013-48 | 1386163041301 | [] | docs.codehaus.org |
Quickicon Module or 'edit' an existing Quickicon Module, navigate to the.
This module shows Quick Icons that are visible on the Control Panel (admin area home page). The Module Type name for this Module is "mod_quickicon". It is not related to a component.
At the top right you will see the toolbar:
The functions are: | http://docs.joomla.org/index.php?title=Help25:Extensions_Module_Manager_Admin_Quickicon&diff=84066&oldid=84065 | 2013-12-05T07:16:25 | CC-MAIN-2013-48 | 1386163041301 | [] | docs.joomla.org |
Description
@Scalify simplifies the task of integrating Groovy and Scala code. This transformation adds the required bytecode to your Groovy classes to make them appear as native Scala classes. It also helps when implementing or extending a Scala trait/class from Groovy.
Usage
Place @Scalify at the class level, example;
When the trait Output is compiled it will generate bytecode similar to
The first pair of methods is your typical POJO accessors (generated by Scala's @BeanProperty) which Groovy can handle quite well. The second pair of methods represent Scala's native accessors, these will be written by @Scalify (if not present already in your class definition). Additionally @Scalify will make sure that your class implements
scala.ScalaObject if it does not already.
One last trick up the sleeve is that @Scalify will generate operator friendly methods, these are the currently supported methods:
Dependencies
Make sure to have the Scala compiler & libraries on your classpath. You will also need
asm, grab the latest version available with your Groovy distribution | http://docs.codehaus.org/pages/diffpages.action?pageId=228173897&originalId=120259518 | 2013-12-05T07:21:46 | CC-MAIN-2013-48 | 1386163041301 | [] | docs.codehaus.org |
]
See also
Notes
The algorithm relies on computing the eigenvalues of the companion matrix [R241].
References
Examples
>>> coeff = [3.2, 2, 1] >>> np.roots(coeff) array([-0.3125+0.46351241j, -0.3125-0.46351241j]) | http://docs.scipy.org/doc/numpy-1.7.0/reference/generated/numpy.roots.html | 2013-12-05T07:20:44 | CC-MAIN-2013-48 | 1386163041301 | [] | docs.scipy.org |
.
Get Unlimited Access to Our Complete Business Library
Plus | http://premium.docstoc.com/docs/8070778/Congratulations-Letter-Collection | 2013-12-05T07:20:23 | CC-MAIN-2013-48 | 1386163041301 | [] | premium.docstoc.com |
Ipxroute
Updated: April 17, 2012
Applies To: Windows Server 2008, Windows Server 2008 R2
Displays and modifies information about the routing tables used by the IPX protocol. Used without parameters, ipxroute displays the default settings for packets that are sent to unknown, broadcast, and multicast addresses. For examples of how this command can be used, see Examples.
Syntax
ipxroute servers [/type=X] ipxroute ripout <Network> ipxroute resolve {guid | name} {GUID | <AdapterName>} ipxroute board= N [def] [gbr] [mbr] [remove=xxxxxxxxxxxx] ipxroute config
Parameters
Examples
To display the network segments that the workstation is attached to, the workstation node address, and frame type being used, type:
ipxroute config | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff961502(v=ws.10) | 2018-02-18T03:48:27 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.microsoft.com |
).
A configuration option:()). */; }
Backwards Compatibility Warning: Prior to SQLite version 3.20.0 (2017-08-01), the fts5() worked slightly differently. Older applications that extend FTS5 must be revised to use the new technique shown above.. */ );:
/* ** Implementation of an auxiliary function that returns the number ** of tokens in the current row (including all columns). */); } }. The exception is if the table was created with the offsets=0 option specified. In this case *piOff is always set to -1.); -- -- CREATE VIRTUAL TABLE ft1_v_instance USING fts5vocab(ft1, instance);..
SQLite is in the Public Domain. | http://docs.w3cub.com/sqlite/fts5/ | 2018-02-18T02:59:58 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.w3cub.com |
How to edit existing helpers
To edit an existing helper, head to the "helpers" page (in your left menu) and click on the edit icon for the helper that you'd like to update.
A panel should appear that shows the current settings for this helper.
Make any updates that you'd like, and hit save. Easy.
Alternatively, you can launch the visualizer tool on a page that you know the helper is loaded on, click on "edit contextuals", find the helper that you want to change, and click edit. Make the changes you want, and hit save. | http://docs.elevio.help/en/articles/81559-how-to-edit-existing-contextuals | 2018-02-18T03:09:14 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.elevio.help |
Error getting tags :
error 404Error getting tags :
error 404
exp10(2) -- returns 100
exp10(-3) -- returns 0.001
Use the exp10 function to obtain a power of 10.
Parameters:
The number is a real number, or an expression that evaluates to a number.
Value:
The exp10 function returns a positive number.
The expression
exp10(number) is equal to
10^number.
If number is a positive integer,
exp10(number) is a 1 followed by number zeros. If number is a negative integer,
exp10(number) is a decimal point followed by number-1 zeros and a one. | http://docs.runrev.com/Function/exp10 | 2018-02-18T03:27:38 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.runrev.com |
Error getting tags :
error 404Error getting tags :
error 404
set the dropShadow of object to propertiesArray
set the dropShadow[propertyName] of object to propertyValue
set the dropShadow of button "Ok" to tDropshadowPropertiesArray
set the dropShadow["color"] of me to "255,0,0"
Use the dropShadow property to create a shadow effect on an object. The dropShadow is an array style property, each key of the array controls a different dropShadow parameter that will affect its final appearance. The easiest way to adjust these properties is by using the Graphic Effects card of the property inspector which has full control over each parameter. To control the effect by script use the following properties:
dropShadow["color"]
The color of the shadow, in the format red,green,blue where each value is between 0 and 255.
dropShadow["blendMode"]
How the shadow is blended with objects behind it. This is one of the following values:
- "normal" : the shadow is laid over the background.
- "multiply" : this results in a darkening effect
- "colorDodge" : this results in a lightening effect
dropShadow["opacity"]
How opaque the shadow is. The value is between 0 (fully transparent) and 255 (fully opaque).
dropShadow["filter"]
Which algorithm is used to render the shadow. This is one of the following options:
"gaussian" : highest quality (and slowest)
"box3pass" : high quality.
"box2pass" : medium quality
"box1pass" : low quality (and fastest)
When using the "colorDodge" blend mode, it is recommended that you set the filter mode to "gaussian".
dropShadow["size"]
The size of the shadow, i.e. how large the shadow is. This is between 0 and 255.
dropShadow["spread"]
This controls where the effect begins to blend. This is between 0 and 255.
dropShadow["distance"]
This controls how far the shadow is offset from the object. This is between 0 and 359.
dropShadow["angle"]
The controls the direction the shadow is cast in. This is between 0 and 360. | http://docs.runrev.com/Property/dropshadow | 2018-02-18T03:27:30 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.runrev.com |
The Settings dialog box contains general AppDNA options. To open this dialog box, choose from the menus.
The options on the Reporting page are:
Records per page – Specifies the number of applications that appear on a report view page. When the value is very large (for example, more than 500), performance may deteriorate – for example, scrolling may become jerky and the page may take too long to display. The default is 100.
This setting is automatically updated when you change the number of records on the page in the Report Viewer itself. However, it is useful to be able to change the value here if it has inadvertently been set it to a very large value and the page becomes unusably slow.
This setting does not affect the AppDNA web client. For information about changing the number of records per page in the AppDNA web client, see Report issues.
Show counts in PDF exports – Select this check box to show in the Report Data section of the PDF exports, columns for all of the algorithm groups in the report. These columns show how many times the application has triggered the algorithms in the group. (An application can potentially trigger the same algorithm multiple times – for example, when the same issue is detected in multiple components.) By default, those columns are hidden in PDF exports so that reports with many algorithm groups fit the available space.
Application complexity thresholds – The more files and registry entries an application has, the more complex it is to remediate and test. Therefore application complexity is measured by the number of files and registry entries within the application. AppDNA defines three levels of application complexity – simple, normal, and complex. The thresholds define the lower and upper bounds of what constitutes a normal complexity application.
The default threshold values are based on extensive testing, but you can adjust them if required. The default threshold values are:
The following table shows the icon for each application complexity level and provides an example based on the default thresholds.
The Effort Calculator uses the application complexity when estimating the time and effort involved in a migration project. In addition, the application complexity icons are shown in some of the report views – for example, the Overview Summary. | https://docs.citrix.com/de-de/dna/7-12/reporting/reporting-settings.html | 2018-02-18T03:25:35 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.citrix.com |
Creating Areas / Rooms
Clicking the settings icon in the top right will take you to the Settings menu for the system. It is recommended to set up your Areas / Rooms / Circuits before setting up the various source, screen & HDL devices.
Tap on
Areas >
Add a new Area, and enter the area name.
Once an Area has been created, a new Room can be added to the Area by tapping Add a new Room. Rooms have a name i.e. ‘Lounge’, and an associated Image.
There are several options for including.
Adding AV Controls to a Room
AV Controls provide a mechanism to control AV Equipment from within the Blustream App, or as a quick way to jump to other Apps e.g. Sonos.
To add an AV Control to the room tap on
Add a new AV Control and enter the AV Control name i.e.
Sky+ HD. You can then choose a custom icon/image and begin to add a user-interface (UI) element, Actions or a mixture of both.
A typical setup would involve sending the TV Power On/Source commands as actions, and then have the source control as the AV Interface.
Adding Circuits to a Room.
Adding Scenes to a Room
Click
Add a Scene and enter the scene name, e.g. of the scene settings.
Adding Shade Controls to a Room
Shade controls can be used to control either curtains or blinds.
Click
Add a new Curtain Control and enter a name for the control, e,g
Main Curtains. Shade controls can also be either
Adjustable or
Non-Adjustable.
Adding Heating and HVAC Controls
In order for the
Underfloor Heating or
HVAC controls to be available in the app interface, you must add a heating control to a room, and link that virtual heating control to the heating device channel.
Simply select either
Add a new Heating Control or
Add a new HVAC Control and type in a name.
UV Switch Controls
UV Controls in the app can either be an
On Switch or an
On/Off Switch. On switches will automatically send the
off command to reset the switch after it has been activated – this is commonly used with the IR Emitters. Simply tap
Add a new UV Switch Control and enter it's
name and
type.
Note: Many of the Room Controls can be easily added to another room by choosing the
Add an existing _ option where available.
Once Areas and Rooms have been created, it's time to add devices. | http://blustream.docs.demopad.com/basics/area_room_creation/ | 2018-02-18T02:44:40 | CC-MAIN-2018-09 | 1518891811352.60 | [array(['/images/5-settings-page.png', None], dtype=object)
array(['/images/6-area-creation.png', None], dtype=object)
array(['/images/7-room-creation.png', None], dtype=object)
array(['/images/9-camera-roll-images.png', None], dtype=object)
array(['/images/10-av-controls.png', None], dtype=object)
array(['/images/11-circuits.png', None], dtype=object)
array(['/images/13-scenes.png', None], dtype=object)
array(['/images/12-blinds.png', None], dtype=object)
array(['/images/14-heating.png', None], dtype=object)
array(['/images/15-uv-switches.png', None], dtype=object)] | blustream.docs.demopad.com |
FAQ/Troubleshoot
Building / Compiling the PhoneGap Desktop
How do I fix the
too many files open erroror
Fatal error: EMFile, open errorusing the
Grunttask to build / compile PhoneGap Desktop?
On Mac OSX follow these instructions.
Operating System Compatibility
Does the PhoneGap Desktop work on Windows?
Yes, however it's only been tested on Windows 7 and Windows 8.
If you're having problems running the PhoneGap Desktop App on Windows try these steps or try using this workaround.
PhoneGap Desktop App & PhoneGap Developer App
Why won't my PhoneGap Developer App connect to the local server started by the PhoneGap Desktop App?
The computer with PhoneGap Desktop App and the mobile device with PhoneGap Developer App must be on the same network.
Your network may have the PhoneGap Desktop App's server port blocked. this could be the result of network security settings, firewall, VPN or being on a corporate network. | http://docs.phonegap.com/references/desktop-app/troubleshoot-faq/ | 2018-02-18T03:28:03 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.phonegap.com |
The maven plugin
This plugin is useful for building parts that use maven.
The maven build system is commonly used to build Java projects. The plugin requires a pom.xml in the root of the source tree.
Plugin-specific keywords
- maven-options: (list of strings) flags to pass to the build using the maven semantics for parameters. | https://docs.snapcraft.io/reference/plugins/maven | 2018-02-18T03:26:10 | CC-MAIN-2018-09 | 1518891811352.60 | [] | docs.snapcraft.io |
Custom Manager Support using Regex
The
regex manager is designed to allow users to manually configure Renovate for how to find dependencies that aren't detected by the built-in package managers.
This manager is unique in Renovate in that:
- It is configurable via regex named capture groups
- Through the use of the
regexManagersconfig, multiple "regex managers" can be created for the same repository.
Required Fields
The first two required fields are
fileMatch and
matchStrings.
fileMatch works the same as any manager, while
matchStrings is a
regexManagers concept and is used for configuring a regular expression with named capture groups.
In order for Renovate to look up a dependency and decide about updates, it then needs the following information about each dependency:
- The dependency's name
- Which
datasourceto look up (e.g. npm, Docker, GitHub tags, etc)
- Which version scheme to apply (defaults to
semver, but also may be other values like
pep440)
Configuration-wise, it works like this:
- You must capture the
currentValueof the dependency in a named capture group
- You must have either a
depNamecapture group or a
depNameTemplateconfig field
- You can optionally have a
lookupNamecapture group or a
lookupNameTemplateif it differs from
depName
- You must have either a
datasourcecapture group or a
datasourceTemplateconfig field
- You can optionally have a
versioningcapture group or a
versioningTemplateconfig field. If neither are present,
semverwill be used as the default
- You can optionally have a
currentDigestcapture group.
Regular Expression Capture Groups
To be fully effective with the regex manager, you will need to understand regular expressions and named capture groups, although sometimes enough examples can compensate for lack of experience.
Consider this
Dockerfile:
FROM node:12 ENV YARN_VERSION=1.19.1 RUN curl -o- -L | bash -s -- --version ${YARN_VERSION}
You would need to capture the
currentValue using a named capture group, like so:
ENV YARN_VERSION=(?<currentValue>.*?)\n.
If you're looking for an online regex testing tool that supports capture groups, try.
Configuration templates
In many cases, named capture groups alone won't be enough and you'll need to configure Renovate with additional information about how to look up a dependency. Continuing the above example with Yarn, here is the full config:
{ "regexManagers": [ { "fileMatch": ["^Dockerfile$"], "matchStrings": ["ENV YARN_VERSION=(?<currentValue>.*?)\n"], "depNameTemplate": "yarn", "datasourceTemplate": "npm" } ] }
Advanced Capture
Let's say that your
Dockerfile has many
ENV variables you want to keep updated and you prefer not to write one
regexManagers rule per variable. Instead you could enhance your
Dockerfile like the following:
ARG IMAGE=node:12@sha256:6e5264cd4cfaefd7174b2bc10c7f9a1c2b99d98d127fc57a802d264da9fb43bd FROM ${IMAGE} # renovate: datasource=github-tags depName=nodejs/node versioning=node ENV NODE_VERSION=10.19.0 # renovate: datasource=github-releases depName=composer/composer ENV COMPOSER_VERSION=1.9.3 # renovate: datasource=docker depName=docker versioning=docker ENV DOCKER_VERSION=19.03.1 # renovate: datasource=npm depName=yarn ENV YARN_VERSION=1.19.1
The above (obviously not a complete
Dockerfile, but abbreviated for this example), could then be supported accordingly:
{ "regexManagers": [ { "fileMatch": ["^Dockerfile$"], "matchStrings": [ "datasource=(?<datasource>.*?) depName=(?<depName>.*?)( versioning=(?<versioning>.*?))?\\sENV .*?_VERSION=(?<currentValue>.*)\\s" ], "versioningTemplate": "{{#if versioning}}{{{versioning}}}{{else}}semver{{/if}}" }, { "fileMatch": ["^Dockerfile$"], "matchStrings": [ "ARG IMAGE=(?<depName>.*?):(?<currentValue>.*?)@(?<currentDigest>sha256:[a-f0-9]+)s" ], "datasourceTemplate": "docker" } ] }
In the above the
versioningTemplate is not actually necessary because Renovate already defaults to
semver versioning, but it has been included to help illustrate why we call these fields templates. They are named this way because they are compiled using
handlebars and so can be composed from values you collect in named capture groups. You will usually want to use the tripe brace
{{{ }}} template (e.v.
{{{versioniong}}} to be safe because
handlebars escapes special characters by default with double braces.
By adding the comments to the
Dockerfile, you can see that instead of four separate
regexManagers being required, there is now only one - and the
Dockerfile itself is now somewhat better documented too. The syntax we used there is completely arbitrary and you may choose your own instead if you prefer - just be sure to update your
matchStrings regex. | https://docs.renovatebot.com/modules/manager/regex/ | 2020-09-18T22:35:49 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.renovatebot.com |
Does the solder server have to be on port 80?
Posted in Hosting by Sean Cox Tue Oct 18 2016 02:50:56 GMT+0000 (Coordinated Universal Time)·Viewed 2,016 times
Can solder listen to a port other than 80? Can it be behind SSL (HTTPS)? Can I stick it (or just the mod repo) on a virtual directory ()? | https://docs.solder.io/v0.7/discuss/58058e106d62452700f12deb | 2020-09-18T23:38:46 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.solder.io |
Technical Blog
Technical and Product News and Insights from Rackspace
Custom!
Boot. | https://docs.rackspace.com/blog/authors/mike-metral/ | 2020-09-19T00:05:27 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.rackspace.com |
On the Xinet server, you will need to add the Xinet SAML auth module (based on mod_auth_mellon) in order to support the Apache Authentication module.Warning: Do not use yum to install the mellon module. It will not work. The Xinet module has been modified to accommodate Portal sites and will conflict with the mellon module.Xinet provides a modified mod_auth_xinetsaml.so library that supports Portal sites. Get the correct version for your server (Redhat 6 or 7 is supported) and put it on your Xinet server.You will need to create Entity ID names for the Service Providers (SPs) Note that the Xinet server and each Portal Site are all SPs. These names are arbitrary strings but they do have to be known by the IdP and must be unique compared to any other SP that the IdP serves. We recommend using only alphanumeric characters or a period, and no other special characters.An example would be creating /etc/httpd/xinetsaml as user "apache", mode 700, and copying the XML to "idp-metadata.xml" in that folder.Use the Xinet provided script mellon_create_metadata.sh to generate the necessary output.Values you will need to provide are EntityID of the SP that you created in the Create Entity ID names section and the hostname or IP address of your Xinet server.Here’s an example command line where the EntityID is “xinet.15.webnative” and the host machine IP address is “192.168.0.15”:For SimpleSAML, the files listed in the output should be in the current directory where you ran the script.'AssertionConsumerService' => '','SingleLogoutService' => '',The .cert and .key files created by mellon_create_metadata.sh need to go where Apache on the Xinet server can access them.Copy the .cert and .key files to /etc/httpd/xinetsaml on the Xinet server and note this location. The location is arbitrary, but the location will be used in the Add Mellon entries to httpd.conf section.This entry points to the location of mod_auth_xinetsaml.so that was determined in the Install mod_auth_xinetsaml.so library section.To create the conf file to load the module, /etc/httpd/conf.modules.d/10-saml.conf, run:echo "LoadModule auth_mellon_module modules/xinet/mod_auth_xinetsaml.so" > /etc/httpd/conf.modules.d/10-saml.confNote: If /etc/httpd/conf.d/10-auth_mellon.conf exists delete it. It also means the standard mellon auth module had been installed on this machine!For Xinet server configuration, you have to add the Mellon module configuration to three sections in the httpd.conf file. Once for each area on the filesystem that Xinet uses.There are several variables where your specific information will be different from the above values. Use the correct values for the following variables:This information is in the /var/simplesamlphp/attributemap/name2oid.php file on the SimpleSAML IdP server. Search for “userid” in that file and use that value. It will have a similar format to the value in the examples below.For Google IdP you can map any attribute you want to the user ID. We chose “uid'’ to map the user email which is what is used to log in.The string you chose for the Xinet Service Provider Entity ID in the Create Entity ID names section.The path where you put the ".key" output file in the Copy the .cert and .key files to the Xinet server section.The path where you put the ".cert" output file in the Copy the .cert and .key files to the Xinet server section.Here is a sample output of the repeated section in the configs that must be updated from the sections below.The italics show what is removed, and bold text what is added. 
Italics within a bold line shows fields that need to be edited.Taking note, again, of the above entries that will need specific information for your set up, update the information for the WebNative document directory section. (The italics show what is removed, and bold text what is added. Italics within a bold line shows fields that need to be edited.)Taking note, again, of the above entries that will need specific information for your set up, update the WebNative styles directory section. (The italics show what is removed, and bold text what is added. Italics within a bold line shows fields that need to be edited.) | https://docs.xinet.com/docs/Xinet/19.2.1/AllGuides/SSOLinux.37.2.html | 2020-09-18T22:51:04 | CC-MAIN-2020-40 | 1600400189264.5 | [] | docs.xinet.com |
During the resynchronization of a DataKeeper resource, the state of this resource instance on the target server is “Resyncing”. However, the resource instance is “Source” (ISP) on the primary server. The LifeKeeper GUI reflects this status by representing the DataKeeper resource on the target server with the following icon:
and the DataKeeper resource on the primary server with this icon:
As soon as the resynchronization is complete, the resource state on the target becomes “Target” and the icon changes to the following:
The following points should be noted about the resynchronization process:
- A SIOS DataKeeper resource and its parent resources cannot fail over to a target that was in the synchronization process when the primary failed.
- If your DataKeeper resource is taken out of service/deactivated during the synchronization of a target server, that resource can only be brought back into service/activated on the same system or on another target that is already in sync (if multiple targets exist), and the resynchronization will continue.
- If your primary server becomes inoperable during the synchronization process, any target server that is in the synchronization process will not be able to bring your DataKeeper resource into service. Once your primary server becomes functional again, a resynchronization of the mirror will continue.
フィードバック
フィードバックありがとうございました
このトピックへフィードバック | http://docs.us.sios.com/spslinux/9.3.2/ja/topic/resynchronization | 2020-11-24T01:26:03 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.us.sios.com |
SocketCore.Port
From Xojo Documentation
or
IntegerValue = aSocketCore.Port
Supported for all project types and targets.
The port to bind on or connect to.
Notes
On most operating systems, attempting to bind to a port less than 1024 causes a Error event to fire with an error number 107 unless the application is running with administrative permissions. This is due to security features built into the underlying OS.
You need to set the port property explicitly before any call to Listen or Connect as the Port property will be modified to reflect what the actual bound port is during the various stages of operation.
For instance, if you listen on port 8080 and a connection comes in, you can check the Port property to ensure that you’re still listening on port 8080 (that the port hasn’t been hijacked). Or, if you connect to a socket on port 8080, once the connection occurs, you can check to see what port the OS has bound you to. This will be a random-seeming port number.
This trick can be very useful when you do things like Listen on port 0. In that case, the OS will pick a port for you and listen on it. Then you can check the Port property to see which port the OS picked. This functionality is used for various protocols, such as FTP.
Example
This example sets the Port to 8080. | http://docs.xojo.com/SocketCore.Port | 2020-11-24T00:18:51 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.xojo.com |
Recipes¶
Atmosphere¶
- Blocking metrics and indices, teleconnections and weather regimes (MiLES)
- ClimWIP: independence & performance weighting
- Clouds
- Cloud Regime Error Metric (CREM)
- Combined Climate Extreme Index
- Consecutive dry days
- Evaluate water vapor short wave radiance absorption schemes of ESMs with the observations.
- Diurnal temperature range
- Extreme Events Indices (ETCCDI)
- Diagnostics of stratospheric dynamics and chemistry
- Heat wave and cold wave duration
- Hydroclimatic intensity and extremes (HyInt)
- Modes of variability
- Precipitation quantile bias
- Standardized Precipitation-Evapotranspiration Index (SPEI)
- Drought characteristics following Martin (2018)
- Stratosphere - Autoassess diagnostics
- Stratosphere-troposphere coupling and annular modes indices (ZMNAM)
- Thermodynamics of the Climate System - The Diagnostic Tool TheDiaTo v1.0
- Zonal and Meridional Means
Climate metrics¶
Future projections¶
- Constraining future Indian Summer Monsoon projections with the present-day precipitation over the tropical western Pacific
- Constraining uncertainty in projected gross primary production (GPP) with machine learning
- Emergent constraints for equilibrium climate sensitivity
- Emergent constraints on carbon cycle feedbacks
- Emergent constraint on equilibrium climate sensitivity from global temperature variability
- Emergent constraint on snow-albedo effect
- Equilibrium climate sensitivity
- KNMI Climate Scenarios 2014
- Multiple ensemble diagnostic regression (MDER) for constraining future austral jet position
- Projected land photosynthesis constrained by changes in the seasonal cycle of atmospheric CO2
- Transient Climate Response
Land¶
Ocean¶
Other¶
- Example recipes
- Capacity factor of wind power: Ratio of average estimated power to theoretical maximum power
- Ensemble Clustering - a cluster analysis tool for climate model simulations (EnsClus)
- Multi-model products
- RainFARM stochastic downscaling
- Seaice feedback
- Sea Ice
- Seaice drift
- Shapeselect
- Toymodel | https://docs.esmvaltool.org/en/latest/recipes/index.html | 2020-11-24T00:51:41 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.esmvaltool.org |
Set-Cs
UCPhone Configuration
Enables you to modify management options for UC phones. This includes such things as the required security mode and whether or not the phone should automatically be locked after a specified period of inactivity. This cmdlet was introduced in Lync Server 2010.
Syntax>] [[-Identity] <XdsIdentity>] [-Force] [-WhatIf] [-Confirm] [<CommonParameters>]>] [-Instance <PSObject>] [-Force] [-WhatIf] [-Confirm] [<CommonParameters>]
Description
UC phones represent the merging of the telephone and Skype for Business Server..
The CsUCPhoneConfiguration cmdlets enable you to use configuration settings to manage phones running Skype for Business.. the
Set-CsUCPhoneConfiguration cmdlet to change the value of the global collection's LoggingLevel property to True.
The following parameters are not applicable to Skype for Business Online: CalendarPollInterval, Force, Identity, Instance, PipelineVariable, SIPSecurityMode, Tenant, Voice8021p, and VoiceDiffServTag
Examples
--------------------------"
Example 2 the
Get-CsUCPhoneConfiguration cmdlet; the Filter parameter and the filter value "site:*" limit the returned data to phone settings configured at the site scope.
This filtered collection is then piped to the
Set-CsUCPhoneConfiguration cmdlet,"
Example 4 configures the EnforcePhoneLock and the PhoneLockTimeout properties for all the UC phone settings where the SIP security mode is set to either Low or Medium.
To perform this task, the command first uses the
Get-CsUCPhoneConfiguration cmdlet the
Set-CsUCPhoneConfiguration cmdlet, the
Get-CsUCPhoneConfiguration cmdlet to return a collection of all the UC phone settings currently in use in the organization.
This collection is then piped to the
Where-Object cmdlet, which picks out only those settings where the PhoneLockTimeout property is less than 10 minutes (00 hours: 10 minutes: 00 seconds).
In turn, the filtered collection is piped to the
Set-CsUCPhoneConfiguration cmdlet, which sets the PhoneLockTimeout value for each item in the collection to 10 minutes..
Suppresses the display of any non-fatal error message that might occur when running the command.
Represents the unique identifier assigned to the collection of UC phone-CsUCPhoneConfiguration cmdlet will modify the global settings.
Allows you to pass a reference to an object to the cmdlet rather than set individual parameter values..
{{Fill Tenant Description}}
Microsoft.Rtc.Management.WritableConfig.Policy.Voice.UcPhoneSettings object.
The
Set-CsUCPhoneConfiguration cmdlet accepts pipelined instances of the UC phone settings object.
Outputs
The
Set-CsUCPhoneConfiguration cmdlet does not return a value or object.
Instead, the cmdlet configures instances of the Microsoft.Rtc.Management.WritableConfig.Policy.Voice.UcPhoneSettings object. | https://docs.microsoft.com/fr-fr/powershell/module/skype/Set-CsUCPhoneConfiguration?view=skype-ps | 2020-11-24T01:50:40 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
Changes to the tournament
Before diving into the questions from Slido, Arbitrage started off by talking about the recent change to the tournament’s reputation calculations that resulted in models being reranked (including historical ranking). For Arbitrage, this put him in the top 25 for a period of about two months — a net positive for his reputation. He asked the audience, “What was your impression of the change?”
Joakim mentioned that his model is still young and dropped several ranks under the new system. “I think it has a lot to do with whether or not you already have four or more weeks in a row of submissions, then you weren’t averaged across like we used to be,” Arbitrage said. “That would increase your volatility because it was based on less than four rounds.”
The new reputation score is based on a weighted average of a model’s performance in 20 rounds. “I had a really good ramp-up from October through March, so for me I’m losing all of my good rounds every week. So my rank is going to decline as those rounds fall off. So I’m watching that with bated breath, if you will.”
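As a rough illustration of what a weighted 20-round average looks like in practice, here is a minimal Python sketch. The linearly decaying weights are purely an assumption for illustration; the recap does not specify Numerai's actual weighting scheme.

```python
import numpy as np

def reputation(round_scores, n_rounds=20):
    """Weighted average of the most recent n_rounds scores (illustrative only)."""
    recent = np.asarray(round_scores[-n_rounds:], dtype=float)
    weights = np.arange(1, len(recent) + 1)  # assumed: newer rounds weigh more
    return float(np.average(recent, weights=weights))

# A young model with only a few resolved rounds is averaged over fewer scores,
# which is one reason its rank can swing more from week to week.
print(reputation([0.02, -0.01, 0.03]))                     # 3 resolved rounds
print(reputation(list(np.random.normal(0.01, 0.02, 40))))  # mature model
```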
Arbitrage also pointed out that several tournament participants noticed that, on the same day as Office Hours, the tournament hit the 250 NMR per day payout cap.
Author’s note: since recording this episode, Numerai introduced new updates to the payout system.
Data scientist Keno expressed that it might be cause for concern among participants; someone could, for example, create multiple accounts with the same model as a way to earn more rewards without actually creating multiple models, taking up positions on the leaderboard in the process.
Arbitrage noted that this is a concern, although this behavior has yet to manifest. “It’s always been a risk,” he said, “but I don’t think anyone’s using [that method] because of diversification benefits.”
He then explained McNemar’s test, a way to score two models against each other to see if they’re the same model or not. The test produces a statistical analysis that says if the two models are similar. “I’ve proposed that as a way to sniff out if somebody is running clones of something, and also to prevent people from submitting the example predictions.”
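For readers unfamiliar with it, McNemar's test compares two models' paired outcomes on the same rows; a large p-value suggests their error patterns are statistically indistinguishable. The sketch below is a generic illustration with made-up counts (not something Numerai is known to actually run), using statsmodels:

```python
import numpy as np
from statsmodels.stats.contingency_tables import mcnemar

# 2x2 table of paired outcomes on the same rows:
#   rows    -> model A correct / wrong
#   columns -> model B correct / wrong
table = np.array([[520, 38],
                  [41, 401]])  # made-up counts for illustration

result = mcnemar(table, exact=False, correction=True)
print(result.statistic, result.pvalue)
# A high p-value means the two models' error patterns can't be distinguished,
# which is the kind of evidence you might use to flag likely clones.
```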
Keno pointed out that, historically, once the tournament data scientists “solve” the payout structure, the Numerai team is quick to update the payout calculations. He said, “Trying to earn NMR, from my observations of others, works for a little bit but then they figure it out and say, ‘these people are gaming us,’ so they change the [payouts] … you kind of have to think ‘what am I going to do with this competition — am I going to always try to game them? Or do I just submit a model that makes sense?’”
“Yeah, you’re right Keno,” Arbitrage said, “and they’ve shown in the past that they’re willing to make big moves to prevent attacks.” Arbitrage then pointed out that the tournament rules also clearly state: “We reserve the right to refund your stake and void all earnings and burns if we believe that you are actively abusing or exploiting the payout rules.”
Returning to the topic of the new payout calculations, Arbitrage explained a quick analysis he performed on his own payouts. Now that the submission correlation is the payout percentage, his average correlation is 0.81%, noting he skews a little positive and that this calculation doesn’t take into consideration bonus payouts. “I was concerned that it would be skewed negative, because in the past, that was the case: the data indicated we were more likely to burn than to earn.”
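For context, the round score being discussed is roughly the correlation between a model's predictions and the round's targets, so a 0.81% average correlation translates directly into a slightly positive average payout. A minimal sketch under that assumption, with made-up data and Spearman rank correlation:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
targets = rng.choice([0.0, 0.25, 0.5, 0.75, 1.0], size=5000)  # assumed bucketed targets
predictions = 0.05 * targets + rng.normal(0, 1, size=5000)    # a deliberately weak signal

corr, _ = spearmanr(predictions, targets)
print(f"round correlation: {corr:.4f}")  # something on the order of 0.01, i.e. ~1%
```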
Arbitrage thanked Michael Oliver for joining, telling him, “Now that I’ve interviewed you, I’m going to refer to you as my Panel of Experienced Users, along with Bor.” He asked Michael if he’s done any data exploration on the new payouts system.
Though he hasn’t done any exploring yet, Michael said he noticed that there’s going to be less day-to-day volatility because the smoothing window has more of a Gaussian shape, but it’s actually narrower because of the change from 100 days weighted equally to not being weighted equally. “You could expect that without the noise of the day-to-day fluctuation, you can expect to move up and down a little faster than before.” He added that the tournament docs have already been updated to reflect the new reputation scoring.
“Also, I think the downside will be more persistent,” Arbitrage said, “if it’s sticky on top it’ll be sticky at the bottom.” He explained that his model is hovering around 89th place on the leaderboard and isn’t gaining higher positions despite high performance. As mentioned in previous Office Hours, Arbitrage includes validation data in his training set so he experiences higher volatility in past tournaments. He wasn’t surprised that his model is sticking in the midrange on the leaderboard, saying “I kind of expected this.”
Arbitrage then shared that with expanded access to accounts (10 instead of 3), he’s testing cloned models but without including validation data in the training sets to see what the impact is on model performance, noting that it will be months before he can determine if that worked or not.
Michael asked Arbitrage why he thought excluding the Validation data might improve his model performance, as that would effectively be just excluding one year’s worth of data. Arbitrage countered that the excluded year could be very similar to five other years or extremely unlike a year from a more challenging period, either of which would weaken the signals from the more significant eras. “That’s my hunch,” he said.
To that point, Michael explained that testing that hypothesis would entail excluding random eras, random years, or random blocks of eras to see which approach would have the most positive impact on performance. “I haven’t done it,” Michael said, “but I’ve often wondered if excluding some of these early eras from so long ago might be a good idea.”
“Recency matters…
… especially when you’re trying to capture regime changes,” Arbitrage said. This is one of the reasons why he was so excited about the prospect of using the Validation data as training data: it’s more recent, so likely more relevant. When he added the validation data, Arbitrage considered dropping some of the earliest eras, but ultimately decided that was a risk. His approach is that Numerai is giving participants the data for a reason and they’re not trying to be misleading, “and that’s served me well in the past… I have found that the validation data as additional training data has increased volatility of my performance. Yeah I could punch higher, but I would get punished harder, too.”
Arbitrage explained that when he didn’t include Validation data in his training, his model performance was smoother, but it didn’t help him climb the leaderboard as much. “I haven’t figured out which is more profitable,” he said, “The upside is I did really well in the beginning of the year, but we don’t have enough time with Kazutsugi to really know how it’s going to play out.”
Arbitrage then asked another OG Numerai data scientist Themicon for their take on the latest changes, who shared that they experienced results nearly identical to Arbitrages. Themicon explained that including Validation data in their training set resulted in huge fluctuations in their score: “when everyone else was doing well, I was doing really well; when everyone else was doing bad, I was doing really bad.”
“I don’t remember who I was talking to, or when,” Arbitrage said, “but somebody just pointed out that if you do really well and include Validation data, it just means that the current live era is similar to Validation.” Arbitrage suspects that this will also apply to meta-model contribution (MMC) such that if one person trains with Validation and others don’t, the models that don’t include Validation will become performant (though it’s still too early to tell if this is the case at the time of this Office Hours).
Joakim: MMC is going to be a more difficult tournament, I reckon.
Arbitrage: I think so. If it’s a side pot, it’ll be fun, but I don’t know about targeting MMC. If I switch my model to MMC and other people do as well, if we stumble into the same solution then our share of MMC is going to decline. So you need to choose between stability and chasing MMC. I think it’s going to be very difficult to be performant and have high MMC over time. But I’m very interested to see how it all plays out. Especially with all of these crazy genetic algorithms that Bor is running.
Joakim: It might be more valuable for Numerai, though.
Arbitrage: When you think about it from a hedge fund perspective, it absolutely is more valuable for them to get 1,000 completely different but performant models, then to get clusters of 3 different types of models that are performant because then they’re really just creating a metamodel on 3 different models.
Arbitrage said he believed this was unavoidable: because of the nature of the data, tournament participants will likely converge on the most performant strategies. But, as he discussed with Richard Craib, if a data scientist treats the features differently, drops certain features or subsets of features, only trains on certain eras, or uses different blending techniques, that is where MMC becomes a powerful anchor point. There's probably enough variation in the data to capture MMC, but Arbitrage isn't convinced that there is enough diversification of modeling techniques to achieve the same thing.
In the chat, Michael Oliver posted a link to a linear model with almost perfect correlation to MMC:
Michael Oliver: It’s a linear model trained on a subset of eras in an automatically determined way. Since MMC came out, I’ve been really curious as to how MMC and the model performance line up. I don’t know what to completely make of it. It’s basically half of a mixture of linear regression models, so it tries to find the eras that best go with two linear regressions. Sort of a regime within the data. The fact that MMC and performance look so similar, I don’t know what to make of it, it’s just really interesting.
Arbitrage: Your correlation with the metamodel is similar to what I’ve seen with neural nets. Is it just a linear regression? You probably won’t tell me more than that, will you …
Michael Oliver: It’s a mix of linear regressions: it automatically parses out eras to two different linear regressions, so it’s basically about 60% of the eras.
Arbitrage pointed out that some of his students have come up with ensembles of basic linear models. Their early indication is that performance will be above average, adding that 19 of his students have completed their production models and have created tournament accounts (some of them even asking if they’re allowed to continue tinkering with their models even though the assignment was finished).
How much live data is needed to evaluate how overfit a model is? Is a month long enough? How confident could one be with 12 months of live vs 12 months of validation?
“One month is definitely not long enough.” — Arbitrage
Numerai tournament submissions make predictions on a month-long timeframe every week, essentially making the same prediction four times. This means anyone would need at least 12 weeks of performance history to evaluate a model. On top of that, if the model starts during a burn period, performance tends to be auto-correlated, meaning it could experience four to eight weeks of continuous burn.
Arbitrage said he hasn’t had the opportunity to experience entering the tournament during a burn period until recently, but he would want to know: if everybody is burning and so is his model, who recovers first? He said that if his model burns longer than the top performing users, it’s an indication that his model isn’t performant.”But that’s like, not scientific at all, just kind of a gut check.”
As for how confident one could be with 12 months of live data versus validation, Arbitrage said: “Not very- this is stocks, this is equities, we have no clue. Look at what happened with Covid-19, it’s a huge regime change right in the middle of all of this, and we have to hope that our models can survive regime changes.”
Themicon: I’ve been [competing] since 2016, and in the beginning I was changing my model every week and it did not work. I had no idea if it was me or the market or anything like that. I’ve started getting to the point where I think I have something and leave it for three or four months before I go back and look at it. I’d rather create more accounts and try other things on other accounts. I’d say four months at minimum.
Arbitrage echoed his advice to his students from back when the account limit was three: create three different accounts and over time kill the lowest performing one. Just delete it and try a new one. “If you have your own evolutionary process, similar to what Bor is doing but more manual, then you will always improve. It keeps you constantly innovating.” He added that now that the account limit is 10, maybe he would consider dropping the bottom three, but he’s unsure.
At the time of the Office Hours, Arbitrage was experiencing a flippening: his model Leverage was higher than his model Arbitrage.
Author’s note: In the time since recording this Office Hours, Arbitrage surpassed Leverage and balance has been restored.
Arbitrage never expected this to happen because his namesake model has always performed well, but now he’s thinking it needs a closer look and might warrant some tinkering. “But if Arbitrage fell to the bottom, I’d kill it,” he said mercilessly.
“Like I tell my students with trading, there are no sacred cows. You have to be willing to drop something that’s not working.” — Arbitrage
The conclusion: A longer time frame and manual evolutionary process help lead to improvement over time.
Is Numerai’s influence on the market itself big enough to make a drop in correlation of our models on live data due to obvious signals from trained data already utilized?
Phrased another way, this question is asking if Numerai is trading on the predictions and then squashing those signals’ ability to generate profit, to which Arbitrage confidently said “no” and included another question as part of his answer: has Numerai ever revealed its yearly profit numbers or given any indication if the [metamodel] is working?
Arbitrage said that he can answer both of these questions with one simple observation: all hedge funds that trade equities have to file a form 13F once they reach a certain threshold of assets under management ($100 million). Numerai has not filed a 13F, so Arbitrage suggests that we can infer it’s not a large hedge fund and therefore is not moving the market.
Was Round 202 considered a difficult round?
“Yes.”
Arbitrage believed that this round took place when most assets were seeing high correlation: gold, bitcoin, equities in every major market, and bonds all sold off and the only asset that saw any positive performance was treasuries (which barely moved because yield was already practically zero). “When correlations are 1,” he said, “everything blows up.”
Themicon: Any other eras that correlate with 202?
Arbitrage: I would suggest in the middle of the training data — there appear to be some difficult eras.
Arbitrage said that the difficult eras seem to be rare, and that he suspects there are models in the tournament that fit to those high-volatility periods while intentionally leaving off the "easier" eras. This leads to doing well when everyone else is burning, but rarely doing well after that. "Now that the data is balanced," he said, "it doesn't make sense to purposefully fit to the difficult eras." He also noted that tournament participants should expect to see eras where performance doesn't match their perception of the market, e.g. high burn despite no clear signals of volatility in the market.
Joakim: You mentioned eras 60–90 were difficult, do you know roughly what years they represent?
Arbitrage: I don’t — they’ve never officially told us when the time period starts. We can only guess, I’ve just noticed that the middle third of the eras seem to be rather difficult. I wouldn’t even know how to extrapolate that back to an actual time series, and I’m not sure that it really matters.
Even though Numerai data is delivered chronologically, Arbitrage pointed out that data scientists know so little about it to begin with that he'd be very cautious about trying to align the time series with any actual news or events, because that could introduce bias (which is one of Arbitrage's least favorite things).
Joakim: I’m mostly just curious.
Arbitrage: Oh me too! Every time I see Richard I’m asking him every possible question I can and he always laughs at me and thinks I’m an idiot for even bothering to try, but so be it.
Michael Oliver indicated in the chat that he has a counterpart model which performed well during era 202, prompting Arbitrage to wonder what the resulting score would look like if Michael averaged the performance of his primary model with the one that performed well during the high-volatility period.
Michael has considered that approach, but hasn’t come around to trying it yet. He explained that his counterpart model is trained on the disjoint set of eras from his other model, so they’re not quite mirror images of each other, but an attempt at capturing two different regimes. The counterpart model does perform well when everyone else is doing badly, but that rarely happens so the model overall isn’t particularly good.
A model diversification strategy like Michael's counterpart models may have been worthwhile in the past, but Arbitrage doesn't see the value in something like that as the tournament currently stands because, ultimately, sustained positive performance is preferable to short gains. Michael then added that he doesn't stake on these models, but finds them interesting data points for tracking performance over time.
Arbitrage asks the Panel of Experienced Users: are you going to spread your stakes out across ten models or are you going to stick with what you know?
Themicon: I think it’s too early to say at the moment. I’ve added four more accounts with ideas that I had a long time ago that I want to try out, and I’ll leave them for the next four months and see how they do. Depending on how they do I might [spread my stake around], but at the moment I’m just sticking to my original three because I know they work in different regimes.
Arbitrage: The one thing to consider is that if you are planning on staking eventually, every day that you wait, you have to wait another 100 days to earn reputation. That’s what I’m struggling with: I staked early for three accounts and staked again right in the middle of a burn sequence so I haven’t broke even yet. Let’s extrapolate out: 20 weeks in, all of your models are in the top 300, are you staking evenly on them or are you sticking with what you know?
Michael Oliver: I’m definitely sticking with what works for my biggest stakes and gradually increasing stakes on things with increased confidence. If some model is looking better overall, I might switch it to one of the higher stake accounts.
Arbitrage: I guess you could switch your stakes just by changing your submission files — I didn’t even think about that. That would blend out your reputation series too. That’s interesting, I have to think about that some more. I gotta stop talking out loud and giving out my ideas.
What are your plans to improve your models’ performance? Not asking for secret sauce, but would be interested in the direction of your and others’ thoughts.
Arbitrage said his plan is to essentially keep killing his worst performing models. He also considers volatility to be one of his parameters, so if he has a performant model but one that keeps swinging on the leaderboard, he would consider killing it just because of how volatile it is. “To me, that’s not very good.”
Ultimately, Arbitrage pointed out that iterating on tournament models takes a significant amount of time so his strategy is focused more on steady growth as opposed to big incremental gains. One example he gave was having three models in the top 50 for a cumulative period of nine months. As to how he’ll achieve that, Arbitrage said he “can’t think of any way other than to kill the worst performing one in some kind of death match among my own ten models.”
Themicon: Yeah, I think I’m going to do what you’ve been discussing. It’s such a long game. Keep the things that are working, and kill off the things that aren’t working after four months. That’s why I haven’t filled up my accounts yet. I have three that are working, four more with ideas, and I’ll see how those go before I start adding more.
SSH (in chat): Keeping 90% in one major stake and around 10% in the other two.
Richard asks in chat: How many of you plan to stake on MMC?
Arbitrage: I’m in “wait and see” mode, not going to say yes or no to that.
Michael Oliver: They’re going to change to MMC2 first, which we haven’t seen yet, so I have to see that first.
Richard: I was looking at MMC2 and it does look a little bit more stable, from what I was seeing. I only looked at a few users, but it does seem to me that whereas you’re at the mercy of the market with the normal tournament, like you’re going to burn if there’s a burn period, that doesn’t seem to be the case with MMC. Part of me has concerns that we might get to a place, maybe a year from now, where 80% of the stakes are on MMC.
Arbitrage: Why is that a concern, though?
Richard: Well, it’s not a concern, it would just be strange. The tournament changes its whole character: it’s not about modeling the data, it’s also about kind of knowing what others are trying to do.
Arbitrage: Oh yeah, that would be a concern. Like what I was saying about how I chase MMC along with others and we stumble onto the same solution so our share of MMC goes down because we’re correlated together.
Richard: You guys said earlier that you think it’s quite volatile; it doesn’t seem as volatile as the normal returns. If you look at the black line and the blue line on the Submissions page, usually the blue line is a little bit more compressed than the black line. So it seems to me to be a little less volatile. Often, if someone has 80% of weeks up on the normal tournament, and their MMC is up 90% of weeks, so it seems like it might be quite compelling for a lot of people.
Arbitrage: Yeah, but I just don’t know that I can stake on both because my MMC is correlated so strongly with my tournament performance. When my model does good I get high MMC and when I burn I get negative MMC. For me, it doesn’t offer a diversification, but maybe MMC2 does. I don’t know, it’ll be interesting to see. Any way I can reduce my risk, and if that’s betting on a side pot, that’s beneficial to me. That’s what I’m waiting to see.
Slightly off topic but: what do you (or others) think will kill the project? And why do you think there’s no real competition out there?
Off the bat, Arbitrage noted that a significant change to the global equities markets which invalidated all of the Numerai data would kill the project. A scenario where capital controls prevented investing in foreign markets, for example, would kill the model as it’s based on foreign equity trading. Arbitrage also pointed out the legal risks involved working within such heavily regulated industries, such as if cryptocurrency can no longer be used as a compensation mechanism. “That kind of screws things up pretty bad.”
After Keno asked about competition as a threat to the tournament, Arbitrage added one more potential killer: what if one day Richard gets a call from a massive financial services company and they buy Numerai for $10 billion, then shut it down.
Arbitrage: Richard’s laughing, what do you have to say Richard?
Richard: Well, that’s why I have more than half the shares and control the board of the company.
Richard explained that he doesn’t mind investing in his own company and his own token because he specifically doesn’t want some kind of hostile takeover to happen.
Arbitrage: If somebody called you and said, “hey, we’re going to give you $10 billion to buy your project,” that’s going to be a tough call to turn down.
Richard: Nope 🙅♂️
Slyfox: It’s not about the money, it’s about the vision!
Arbitrage: Everyone has a number, I refuse to believe there isn’t a number that you would take to shut this thing down. Or rather, you would take not knowing they were going to shut it down.
Richard: Well that’s why everything is open source, so even if someone did buy it and shut it down (which is impossible because we wouldn’t sell it) but even if they did, someone would just rebuild it with the code we left behind.
Arbitrage: That’s true, with Erasure being open source the way it is, I can see that.
Is live sharpe ratio versus validation sharpe ratio a good way to measure how overfit my model is?
Arbitrage said that in general, yes, data scientists can use sharpe ratio to determine how overfit a model is but noted that the direct measure suggested in the question doesn’t work. A live sharpe ratio of 1 to a validation sharpe of 2 does not equal a 50% overfit, for example, because that could be the result of spurious correlation. “In general, comparing your in sample to out of sample will always give you an indication of whether you’re overfit but it’s not a direct measure.”
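For reference, the "sharpe" being compared here is typically computed over per-era scores (mean divided by standard deviation, with no annualization); a minimal sketch under that assumption, with made-up numbers:

```python
import numpy as np

def era_sharpe(era_scores):
    scores = np.asarray(era_scores, dtype=float)
    return scores.mean() / scores.std(ddof=1)

validation_scores = [0.031, 0.024, -0.008, 0.041, 0.019, 0.027]  # in-sample-ish
live_scores = [0.012, -0.015, 0.022, 0.004]                      # out-of-sample
print(era_sharpe(validation_scores), era_sharpe(live_scores))
# A live sharpe well below validation hints at overfitting, but as noted above
# the ratio of the two is not a direct "percent overfit" measure.
```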
If my model performs better or worse live compared to validation, how can I determine if it’s due to over/underfitting, market regimes, liking/disliking my model, or feature exposure?
“You can’t.”
Arbitrage explained that because it’s live stock data, he doesn’t believe tournament participants can infer much about why models behave the way they do. The validation data is such a small subset of the larger data set: equities change by the minute and the tournament prediction time frame is a month long. This is why Arbitrage encourages his students to take a long view and to aim for something stable.
When is SAMM (single account multiple models) coming out? Can we consolidate to a single email yet?
Slyfox: Yeah, it’s coming soon! We’re working on it right now. We’re slowly making those changes to our API and putting on the final touches so sign on and account creation make sense on the front end. It’s taking time to make it look good and useable, but it’s coming and it’s definitely a priority, so any feature requests?
Arbitrage took the opportunity to bring up a hot topic in RocketChat: the ability to withdraw from stakes on Wednesday nights into Thursday morning (to send a reputation bonus directly to a user's wallet or to pare down their stake). Arbitrage is an advocate and stamped it as his #1, highest-priority feature request. Essentially, Arbitrage is asking for a window after receiving a payout during which his stake is not active and he can choose to roll it forward or take his profit off the top.
Slyfox agreed that the idea makes sense and noted that it’s been discussed internally. He said he would look into it, noting that in terms of timeline if they move forward with this, it would likely be grouped with the introduction to MMC2.
Another “Feature” request: it’s been about three months since the last Fireside Chat.
Arbitrage said that Office Hours with Arbitrage is not a substitution for a Fireside Chat and wanted to know when the next one would be.
“I feel like we’re scheduled to have one next week,” said NJ who was fortunately on the call.
Author’s note: Richard and Anson host quarterly Numerai Fireside Chats where they answer questions from the Numerai tournament community covering topics like recent changes, feature requests, modeling tips, and what to look out for in the coming months. They did, in fact, have a Fireside Chat the following week. Stay tuned for a recap from that call, and check RocketChat for the next time and date.
Thank you to Keno, Michael Oliver, Slyfox, and NJ for fielding questions during this Office Hours, to Arbitrage for hosting, and to Richard Craib for being utterly unwilling to sell Numerai. | https://docs.numer.ai/office-hours-with-arbitrage/office-hours-recaps/ohwa-6 | 2020-11-24T00:10:22 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.numer.ai |
Customize the URL Filtering Response Pages
The firewall provides predefined URL Filtering Response Pages that display by default when:
- A user attempts to browse to a site in a category with restricted access.
- A user submits valid corporate credentials to a site for which credential detection is enabled (Prevent Credential Phishing based on URL category).
However, you can create your own custom response pages with your corporate branding, acceptable use policies, and links to your internal resources.
- Export the default response page(s).
- Select Device > Response Pages.
- Select the link for the URL filtering response page you want to modify.
- Click the response page (predefined or shared), click the Export link, and save the file to your desktop.
- Edit the exported page.
- Using the HTML text editor of your choice, edit the page.
- Save the edited page with a new filename. Make sure that the page retains its UTF-8 encoding. For example, in Notepad you would select UTF-8 from the Encoding drop-down in the Save As dialog.
- Import the customized response page.
- Select Device > Response Pages.
- Select the link that corresponds to the URL Filtering response page you edited.
- Click Import, select the customized file you saved, and click OK.
- Save the new response page(s). Commit the changes.
- Verify that the new response page displays. From a browser, go to the URL that will trigger the response page. For example, to see a modified URL Filtering and Category Match response page, browse to a URL that your URL filtering policy is set to block. The firewall uses the following ports to display the URL filtering response pages:
- HTTP—6080
- Default TLS with firewall certificate—6081
- Custom SSL/TLS profile—6082
Create a Custom Amazon Machine Image (AMI)
Learn how creating a custom Amazon Machine Image (AMI) can speed your deployment process.
A custom VM-Series AMI gives you the consistency and flexibility to deploy a VM-Series firewall with the PAN-OS version you want to use on your network, instead of being restricted to an AMI published to the AWS public Marketplace or to the AWS GovCloud Marketplace. Using a custom AMI speeds up deployment because you no longer have to provision the firewall with an AMI from the AWS public or AWS GovCloud marketplace and then perform software upgrades to reach the PAN-OS version you have qualified or want to use on your network. Additionally, you can then use the custom AMI in the Auto Scaling VM-Series Firewalls CloudFormation Templates or any other templates that you have created.
You can create a custom AMI with the BYOL, Bundle 1, or Bundle 2 licenses. The process of creating a custom AMI requires you to remove all configuration from the firewall and reset it to factory defaults, so in this workflow you’ll launch a new instance of the firewall from the AWS Marketplace instead of using an existing firewall that you have fully configured.
When creating a custom AMI with a BYOL version of the firewall, you must first activate the license on the firewall so that you can access and download PAN-OS software updates to upgrade your firewall, and then deactivate the license on the firewall before you reset the firewall to factory defaults and create the custom AMI. If you do not deactivate the license, you lose the license that you applied on this firewall instance.
- (Only for BYOL) Activate the license.
- Install software updates and upgrade the firewall to the PAN-OS version you plan to use.
- (Only for BYOL) Deactivate the license.
- Perform a private data reset. The system disks are not erased, so the content updates from Step 4 are intact. A private data reset removes all logs and restores the default configuration.
- Access the firewall CLI.
- Remove all logs and restore the default configuration:
request system private-data-reset
Enter y to confirm. The firewall reboots to initialize the default configuration.
- Create the custom AMI.
- Log in to the AWS Console and select the EC2 Dashboard.
- Stop the VM-Series firewall.
- Select the VM-Series firewall instance, and click Image > Create Image.
- Enter a custom image name, and click Create Image. The disk space of 60 GB is the minimum requirement.
- Verify that the custom AMI is created and has the correct product code.
- On the EC2 Dashboard, select AMI.
- Select the AMI that you just created. Depending on whether you selected an AMI with the BYOL, Bundle 1, or Bundle 2 licensing options, you should see one of the following Product Codes in the details:
- BYOL—6njl1pau431dv1qxipg63mvah
- Bundle 1—6kxdw3bbmdeda3o6i1ggqt4km
- Bundle 2—806j2of0qy5osgjjixq9gqc6g
- If you plan to use the custom AMI with EBS encryption for an Auto Scaling VM-Series Firewalls with the Amazon ELB Service deployment, you must use the default master key for your AWS account.
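If you prefer to script the stop-and-image steps above instead of using the EC2 console, the same operations are available through the AWS SDKs. The following sketch uses the AWS SDK for .NET (AWSSDK.EC2 NuGet package); the instance ID and image name are placeholders, and in practice you should wait until the instance reaches the stopped state before creating the image.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using Amazon.EC2;
using Amazon.EC2.Model;

class CreateCustomAmi
{
    static async Task Main()
    {
        // Credentials and region are taken from the environment/profile.
        var ec2 = new AmazonEC2Client();

        // Placeholder ID of the factory-reset VM-Series firewall instance.
        const string instanceId = "i-0123456789abcdef0";

        // Stop the firewall before imaging it (wait for the stopped state in real code).
        await ec2.StopInstancesAsync(new StopInstancesRequest
        {
            InstanceIds = new List<string> { instanceId }
        });

        // Create the custom AMI from the stopped instance.
        var response = await ec2.CreateImageAsync(new CreateImageRequest
        {
            InstanceId = instanceId,
            Name = "custom-vmseries-ami",        // placeholder image name
            Description = "Custom VM-Series AMI"
        });

        Console.WriteLine($"Created AMI: {response.ImageId}");
    }
}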
.NET API analyzer
The .NET API Analyzer is a Roslyn analyzer that discovers potential compatibility risks for C# APIs on different platforms and detects calls to deprecated APIs. It can be useful for all C# developers at any stage of development.
API Analyzer comes as a NuGet package Microsoft.DotNet.Analyzers.Compatibility. After you reference it in a project, it automatically monitors the code and indicates problematic API usage. You can also get suggestions on possible fixes by clicking on the light bulb. The drop-down menu includes an option to suppress the warnings.
Note
The .NET API analyzer is still a pre-release version.
Prerequisites
- Visual Studio 2017 and later versions, or Visual Studio for Mac (all versions).
Discover deprecated APIs
What are deprecated APIs?
The .NET family is a set of large products that are constantly upgraded to better serve customer needs. It's natural to deprecate some APIs and replace them with new ones. An API is considered deprecated when a better alternative exists. One way to inform that an API is deprecated and shouldn't be used is to mark it with the ObsoleteAttribute attribute. The disadvantage of this approach is that there is only one diagnostic ID for all obsolete APIs (for C#, CS0612). This means that:
- It's impossible to have dedicated documents for each case.
- It's impossible to suppress certain category of warnings. You can suppress either all or none of them.
- To inform users of a new deprecation, a referenced assembly or targeting package has to be updated.
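For reference, this is roughly what marking an API with ObsoleteAttribute looks like; every caller of the old member gets the same generic CS0612 diagnostic (the method names here are made up for illustration):

using System;

public static class Parsers
{
    // Any caller of OldParse gets the generic CS0612 "member is obsolete" warning.
    [Obsolete]
    public static int OldParse(string value) => int.Parse(value);

    // The replacement API that callers should migrate to.
    public static bool TryParseSafely(string value, out int result) =>
        int.TryParse(value, out result);
}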
The API Analyzer uses API-specific error codes that begin with DE (which stands for Deprecation Error), which allows control over the display of individual warnings. The deprecated APIs identified by the analyzer are defined in the dotnet/platform-compat repo.
Add the API Analyzer to your project
- Open Visual Studio.
- Open the project you want to run the analyzer on.
- In Solution Explorer, right-click on your project and choose Manage NuGet Packages. (This option is also available from the Project menu.)
- On the NuGet Package Manager tab:
- Select "nuget.org" as the Package source.
- Go to the Browse tab.
- Select Include prerelease.
- Search for Microsoft.DotNet.Analyzers.Compatibility and select that package in the list.
- Select the Install button.
- Select the OK button on the Preview Changes dialog and then select the I Accept button on the License Acceptance dialog if you agree with the license terms for the packages listed.
Use the API Analyzer
When a deprecated API, such as WebClient, is used in code, the API Analyzer highlights it with a green squiggly line. When you hover over the API call, a light bulb is displayed with information about the API deprecation.
The Error List window contains warnings with a unique ID per deprecated API, for example DE004.
By clicking on the ID, you go to a webpage with detailed information about why the API was deprecated and suggestions regarding alternative APIs that can be used.
Any warnings can be suppressed by right-clicking on the highlighted member and selecting Suppress <diagnostic ID>. There are two ways to suppress warnings:
- locally (in source)
- globally (in a suppression file) - recommended
Suppress warnings locally
To suppress warnings locally, right-click on the member you want to suppress warnings for and then select Quick Actions and Refactorings > Suppress <diagnostic ID> > in Source. The #pragma warning preprocessor directive is added to your source code in the scope defined.
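As an illustration, a local suppression of the DE004 warning from the WebClient example above looks roughly like this (the class and method are made up; the IDE inserts the disable/restore pair for you):

using System.Net;

class LegacyDownloader
{
    public string Download(string url)
    {
#pragma warning disable DE004 // WebClient is deprecated; suppressed intentionally here
        using (var client = new WebClient())
        {
            return client.DownloadString(url);
        }
#pragma warning restore DE004
    }
}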
Suppress warnings globally
To suppress warnings globally, right-click on the member you want to suppress warnings for and then select Quick Actions and Refactorings > Suppress <diagnostic ID> > in Suppression File.
A GlobalSuppressions.cs file is added to your project after the first suppression. New global suppressions are appended to this file.
Global suppression is the recommended way to ensure consistency of API usage across projects.
Discover cross-platform issues
Note
.NET 5.0 introduces the Platform compatibility analyzer as a replacement of this feature. The platform compatibility analyzer is included in the .NET SDK (no need to install it separately) and is on by default.
Similar to deprecated APIs, the analyzer identifies all APIs that are not cross-platform. For example, Console.WindowWidth works on Windows but not on Linux and macOS. The diagnostic ID is shown in the Error List window. You can suppress that warning by right-clicking and selecting Quick Actions and Refactorings. Unlike deprecation cases where you have two options (either keep using the deprecated member and suppress warnings or not use it at all), here if you're developing your code only for certain platforms, you can suppress all warnings for all other platforms you don't plan to run your code on. To do so, you just need to edit your project file and add the PlatformCompatIgnore property that lists all platforms to be ignored. The accepted values are: Linux, macOS, and Windows.
<PropertyGroup>
  <PlatformCompatIgnore>Linux;macOS</PlatformCompatIgnore>
</PropertyGroup>
If your code targets multiple platforms and you want to take advantage of an API not supported on some of them, you can guard that part of the code with an if statement:
if (RuntimeInformation.IsOSPlatform(OSPlatform.Windows))
{
    var w = Console.WindowWidth;
    // More code
}
You can also conditionally compile per target framework/operating system, but you currently need to do that manually.
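A minimal sketch of that manual approach is shown below; WINDOWS_BUILD is a custom compilation constant you would define yourself (for example in a Windows-specific build configuration), not a predefined symbol:

using System;

static class ConsoleInfo
{
    public static int GetWidth()
    {
#if WINDOWS_BUILD
        // Compiled only when the project defines the WINDOWS_BUILD constant.
        return Console.WindowWidth;
#else
        // Fallback for builds targeting platforms without window-size support.
        return 80;
#endif
    }
}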
Supported diagnostics
Currently, the analyzer handles the following cases:
- Usage of a .NET Standard API that throws PlatformNotSupportedException (PC001).
- Usage of a .NET Standard API that isn't available on the .NET Framework 4.6.1 (PC002).
- Usage of a native API that doesn't exist in UWP (PC003).
- Usage of Delegate.BeginInvoke and EndInvoke APIs (PC004).
- Usage of an API that is marked as deprecated (DEXXXX).
CI machine
All these diagnostics are available not only in the IDE, but also on the command line as part of building your project, which includes the CI server.
Configuration
The user decides how the diagnostics should be treated: as warnings, errors, suggestions, or to be turned off. For example, as an architect, you can decide that compatibility issues should be treated as errors, calls to some deprecated APIs generate warnings, while others only generate suggestions. You can configure this separately by diagnostic ID and by project. To do so in Solution Explorer, navigate to the Dependencies node under your project. Expand the nodes Dependencies > Analyzers > Microsoft.DotNet.Analyzers.Compatibility. Right-click on the diagnostic ID, select Set Rule Set Severity, and then pick the desired option.
See also
- Introducing API Analyzer blog post.
- API Analyzer demo video on YouTube.
- Platform Compatibility Analyzer | https://docs.microsoft.com/en-us/dotnet/standard/analyzers/api-analyzer?WT.mc_id=DOP-MVP-37580 | 2020-11-24T01:43:50 | CC-MAIN-2020-50 | 1606141169606.2 | [array(['media/api-analyzer/green-squiggle.jpg',
'Screenshot of WebClient API with green squiggly line and light bulb on the left.'],
dtype=object)
array(['media/api-analyzer/warnings-id-and-descriptions.jpg',
"Error List window that includes warnings. Screenshot of the Error List window showing warning's ID and description."],
dtype=object)
array(['media/api-analyzer/suppress-in-source.jpg',
'Screenshot of code framed with #pragma warning disable.'],
dtype=object)
array(['media/api-analyzer/suppress-in-sup-file.jpg',
'Screenshot of right-click menu showing options to suppress a warning in Visual Studio.'],
dtype=object)
array(['media/api-analyzer/suppression-file.jpg',
'Screenshot of the GlobalSuppressions.cs file in Solution Explorer.'],
dtype=object)
array(['media/api-analyzer/disable-notifications.jpg',
'Screenshot of Solution Explorer showing diagnostics and pop-up dialog with rule set severity.'],
dtype=object) ] | docs.microsoft.com |
CancelSafe File System Minifilter Driver
The CancelSafe filter is a sample minifilter that you use if you want to use cancel-safe queues.
Universal Windows Driver Compliant
This sample builds a Universal Windows Driver. It uses only APIs and DDIs that are included in OneCoreUAP.
Design and Operation

This sample demonstrates how to use cancel-safe queues in a file system minifilter context.
For more information on file system minifilter design, start with the File System Minifilter Drivers section in the Installable File Systems Design Guide. | https://docs.microsoft.com/en-us/samples/microsoft/windows-driver-samples/cancelsafe-file-system-minifilter-driver/ | 2020-11-24T02:13:44 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
Kicking off number seven, Arbitrage welcomed data scientist Zen to his first ever Office Hours.
Arbitrage: So I’m going to just open right with you because I imagine we’re going to have so much to talk about afterwards, I’d hate to run out of time.
Zen: Okay. How long is this?
Arbitrage: I run an hour, I stop right on time.
Arbitrage: Zen is one of our older users. Not in age, but in account age.
Zen: Both!
Arbitrage: You say both, but I can’t tell. You could have an AI running your Zoom right now — we don’t know. So you have three accounts, which one do you consider to be your number one account?
Zen: Oh well obviously Nasdaq Jockey.
Arbitrage: How did that name come to be?
Zen: I’ve had that handle for a long time on Yahoo (I trade stocks). I just made it up. The second model is Evolvz. That one started out with genetic algorithms, so that’s why I named it [that]. And actually, the first model I ever put up was ZBrain.
Arbitrage: Ah well then technically ZBrain would be your OG handle for this tournament.
Zen: Oh yeah, that’s right, that was back in 2016.
Arbitrage: How did you find out about Numerai?
Zen: A friend of mine read a Medium article and said, ‘hey maybe you should go look at this.’ So I did and then hopped on.
Arbitrage: You just said you joined in 2016, do you know the start date of your first account?
Zen: Yeah it was December 12.
Arbitrage: In 2016?
Zen: 2016.
Arbitrage: Okay so a little after the first wave, but still early on. And we’ve established now that you live in New York, or at least the New York area.
Zen: New Jersey.
Arbitrage: New Jersey, yeah like I said New York area basically. What do you do for a living?
Zen: I’m a software engineer by trade, but I’ve had a pretty long career and ended up working mostly in defense. Eventually I became a manager, then a senior manager, took a few buyouts here and there. I’ve kind of come full circle: now I work for a company and I lead the AI department. I do a lot of hands on work too.
Arbitrage: What programming language do you use and why?
Zen: I use Python. I’m self taught, started a few years ago. I actually used Python maybe ten years ago for various little things when it was easier to use something that already existed. But I’ve used just about every language on the planet. Right now everything I do for Numerai is in Python.
Arbitrage: I’ve generally found that to be true. Except Bor who likes to cut his wild streak and run his own way. But I imagine he’s going to switch to Python, he talked a lot about the simplicity.
Zen: Bor is [running] R?
Zen: Very cool. I’m a Python lover, actually. I’ve used just about every language, but Python is great for just getting things done quickly. Maybe not speed, but some things are still good.
Arbitrage: Python wasn’t really fast until, what, 2015?
Zen: Yeah, absolutely. In the beginning it was very slow.
Arbitrage: In your opinion, do you think that was a Moore’s law contribution? Or do you think we just got better at compiling this stuff?
Zen: I think they got a lot better, and with that they’ve done on the backend … I read a little bit about it, but I think they’ve done quite a lot of work to make the core libraries run really fast. It depends what you do and how you do things now.
Arbitrage: Oh for sure. I saw a tweet by Guido [van Rossum] and he was saying that people who are used to old-style Python should just ignore everything data science is doing. It seems like the data science community has almost “forked” Python for our own use. One of the questions that I have to ask, because you’re the legendary Nasdaq Jockey: can you tell us your top three tips for the tournament?
Zen: Ha, well, let me think about that. I think the biggest problem most people have is they over-train still, even though they think they’re not. They’re training too much on the initial [data set], and if they’re using the Validation data they’re screwing themselves.
Zen: I don’t use the Validation data, and I try very hard not to over-train. I do a lot of things to make sure I don’t.
Arbitrage: Alright, so that’s one tip.
Zen: Consistency across the validation eras is important. There’s a couple of them that are really tough to get on, and that’s what Nasdaq Jockey does. It might not be so great at some of the eras in the validation data, but it’s really good on a couple of the tough ones. I’m looking forward to that new [Validation 2] data because now I’m interested in seeing how I’m going to have to change what I do to tune to the new Validation data set.
Arbitrage: I’m going to ask you more about that in a second, but I’m still waiting on tip #3.
Zen: One of the things that screwed me up in the beginning was that I didn’t keep good records of when I made changes of things. It takes so long to know how your model is doing. Just keep good records and go back and make small tweaks, not trying to make gigantic changes all the time (like changing states or models). I haven’t changed Nasdaq Jockey in a long time. With ZBrain I’ve been fooling around, but [Nasdaq Jockey] I haven’t changed in a long time.
Arbitrage: Yeah, I haven’t changed anything with my Arbitrage account in maybe 18 months, beyond getting it adjusted for the different features. It’s done pretty well. Going back, you said of the new Validation data that you’re going to change a lot of stuff. But if your model’s doing well now, why would you change anything?
Zen: Well I’m probably not going to change anything with Nasdaq Jockey, but I have seven other Nasdaq Jockeys that I started three or four weeks ago.
Arbitrage: That was another question, if you’re up to ten accounts now.
Zen: Yeah, I have the three initial ones, and I just made about a month ago seven more. And they’re totally different. A whole different idea. I think they’re looking pretty good, actually.
Arbitrage: Yeah, these have been some pretty easy eras lately, so I’m waiting with bated breath to see how this all turns out.
Zen: Yeah, exactly.
Arbitrage: One of the questions I like to ask people: who is your favorite team member?
Zen: It’s gotta be Anson.
Zen: *Laughing* I don’t really have a favorite.
Arbitrage: But you finally picked one!
Zen: He’s the only one I talk to.
Slyfox: Yesss.
Arbitrage: There ya go, Slyfox.
Zen: I was in Pittsburgh and saw a bar called ‘Sly Fox.’
Slyfox: You should share the picture if you still have it.
Arbitrage: We should have the East Coast meetup at the bar.
Zen: I don’t go to Pittsburgh very often.
Arbitrage: Let’s hope we’ll be able to go to Pittsburgh, let alone worry about going very often… What is your number one feature request or improvement you’d like to see for the tournament?
Zen: I don’t have a big rig or anything, I have an Alienware that I bought five years ago and I do everything on that. I wish the files were smaller. Not the number of records, I think that’s fine, it’s just that there’s so much waste. You can reduce that file size and make it 25% of what it is and still have all of the same features and data. I don’t know if [Numerai’s] looked at that, it just seems pretty wasteful. It’s time consuming and a pain in the ass.
Arbitrage: That’s good, and I think that’s something Slyfox has talked about in the past as something they’d like to iterate on. It comes out of the box ‘float64’ and it could easily be reduced from there.
Zen: I mean really, there’s five targets, you can use zero through four if you want. Then right there off the bat you’ll get a tremendous [improvement]. You can even make it a binary file if you want — I’m old school.
Slyfox: Yeah for sure. It’s something we’re looking into. File size is also something that makes everything we do slower, internally. So yeah, we’re definitely looking into it. Good recommendation.
Zen: Otherwise, I think the whole layout of the tournament with the leaderboard and MMC; it’s all good, it’s just very convoluted right now. It’s hard to tell what we’re going to end up with. You’re setting an objective function for the company — that’s the way I look at it.
It’s like, their objective is to get the best models so that they can create a good metamodel. So they’re tweaking all of our rewards so we give them what they want. I think it’s working, at least it seems to be working. It’s hard to tell. I didn’t like the answer the other day when [Richard] said that he’s okay when people want to stake on the example model. I don’t know, that kind of seemed odd to me.
Arbitrage: I was kind of irked by that too, but if you take a huge step back and think about it, the way it was answered made sense.
Zen: I know it makes sense.
Arbitrage: It is in the sense that it’s a zero-effort way to climb the leaderboard. I don’t like that because I want people to struggle as much as I did and so I want the path to be as difficult and onerous as possible so they don’t inadvertently surpass me, but I digress.
Zen: I understand. I mean, it’s a competition, so you’ve got to have your own secret sauce so you can beat the other guys, but there’s a certain amount of collaboration we’re all doing (to a certain level).
Arbitrage: Agreed. I know this is your first Office Hours, but there’s this section of the process in this Zoom series where we talk about some stuff, but don’t really say anything at all. And I think that’s the collaboration you might be referring to. So you said you’re up to ten models, eight total variations of Nasdaq Jockey — why didn’t you go for ZBrain or Evolvz and try something with those?
Zen: Actually, at this point, they’re all similar. Well, the first three [Nasdaq Jockey, Evolvz, ZBrain] are similar, but the new seven are very different. Just because they’re the same name doesn’t mean they’re the same model. I keep track of everything that’s going on, but Nasdaq Jockey 1 has nothing to do with Nasdaq Jockey. Totally different. One through seven are all different.
Arbitrage: Interesting. For me, I actually do use the numbers, they mean something.
Zen: I wish I had started out like that, and just used Google accounts like that. I can’t wait for single sign-in.
Arbitrage: Yeah, SAMM [Single Account Multiple Models] — we’re all anxiously awaiting that. That’ll definitely be good. So, you have pretty good confidence in your models: are you staking evenly across them, or do you still favor Nasdaq Jockey?
Zen: About every three months I look at the performance and I weight the staking to the best model. I have more on Nasdaq Jockey, less on Evolvz, and even less on ZBrain.
Arbitrage: Yeah personally I look at the approach I took to arriving at that model. If I think it has the best justification from a design standpoint — I came at it with a scientific approach and came to a conclusion that makes sense — I can believe in that a little more than something I cobbled together by chance.
Zen: I just look at the stripped-down performance, not the bonuses, just how well did it really perform on the live data. That’s number 1 for me on staking. I don’t have the other seven staked yet, I have to transfer some NMR there.
Arbitrage: Yeah, I’m waiting to see a little bit before I stake on some of the new ones. In the end, I probably will, but I doubt I’ll stake very large.
From chat: Do we get to rename accounts with the new merger?
Slyfox (in chat): Eventually yes. The username is pretty embedded in a few places (leaderboard, profile page, internal code) etc so it will take a bit of time, but eventually yes.
Slyfox (in meatspace): Another question I’m thinking about is, “what can we build to help you guys track your changes better?” Keno had a lot of good suggestions here, and ideas for somehow letting you label your models in time. If you guys have any ideas how we can make that easier, that’s something we can also build. At the simplest level, letting you change your name might help.
Arbitrage: Yeah, I don’t know. I’m kind of a fan of stickiness. My account is Arbitrage and has been since June of 2016. I don’t want to change that, I want it to stay nice and stable. I guess I’m old school in that sense. You change your profile picture, but your username to me is a fixed thing. It’s tied to the blockchain too, in a way.
Slyfox: It used to be tied to the blockchain. Right now, it is not. The new set of staking contracts are only tied to your Ethereum address.
Arbitrage: Well Zen, or Nasdaq Jockey… I’m going to call you Nasdaq Jockey because that’s who I want to beat. Thank you for coming in today and answering some of my questions.
Zen: Hey, no problem.
Arbitrage: It was really helpful. There is a theme, I’ve noticed, with a lot of the people talking about avoiding overfitting, make sure you average across the eras, and also take good notes. That was Bor’s number one suggestion: good note taking. You can see that that’s consistent across users at the top of the leaderboard. I’m really excited about the questions today, because this first one, I’ve thought about for a while.
Pretend I’m a five year old: explain exactly how MMC2 works (asking for a friend).
“I’m not sure I’m going to do a good job, but I’m gonna give it hell.” — Arbitrage
Banking on the fact that most people have played some team sport by age five, Arbitrage set up the following analogy: If you play a team sport, not everybody can be the pitcher (in baseball). Sometimes the team needs an outfielder, an infielder, pitcher, catcher, people who are really good at handling left-handed pitchers, etc. In the end, it takes all of the varied skill sets coming together to achieve victory for the team.
Extrapolating that example to the Numerai tournament: if all of the data scientists competing were pitchers, then the meta model would be terrible. But if we had a bunch of unique skillsets and played as a team, then we can win.
NJ shared that Michael P used a similar explanation at Numerai HQ in the past (although Numerai engineer Jason didn’t quite agree).
Michael P’s controversial example opted for a basketball team with four Shaquille O’Neals (one of the most dominant players ever but with a specific skill set) and posed the question: would that team be better off with a fifth Shaq or literally any other player with a different skill set (even if that player isn’t as talented). Slyfox and Arbitrage were quick to side with Jason and draft Shaq #5.
Author’s note: Michael P’s basketball reputation dropped to -0.0547
Slyfox tried his hand at an explanation, also choosing a basketball analogy in the form of the plus-minus score. When someone evaluates an athlete’s performance, they can look at their individual stats (like points scored, plays made, etc). But, you can also statistically measure how well the team does when that player is on the court compared to when they’re on the bench. If you play fantasy sports, this kind of scoring is already popular. “To me, MMC is just plus-minus,” Slyfox said. “Does the team perform better with you in it or not?”
“But what if you are the team?” Arbitrage asked.
“In the case of my model,” Arbitrage explained, “I submit predictions on Saturday afternoon. And then the meta model is built after that. So if the meta model converges on the solution that I’ve already uploaded, I don’t get an MMC bonus.”
Michael P: Yeah.
Arbitrage: Yeah.
Slyfox: Well, you’re not helping.
Arbitrage: But I came first — you guys took my solution and now you’re not paying me for it.
Slyfox: We don’t want to give people too much advantage for just being first. I think that’s one of the problems we had with originality (if you’ve been here for long enough).
Arbitrage: Well wait a sec — it’s unlikely that I could predict with very high certainty the exact solution of the meta model. It’s the sum of hundreds and hundreds of other models. But the fact that I did, and my model existed in the top 20 for two and a half months suggests that it’s good and it validates the meta model itself. Yet I’m not getting any MMC for it because of the way that it’s designed.
Slyfox: When we’re designing this payout, we still want to reward you for being good, but we’re not going to reward you because you didn’t add anything to the team.
Arbitrage: I am the team. That’s what I’m saying: I’m the team, I came first, and you just stumbled into my solution.
Slyfox: The timing of it doesn’t really matter, but yeah.
Arbitrage: I’m just playing a little semantic game, but I’m sure I’m not the only one who encounters this problem. Just something to think about. I won’t be playing MMC because I have no incentive to, but I feel like if I am providing you the signal first, and then you stumble into my solution as the optimal one, well I think I should get something for that. Especially if other people are getting a larger piece of the proverbial pie just because they’re different. If I’m the only one complaining about it, well clearly you’re going to ignore me, but I bring it up because it’s an interesting problem that I’m thinking about.
Michael P: Say that you’re the meta model. Now, when other people are playing MMC, in order to have positive rewards they have to be pulling the meta model in a better direction: they have to be better than the meta model. You can’t get good MMC just for being unique if you’re doing worse than the meta model. To get long term expected benefits from it, your model has to be better than the meta model.
If you do have the meta model, if you have the best possible model, and the meta model is better than what anyone else could come up with, then no one will be making money on MMC anyway and everyone would just play the main tournament. MMC was designed to remove those inefficiencies and accelerate the progress towards the best meta model. So if you truly have the best model and the MMC was the best, no one would be playing MMC.
Arbitrage: Just a note since this gets summarized and put on the web: I’m not claiming that I have the best model, it’s apparent that I don’t because I’m not number one now and I’ve only been number one for a couple of days. Just wanted to make sure I clarified that a bit.
You said that you use Validation for training after having applied cross validation properly. Are you planning to use the Validation 2 data for training also?
Arbitrage felt that his model is performing well at the moment, expressing that he mostly hopes Validation 2 doesn’t change his data pipeline, forcing him to go through his code and remove the new data.
Arbitrage doesn’t plan to change anything with his current models — at least at first. Using his remaining account slots, he’s going to train new models on the Validation 2 data and track their performance long-term. “I’m not changing my main models at all,” he said, “they’re really good and they’ve been good for a long time. And I am the meta model.”
Regarding payouts: when do you (or anyone) think they will stabilize? How far are we from a fair payout system?
“When do I think it will stabilize? Never.” — Arbitrage
Because the tournament deals with stock market data, Arbitrage doesn’t believe that it will ever truly “stabilize,” adding that “the second we think we arrive at a fair solution, everybody’s all in, some kind of regime change will occur and blow up our models and we’re going to have some kind of risk we didn’t account for and it will have to change.”
The more relevant question, in Arbitrage’s opinion, is around reaching a fair payout system. “I think we’re still a ways off.” He explained that even though the lift in the NMR market was awesome, if you started staking right before the increase you also saw a 1:1 increase in risk. Because “fair” is relative to the person observing the system, Arbitrage said it’s possible to design a payout system that’s fair to a subset of users, but not for everybody. “I don’t know any possible way to satisfy everybody.”
Keno explained that his question is mostly focused on situations where models are seeing negative reputation and negative MMC but still generating high payouts. To Keno, this suggests the incentives may not be optimally aligned to help Numerai because it looks like models are getting paid despite poor performance.
Arbitrage pointed out that, during his tenure with the Numerai contest, the current payout system is the best that he’s seen so far. He noted that occasionally, there are models that have negative performance but still seem to be paid, speculating that it’s a quirky function of a model being highly performant the majority of the time with short periods of negative performance. “Just because it was wrong one time doesn’t mean it’s bad for the fund.”
Keno referenced the leaderboard, explaining how a model in the 79th spot had a payout of over 400 NMR, while his two models in the top ten received around 50 NMR each. He said, “I’m thinking, ‘What am I doing wrong? Are my models that much worse?’ If they are, then the leaderboard is wrong and it doesn’t reflect reality. That’s my main concern.” Without payouts being directly tied to performance, data scientists lose incentive to increase their stakes.
“I think it has a lot to do with scale,” Arbitrage responded, “it’s almost a wealth effect.” He explained how someone willing to put $200,000 at stake in the tournament is willing to take on a level of risk that many of the participants can’t relate to. “I would never risk that much. That’s trading houses … I would just buy a house.” But, because risk is relative, “this is capitalism so it all works out in the end. There’s a lot of compensation for those high stakers, but that’s exactly how it’s supposed to be.”
Arbitrage believes that the “answer” to that question, or at least what he thinks Richard might say, is that if you want to receive bigger payouts, you need to do better and if you think you’re going to be in the top, increase your stake.
Bor asked Keno if he won’t just catch up to the accounts who are receiving larger payouts, noting that Keno’s stake is growing relatively faster. Keno said that unfortunately, he’ll never catch up based solely on payouts because all of the models are growing exponentially, and the others are already higher on the curve.
Keno said, “It’s a systems problem called ‘success to the successful.’” He used the example of governments taxing the wealthy and providing relief to those in need, concluding that by not engaging in any kind of redistribution, Numerai is okay with models receiving high payouts only because of a large stake and not because of how much the model actually contributes to the meta model or how well it performs.
Slyfox thanked Keno for the question, saying that it’s something they at Numerai think about often but don’t really talk about. He shared his philosophy regarding fairness:
“Obviously, we’re not there yet,” Slyfox said, “I don’t think there are autonomous AI agents competing on Numerai yet.” He sees the tournament as a group of humans working together trying to make the system work. Ultimately, he explained, the long term solution needs to be something that’s completely decentralized and has as few rules as possible. “When I think about stability and what we’re asymptotically going to move towards, the simplest and most fair is: you do well, you get paid, you don’t do well, you get burned in a perfectly symmetric way.”
At its core, Numerai’s payout system still functions this way. The bonuses and compounding stake are additional payout avenues that Numerai uses to reward data scientists beyond what Slyfox believes is the absolute, fair, symmetric payout. These extra payments are necessary, at least for now, because the tournament targets are not yet in a place where the data scientists can reasonably expect consistent payouts.
“The experience of having to go through multiple burn weeks, as we saw in the last few years, is really bad,” Slyfox said. He explained that were Numerai to just stick to their guns and only have perfect, symmetric payout, many of the data scientists might not still be participating, adding that a lot of new users would likely quit if the first six weeks are nothing but getting burned.
“They’re not going to think, ‘oh this is an elegant, symmetric system,’ no, they’re going to think, ‘this sucks.’” — Slyfox
Ultimately, what Numerai is trying to accomplish with all of the bonuses is giving the tournament data scientists more money in a way that doesn’t break the symmetry, and that in the extreme long-term they want to end up with just a symmetric payout. Slyfox explained that when the team thinks about MMC or new tournament targets, they’re designed to be more consistent and stationary so that the payouts are more consistent. The result will hopefully be the best users who do really well in the tournament can expect more consistent payouts, making the bonuses unnecessary.
Is there any study available on how many of the Numerai models are overfit based on live performance?
“I’ve read that Quantopian paper, by the way: 99.9% of the backtests are overfit (Wiecki et al., 2016).” — Arbitrage
As a benchmark, Arbitrage suggested that any model that’s been active for over 20 weeks and still has negative reputation is clearly overfit. Taking MMC into consideration: if a model has negative or near-zero reputation and zero or negative MMC, it’s clearly overfit. He added anything slightly above that is probably just luck.
While no formal study exists, Arbitrage has thought about what a proper study would look like, adding that it would be a little too niche for him as he’s not sure where he would publish it.
Slyfox: Publish it in the forum, Arbitrage, for fame and glory.
Arbitrage: I’ll let someone else get that fame and glory, I need publications in finance journals.
For a beginner, how does MMC change what I should be looking to aim for with my model? Am I now looking to be unique?
MMC means that models should be both unique and performant. Having a high correlation model is still good for the fund and data scientists can earn money on it — bonuses aren’t the only way to make money (they just help). Arbitrage said that specifically targeting MMC might not be an optimal strategy, instead suggesting to combine performance with uniqueness as an option. “But right now, I wouldn’t advise anybody new to go down that path, at least not with your main model,” he concluded.
MMC2 neutralizes our forecast against the meta model: in a world where the meta model is perfect, we should expect MMC2 to always be negative. Is that desirable?
Arbitrage explained that if the meta model always offered perfect predictions, the data scientists would be out of business. Numerai would have no need for their submissions because they’re not beating the meta model. “We need to be better than the meta model,” Arbitrage said, “and we need to have performance. In that sense, we need to add something to it and make everything better overall.”
He reiterated that he doesn’t think a world exists where the meta model is perfect since it’s dealing with stocks: there will always be regime changes, currency risks, fraud, and multiple other reasons why the stock market will never be perfectly solved.
“I’m quite happy never having perfect predictions. We’ll always be able to add signal, and no matter how many changes the team makes to the tournament, we’ll always be able to do something.”
What’s the best way to introduce Validation 2 into our validation pipeline?
“I don’t know, I have to see it first. I want to see how it’s structured in the data.” — Arbitrage
Arbitrage hasn’t planned out how he’s going to handle Validation 2 data yet, but did mention that he’ll probably add two iterations of his Arbitrage model with that data. “I don’t really plan on doing anything — I’m not going to change any of my models, and I really hope I don’t have to change any of my code in Compute. That’s my number one feature request: whatever change is done, do not change the number so if it’s column 3 through 313, leave that alone please.”
If models are mostly a random walk, what value do they provide?
Arbitrage’s position is that data scientist performance should approximate a random walk because the models are predicting equities, meaning it’s unlikely to find a strategy that will stay above zero for very long. He mentioned one of Richard’s forum posts about autocorrelation and checking to see if performance is stationary or not.
“Hopefully,” Arbitrage said, “we’re doing a random walk and all of us, individually, are random and none of us are correlated. Because then the signal would be performant if you averaged across all of us.” Basically, each model hopefully has a period of high performance, and by averaging across all of the models and filtering out the noise, the resulting meta model should be performant.
The idea is that during the periods of high performance, a model was right at that time. By building a model on top of all of the performant periods of other models, the meta model carries the edge. Ideally, each individual wouldn’t have an edge, but then Numerai would be able to extract the edge from each model.
What are your ideas around a fair payout system?
“Homo-economicus: we’re all rational agents of the economy.” — Arbitrage
“The only reason we do something is to increase our wealth, or expend wealth to increase utility.” Instead of fairness, Arbitrage instead opted to evaluate the payout system in terms of wealth-maximization. Fair would be compensation based on effort: particularly in the early days, tournament competitors can struggle with the amount of time put into creating a model compared to the rewards. Now, though, Arbitrage expends hardly any effort because he has battle-tested models, and Numerai Compute automates the weekly contribution process, so he continues to earn based on work done in the past.
“As long as my effort is being rewarded,” he said, “and I feel that I’m being compensated for the time that I’m investing, I think it’s worth doing. When that time comes that I think I’m putting in more effort than I’m being rewarded for, then I’ll exit.”
Slyfox: To me there’s two games going on. There’s the tournament, which is just a game of data science, then there’s the hedge fund trying to make money in the markets. The hedge fund’s performance depends on more than just the tournament: it depends on the amount of capital we have and whether or not we can execute on that. It makes sense for those to be somewhat decoupled, and if you want to play the second game (and you’re also an accredited investor) you can talk to us about that. Not advertising, but you could ask us for more information.
With the questions from Slido completed, Arbitrage carried the conversation beyond his usual one-hour limit for the first time, chatting with Slyfox and the audience. Thank you to Slyfox and Michael P for fielding questions during this Office Hours, to Arbitrage for hosting, and to Zen / Nasdaq Jockey for being interviewed.
The OPTIONS request checks the availability of an individual Swift service. The OPTIONS request is processed by the Storage Node or API Gateway Node specified in the URL.
For example, client applications can issue an OPTIONS request to the Swift port on a Storage Node, without providing Swift authentication credentials, to determine whether the Storage Node is available. You can use this request for monitoring or to allow external load balancers to identify when a Storage Node is down.
When used with the info URL or the storage URL, the OPTIONS method returns a list of supported verbs for the given URL (for example, HEAD, GET, OPTIONS, and PUT). The OPTIONS method cannot be used with the auth URL.
The following request parameter is required:
The following request parameters are optional:
A successful execution returns the following headers with an "HTTP/1.1 204 No Content" response. The OPTIONS request to the storage URL does not require that the target exists. | https://docs.netapp.com/sgws-111/topic/com.netapp.doc.sg-swift/GUID-7289E60A-193B-4F23-9E5B-B5A7A6B8139F.html?lang=en | 2020-11-24T01:40:12 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.netapp.com |
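A minimal sketch of such an availability check is shown below, using .NET's HttpClient; the host name and port are placeholders, so substitute the address of your Storage Node or Gateway Node and the Swift port configured for your grid.

using System;
using System.Net.Http;
using System.Threading.Tasks;

class SwiftHealthCheck
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder endpoint: a Storage Node's Swift info URL.
        var request = new HttpRequestMessage(
            HttpMethod.Options, "https://storage-node.example.com:18083/info");

        var response = await client.SendAsync(request);

        // 204 No Content indicates the Swift service is available.
        Console.WriteLine($"Status: {(int)response.StatusCode}");
        Console.WriteLine($"Allow: {string.Join(", ", response.Content.Headers.Allow)}");
    }
}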
Active Directory and LDAP Authorisation
Activating Authorisation
To use Active Directory / LDAP for authorisation, first configure a respective authorisation domain in the authz section of sg_config:
authz:
  ldap:
    http_enabled: true
    authorization_backend:
      type: ldap
      config:
        ...
Configuring Authorisation
Authorisation is the process of retrieving backend roles for an authenticated user from an LDAP server. This is typically the same server(s) you use for authentication, but you can also use a different server if necessary. The only requirement is that the user to fetch the roles for actually exists on the LDAP server.
Since Search Guard always checks if a user exists in the LDAP server, you need to configure userbase, usersearch and username_attribute also in the authz section.
Authorisation works similarly to authentication. Search Guard issues an LDAP query containing the username against the role subtree of the LDAP tree.
As an alternative, Search Guard can also fetch roles that are defined as a direct attribute of the user entry in the user subtree.
Both methods can also be combined. Usually you will have either roles defined as an attribute of the user entry or roles stored in a seperate subtree.
Approach 1: Querying the role subtree
Search Guard substitutes the value of a configurable attribute from the user's directory entry into the role search query. Which attribute you want to use is specified by the userroleattribute setting.
userroleattribute: myattribute
Search Guard then issues the substituted query against the configured role subtree. The whole subtree underneath rolebase will be searched.
rolebase: 'ou=groups,dc=example,dc=com'
Since Search Guard v24 you can alternatively configure multiple role bases (this combines and replaces the rolesearch and rolebase attribute):
roles:
  normalroles:
    base: 'ou=groups,dc=example,dc=com'
    search: '(uniqueMember={0})'
  other:
    base: 'ou=othergroups,dc=example,dc=com'
    search: '(owner={0})'
If you use nested roles (roles which are members of other roles etc.), you can configure Search Guard to resolve these roles as well:
resolve_nested_roles: false
After all roles have been fetched, Search Guard extracts the final role names from a configurable attribute of the role entries:
rolename: cn
If this is not set, the DN of the role entry is used. You can now use this role name for mapping it to one or more Search Guard roles, as defined in roles_mapping.yml.
Approach 2: Using a user’s attribute as role name
If you store the roles as a direct attribute of the user entries in the user subtree, you only need to configure the attribute name:
userrolename: roles
You can configure multiple attribute names separated by comma:
userrolename: roles, otherroles
This approach can be combined with querying the role subtree. Search Guard will first fetch the roles from the user’s role attribute, and then execute the role search.
If you don’t use/have a role subtree, you can disable the role search completely:
rolesearch_enabled: false
Performance: Controlling LDAP user attributes
By default, Search Guard will read all LDAP user attributes and make them available for index name variable substitution or DLS query variable substitution.
If your LDAP entries have a lot of attributes, you may want to control which attributes should be made available as variables. Fewer attributes result in better runtime performance behaviour.
For example, when running Kibana you need a Kibana server user. This user is used by Kibana internally to manage stored objects and perform monitoring and maintenance tasks. You do not want to add this Kibana-internal user to your Active Directory installation, but store it in the Search Guard internal user database.

In this case, it makes sense to exclude the Kibana server user from the LDAP authorisation and role lookups.

Advanced: Exclude roles from nested roles lookups
If the users in your LDAP installation have a large amount of roles, and you have the requirement to resolve nested roles as well, you might run into the following performance issue:
- For each of the users roles, Search Guard resolves nested roles.
- This means at least one additional LDAP query per role.
- If a user has many roles, and these roles are deeply nested, this results in a lot of additional LDAP queries
- This means more network roundtrips and thus, depending on your network latency and LDAP response times, a performance penalty.
However, in most cases not all roles a user has are related to Elasticsearch / Kibana / Search Guard. You might need just one or two roles, and all other roles are irrelevant. If this is the case, you can use the nested role filter feature.
With this feature, you can define a list of roles which are filtered out from the list of the user’s roles, before nested roles are resolved. Wildcards and regular expressions are supported.
So if you already know which roles are relevant for your Elasticsearch cluster and which aren’t, simply list the irrelevant roles and enjoy improved performance.
This only has an effect if resolve_nested_roles is true.
nested_role_filter:
  - 'cn=Michael Jackson,ou*people,o=TEST'
  - ...
For more information on how to exclude users from lookups see the page Exclude certain users from authentication/authorization.
Advanced: Active Directory Global Catalog (DSID-0C0906DC)
Depending on your configuration you may need to use port 3268 instead of 389 so that the LDAP module is able to query the global catalog. Changing the port can help to avoid warnings like
[WARN ][o.l.r.SearchReferralHandler] Could not follow referral to ldap://ForestDnsZones.xxx.xxx.local/DC=ForestDnsZones,DC=xxx,DC=xxx,DC=local org.ldaptive.LdapException:]; remaining name 'DC=ForestDnsZones,DC=xxx,DC=xxx,DC=local' ... Caused by:]
For more details refer to the Active Directory Global Catalog documentation.
Configuration summary
Complete authorization example

userbase: 'ou=people,dc=example,dc=com'
usersearch: ...
Configuring multiple role bases
You can also configure multiple role bases. Search Guard will query all role bases to fetch the users LDAP groups:

users:
  primary-userbase:
    base: 'ou=people,dc=example,dc=com'
    search: '(uid={0})'
  secondary-userbase:
    base: 'ou=otherpeople,dc=example,dc=com'
    search: '(initials={0})'
username_attribute: uid
roles:
  primary-rolebase:
    base: 'ou=groups,dc=example,dc=com'
    search: '(uniqueMember={0})'
  secondary-rolebase:
    base: 'ou=othergroups,dc=example,dc=com'
    search: '(owner={0})'
userroleattribute: null
userrolename: none
rolename: cn
resolve_nested_roles: true
skip_users:
  - kibanaserver
  - 'cn=Michael Jackson,ou*people,o=TEST'
  - '/\S*/'
The names of the configuration keys (primary-rolebase, secondary-rolebase, …) are just descriptive names. You can choose them freely and you can configure as many role bases as you need.
Additional resources | https://docs.search-guard.com/latest/active-directory-ldap-authorisation | 2020-11-24T00:38:04 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.search-guard.com |
Axional Studio is a multi-tenant Model–view–controller development framework specifically designed to build large scale enterprise applications.
Multi-tenancy means that a single instance of the software and the entire
supporting infrastructure serves multiple customers.
Each customer shares the software application and also shares a single database
(or database infrastructure).
The data is tagged in the database as belonging to one customer or another,
and the software is smart enough to know to whom the data belongs.
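As a generic illustration of that idea (this is not Axional's actual implementation; the table and column names are invented), a multi-tenant data access layer typically appends the tenant key to every query it issues:

using System.Data.Common;

static class TenantQueries
{
    // Every table carries a tenant_id column, and the data access layer always adds
    // the tenant filter so one customer can never read another customer's rows.
    public static DbCommand CreateOrdersQuery(DbConnection connection, string tenantId)
    {
        var command = connection.CreateCommand();
        command.CommandText =
            "SELECT order_id, amount FROM orders WHERE tenant_id = @tenantId";

        var parameter = command.CreateParameter();
        parameter.ParameterName = "@tenantId";
        parameter.Value = tenantId;
        command.Parameters.Add(parameter);

        return command;
    }
}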
In contrast, classical applications are single-tenant, which means a single instance of the software
and the entire supporting infrastructure serves only a single customer. With single tenancy,
each customer has its own independent database and instance of the software. With this option,
there’s essentially no sharing going on. Every customer has its own perimeter, separated from everyone else.
Model–view–controller (MVC) is a software architectural pattern for implementing user
interfaces on computers. It divides a given application into three interconnected parts
in order to separate internal representations of information from the ways that
information is presented to and accepted from the user.
Multi-tenancy is mainly hidden from developers and should only be considered for database design.
In contrast, the Model–view–controller approach is open to programmers so they can interact with the
MVC application framework.
The following sections will provide a step-by-step guide to the process of
Axional Studio development.
1 Metadata
Axional Studio relies on
metadata to describe an application.
A
database dictionary is a group of tables that stores the
metadata of an application.
Every logical database object that Axional exposes is internally managed using metadata.
Objects (tables in traditional relational database parlance), fields, stored procedures,
and database triggers are all abstract constructs that exist as metadata.
- This metadata will be used to create database structures and database business logic at least once for each target database for a given application dictionary.
- During runtime, the application description metadata will be used to materialize application views, forms or reports.
Every server looks for its configuration description in its configuration database. This database holds the links between users, applications, databases and dictionaries.
- The user is validated against the users list.
- User options and available databases are loaded.
- As the user selects to run an application on a target database (from their available list of db resources), the associated metadata is loaded and the application is materialized.
For historical reasons, database dictionary names are prefixed by wic_. You should be familiar with the following database names.
2 Database modeling
According to the MVC paradigm we first need to have a model, so we can use the database dictionary to define the application model. As an application has at least a database side and a server side, the metadata can be split in two parts:
Database metadata, that defines all database tables and objects including database side business logic (Functions, Stored Procedures or Triggers).
Server metadata, that defines application objects like menus, forms, reports and server side business logic (Transaction handlers and Scripts).
2.1 Database metadata
Before starting an application you need a database model. It may be an existing one or
a new one. In any case,
Axional Studio provides tools for modeling the database,
to keep track of changes and, more importantly, to ensure proper deployment of the database
schema and business logic on any target database running the specified application.
- Database schema definition (DDL) including tables, indexes, foreign keys.
- Database business logic including stored procedures, functions and triggers.
- Database logical aspects like field names, default values, field validation rules, include lists, data helpers.
A large enterprise application can have thousands of objects, including tables, procedures,
triggers, foreign keys, etc. Handling them manually is impossible.
Axional Studio provides tools to check the database status against
its model definition, together with automatic reconditioning procedures.
To make this option available you must use the
Axional Studio database modeling features.
2.2 Server side metadata
A set of database dictionary tables store server side application definition including:
- Menus, to allow user navigation across application options.
- Forms, to allow data entry and data visualization.
- Reports of different types including business operational reports, page perfect reports and pixel perfect reports.
- Business logic processes, like sending an e-mail or performing complex calculations.
- Dashboards, to group complex data visualization.
This server-side definition is available to every Axional Studio server in the application cluster.
2.3 Target database
Using the database model, stored in a
dictionary you can create one or many target databases.
A target database can be used for development, test or production environments, but if they
run the same application, then they have the same application dictionary associated.
To set up a target database, you need to connect as manager and set it up inside the
wic_conf:
- Register the database object in
wic_dbms_objects.
- Register at least an administrator user to access it, in the
wic_dbms_users table.
TO DO: This section is incomplete and will be completed as soon as possible.
The next lines provide a sample configuration to run an application called
app1 on two databases
named
test_db1 and
prod_db1. The server will need at least the following
databases.
3 Views and controllers
As seen before, the development of a web application is guided by a database called the
dictionary.
This database gives programmers a multiuser, structured repository for all application code.
After having defined a model, we can start the development of its views.
3.1 DDL Data Definition Language
All database applications rely on the SQL language to perform queries or transactions.
Axional includes an
XSQL/DDL language that allows users to write database-independent SQL operations.
Writing SQL operations using
XSQL/DDL has great advantages, like those described for database modeling, but
also adds the possibility to use security injection.
3.2 Security
Axional Studio provides a feature named
security injection that allows isolating application logic from security.
The
security injection defines entities with selected table access restrictions. Then, these entities are applied
to users.
During application runtime, when any query is materialized, each
XSQL/DDL statement used is injected
with the appropriate security restrictions.
The Exchange Union Daemon (
xud) is the reference implementation powering OpenDEX, a decentralized exchange built on top of the Lightning and Connext network.
xud brings individual traders, market makers and exchanges onto OpenDEX to form a single global trading network and liquidity pool.
Exchanges create demand by using
xud to hedge end-user trades on OpenDEX and lock in profits. Market makers create supply, the liquidity on OpenDEX, by leveraging arbitrage to external exchanges (e.g. Binance). Exchange Union provides tools that decrease the friction for both sides, all integrated with
xud.
-> Get started as Market Maker, providing liquidity by arbitraging with external exchanges making a profit
-> Get started as Trader, buying and selling cryptocurrency preserving privacy & without counterparty risk
-> Get started as Developer, contributing or building on top of
xud
-> Get started as Exchange Operator, running a open-source exchange platform with integrated liquidity (coming soon!)
Traders benefit from anonymous & secure peer-to-peer trading on OpenDEX-enabled exchanges.
Market makers make profits by arbitraging between external exchanges and OpenDEX.
Exchanges secure profits by locking in trading fees through hedging trades on OpenDEX.
Supports traders, market makers, and exchanges.
Order book aggregates orders from the network locally.
Orders get matched locally with peer orders from the network.
Instant order settlement via atomic swaps on the lightning & connext network.
Full control over funds at all times.
One mnemonic for all assets.
Peer-to-peer discovery of other OpenDEX nodes.
gRPC API to serve other applications, also accessible via the command-line interface
xucli.
The daemon has been designed to be as developer-friendly as possible in order to facilitate application development on top of
xud.
api.exchangeunion.com: The automatically generated gRPC API documentation
typedoc.exchangeunion.com: The automatically generated code documentation
xud is in an early stage, just like this page. Please help us improve by opening issues (or, even better, PRs) for xud, xud-docker and the docs.
Feel like talking? Chat with us on Discord! | https://docs.exchangeunion.com/ | 2020-11-24T00:51:55 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.exchangeunion.com |
Templates
What is a template in iCC?
Templates are a way for you to design of logical folder structure that can be used for creating workspaces or folders within workspaces from iManage Work client applications. This capability is called Flexible Folders.
How are templates used in iManage Work clients?
iManage Work: Available by default with the iManage Work (includes workspace and Flexible Folders).
iManage DeskSite and iManage FileSite - Install iManage Work Add-ons for Classic clients to use this feature. For more information, refer to iManage Work Add-ons for Classic Clients Installation and User Guide.
Are the templates created by other applications supported by iCC?
iCC supports templates that exist in your library but were created using other applications.
iManage Work WorkSpace Generator: Templates created by iManage Work Workspace Generator are also supported by iCC. The variable format for the custom fields in these templates is also supported. For example, if Custom1 is defined as %workspace_value% in a template then the Custom1 value of the target workspace is assigned to the folders created using this template. For more information, refer to iManage Work Workspace Generator User and Installation Guide.
iManage Work Web - The templates created using classic iManage Work Web are supported by iCC with the limitation that only Document folder, Search folder and Tab objects are recognized. All other object types such as Calendar, Note, Link list, and so on are not considered. Therefore, these templates can be used to create Document folder, Search folder, and Tab objects. iCC also supports %workspace_value%.
What should I do in iCC to set up iManage Work Web templates?
No special configuration is required. Similar to iCC, the templates created using iManage Work Web are also stored in the library. The template workspaces are assigned a special type called 'template' and are listed in the Templates app in iCC. When a user searches for workspaces, these template workspaces are not retrieved by iManage Work because of the special template type set for them.
How do I configure the folder properties for templates?
The folder properties can be configured in iCC while creating the folder structure for a template. You can define the properties using any of the following options:
Enter the values manually and select accordingly from the drop-down list.
Define Custom1-12, Custom29, or Custom30 in the format #CXXALIAS# or %CXXALIAS%. For example, if Custom1 value is 9999, #C1ALIAS# sets the folder property as 9999 for the folder. You can use only % as a delimiter to work with iManage Work Workspace Generator.
Select the check boxes for the properties to automatically inherit the properties from the parent workspace (%workspace_value%). This option is currently available for Custom1-24 and Custom 29-30. To set the remaining properties, enter the values manually.
When these templates are used for creating workspaces or flexible folders from iManage client applications, the properties defined in the templates are automatically set for the workspace or folders and the documents created under these folders.
What are the limitations of using iManage Work Workspace Generator templates?
The XML templates are currently not supported by iCC.
What are the different folder options a user sees while using Templates?
iCC recognizes only Document Folder, Search Folder and Tab objects. All other object types that are supported by iManage Work such as Calendar, Note, Link list, and so on are not considered in iCC. Therefore, templates can be used to create only Document Folder, Search Folder, and Tab.
What about iManage Share folders?
If a user has the adequate access permissions to the workspace, they can create an iManage Share folder at the root level in the workspace, even if a template exists for the workspace.
Can I use iCC templates in iManage Work Workspace Generator to create workspaces and folder structure?
Yes. However, iManage Work Workspace Generator creates only those folders that are marked as type Workspace Creation in iCC templates. The remaining folders can be created by the users when required.
Can I use Work Web to modify templates after they have been edited in iCC?
You should not use iManage Work Web to modify templates after they have been edited in iCC. iManage Work Web removes the key properties that iCC uses to link the optional folders to the workspaces for Flexible Folders.
What is the difference between workspace creation and Required folder? When would I use each one?
The workspace creation folder in a template is automatically created along with the new workspace. However, the Required folder at workspace root level is equivalent to workspace creation. Required is valuable when you want a folder structure to be created mandatorily when a user creates a folder.
Flexible Folders
How are template folders matched in Flexible Folders?
Templates are matched using the match fields defined in the template with the custom metadata set for the target workspace.
How are the templates displayed during Flexible Folders creation?
The templates with EXACT MATCH on the Match Fields and the custom metadata set for the target workspace appear at the top, followed by templates with no match fields defined. If no match field is defined for the target workspace, all the templates are listed under TEMPLATE WITH NO FILTERS.
What happens in Flexible Folders when a template is deleted from the system?
The template mapping for the Flexible Folders still remains. However, a user can select the next matching template from the list to create more Flexible Folders. If there is no matching template, the default folder creation dialog box appears on the Template selection screen.
What happens in Flexible Folders when a template folder is deleted from the system? What happens after the folder is created again?
The Flexible Folder continues to have the key value of the template folder. When the template folder is created again, it is assigned a different folder template ID. However, the matching is done for the folder names of the target location and the template folder does not appear in the list if the Flexible Folder already exists in the target location.
What happens if no templates are set up in iCC?
You cannot create Flexible Folders and the dialog box to create a regular folder appears.
How do I create my own folder that is not tied to a flexible folder from a template?
If no more sub folders are defined in a template at a particular flexible folder level, you can create your own subfolders.
How do I create a folder at the same level when there are optional folders that have not been created?
You have to enable the native New Folder command and the Flexible Folders command simultaneously.
Can I use Flexible Folders with workspaces that were created using third-party tools?
Yes. The folders are matched on the folder names to create the Flexible Folders as there would be no template ID assigned to the workspace.
What care should be taken if a customer is using a third-party foldering tool?
For each folder created using a template, a FolderTemplateId property is created. It is stored in the PROJECT_NVPS table in the library. The FolderTemplateId property is stamped with the folder ID of the template folder used to create the folder. If a customer is using a third-party foldering tool which also creates another FolderTemplateId property with its own template value, a naming collision can occur and cause unpredictable results in either the third-party tool or in the Flexible Folders.
For example, when a user tries to create a new sub folder using iCC, an error can occur or the template may not be displayed correctly. To avoid this issue, libraries should be updated accordingly.
How do we enable users to create their own custom folders at a subfolder level?
This happens if no subfolders are defined in the template. For example, if you only have template folders at the workspace level, users are able to create their own sub folders within the workspace level folders.
Workspace
How do I set up the Workspace Creation dialog box?
For iManage Work, the workspace creation dialog box is available by default with iManage Work. For Desksite and Filesite, install iManage Work Add-ons. In both the cases, you also need the role bit enabled for creating workspaces.
Where does the security come from?
The default security and user/group access rights are defined for the template in iCC. By default, these settings are applied to the workspace. However, users can override the template security settings in client applications while creating workspaces.
Can I manually type in keywords like %C1ALIAS% in my workspace name or description and have them resolved?
Yes, you can set the description property by defining it in the #CXXALIAS# or %CXXALIAS% format for populating the Custom1-12, Custom29, or Custom30 fields. For example, if the Custom1 value is 9999, %C1ALIAS% sets the description as 9999 for the folder.
How do we configure the Properties that are available for Workspace, Folder, Search Folder and Tab?
Workspace: New Workspace Profile Dialog in iManage Dialog Editor tool.
Folder: New Profile Dialog in iManage Dialog Editor tool.
Search Folder: Search Dialog in iManage Dialog Editor tool.
Tab: No configuration required as tabs do not have their own properties.
What happens if I add a new template folder for workspace creation?
iManage Work Workspace Generator adds the new template folder the next time it updates workspaces that use the template. For more information, refer to iManage Work Workspace Generator User and Installation Guide.
As the workspace is already created, this Flexible Folders is treated as a Required folder. The next time a user tries to create a folder in the workspace, it gets automatically created.
How can I determine which template was used to create a workspace?
The template ID can be found in the list of the name-value pairs properties of a workspace by issuing the following API call:
GET /customers/{customerId}/libraries/{libraryId}/workspaces/{workspaceId}/name-value-pairs
For example:
"data": {
"TemplateId": "active!179427"
}
Next, issue the following API to get a list of all templates:
GET customers/{customerId}/libraries/{libraryId}/templates
In the list of templates returned, locate the template based on the TemplateID. | https://docs.imanage.com/cc-help/10.3.1/en/FAQ.html | 2020-11-24T00:55:15 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.imanage.com |
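For example, the templates endpoint above could be called with any HTTP client. This is only a hedged sketch: the host name, customer and library identifiers, and the authentication header are assumptions that will differ in your environment.

# Hypothetical host, IDs and auth header -- replace with values from your own setup.
curl -s \
  -H "X-Auth-Token: $IMANAGE_TOKEN" \
  "https://work.example.com/api/customers/1/libraries/ACTIVE/templates"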
Editing Labor Transactions
Occasionally mistakes are made. Mechanics may forget to clock off the job they previously worked on before they start tracking time on the job they are currently working on. This records too much labor on the previous job and too little labor (or none at all) on the current job.
Complete the following steps to edit Paperless Shop labor transactions:
- Select the Edit a Labor Transaction option (WPD).
- Enter the mechanic number or press F1 to select a mechanic from the lookup list.
- Enter the date range for the transactions you would like displayed.
- Highlight a transaction and click the EDIT button.
- You may make adjustments to the start and end date/times as well as modify the completion code.
- Click the UPDATE button after you are satisfied with your changes.
- Edit any additional transactions necessary to correct the mechanic's timeline. | https://docs.rtafleet.com/rta-manual/paperless-shop-module/editing-labor-transactions/ | 2020-11-24T01:17:22 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.rtafleet.com |
As you create or edit your vRealize Automation cloud template designs, use the most appropriate network resources for your objectives. Learn about the NSX and cloud-agnostic network options that are available in the cloud template.
Select one of the available network resource types based on machine and related conditions in your vRealize Automation cloud template.
Cloud agnostic network resource
The cloud agnostic network is available in the cloud template as a Cloud.Network resource type. The default resource displays as:

Cloud_Network_1:
  type: Cloud.Network
  properties:
    networkType: existing
Use a cloud agnostic network when you want to specify networking characteristics for a target machine type that is not, or might not, be connected to an NSX network. The cloud agnostic network resource is available for these machine types:
- Cloud agnostic machine
- vSphere
- Google Cloud Platform (GCP)
- Amazon Web Services (AWS)
- Microsoft Azure
- VMware Cloud on AWS (VMC)
The Cloud.Network resource is available for these network type (networkType) settings:
- public
- private
- outbound
- existing
vSphere network resource
The vSphere network is available in the cloud template as a Cloud.vSphere.Network resource type. The default resource displays as:

Cloud_vSphere_Network_1:
  type: Cloud.vSphere.Network
  properties:
    networkType: existing
Use a vSphere network when you want to specify networking characteristics for a vSphere machine type (
Cloud.vSphere.Machine).
The vSphere network resource is only available for a
Cloud.vSphere.Machine machine type.
The Cloud.vSphere.Network resource is available for these network type (networkType) settings:
- public
- private
- existing
For more information about network types, see Using network settings in network profiles and cloud templates in vRealize Automation.
NSX network resource
The NSX network is available in the cloud template as a Cloud.NSX.Network resource type. The default resource displays as:

Cloud_NSX_Network_1:
  type: Cloud.NSX.Network
  properties:
    networkType: existing
Use an NSX network when you want to attach a network resource to one or more machines that have been associated to an NSX-V or NSX-T cloud account. The NSX network resource allows you to specify NSX networking characteristics for a vSphere machine resource that is associated to an NSX-V or NSX-T cloud account.
The Cloud.NSX.Network resource is available for these network type (networkType) settings:
- public
- private
- outbound
- existing
- routed - Routed networks are only available for NSX-V and NSX-T.
Each on-demand NSX-T network creates a new Tier-1 logical router. Each on-demand NSX-V network creates a new Edge.
To support NAT rules and NAT port forwarding, you can add a
Cloud.NSX.Gateway cloud template resource to allow DNAT rules to be specified for the gateway/router that is connected to an outbound NSX-V or NSX-T network. The gateway must be attached to a single outbound network and can be connected to multiple machines or load balancers that are connected to the same outbound network. DNAT rules specified within the gateway reference these machines or load balancers as their target. NAT rules cannot be specified for clustered machines, however as a Day 2 operation they can be specified for individual machines within the cluster.
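As a minimal sketch of how these pieces fit together in a cloud template (the image and flavor names are assumptions that must match your own image and flavor mappings), a vSphere machine can be attached to an on-demand outbound NSX network like this:

resources:
  Cloud_NSX_Network_1:
    type: Cloud.NSX.Network
    properties:
      networkType: outbound
  Cloud_vSphere_Machine_1:
    type: Cloud.vSphere.Machine
    properties:
      image: ubuntu            # assumed image mapping name
      flavor: small            # assumed flavor mapping name
      networks:
        - network: '${resource.Cloud_NSX_Network_1.id}'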
For related information, see Network, security, and load balancer examples in vRealize Automation cloud templates.
External IPAM integration options
For information about properties that are available for use with your Infoblox IPAM integrations in cloud template designs and deployments, see Using Infoblox-specific properties and extensibility attributes for IPAM integrations in vRealize Automation.
Available day 2 operations
For a list of common day 2 operations that are available for cloud template and deployment resources, see What actions can I run on vRealize Automation Cloud Assembly deployments.
For an example of how to move from one network to another, see Move a deployed machine to another network.
Learn more
For information about defining network resources, see Network resources in vRealize Automation.
For information about defining network profiles, see Learn more about network profiles in vRealize Automation.
For examples of cloud template designs that illustrate sample network resources and settings, see Network, security, and load balancer examples in vRealize Automation cloud templates. | https://docs.vmware.com/nl/vRealize-Automation/8.2/Using-and-Managing-Cloud-Assembly/GUID-19347DB8-8BF9-4FD1-B5A3-97A900915B8E.html | 2020-11-24T01:42:17 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
smrt.rtsolver package
This directory contains different solvers of the radiative transfer equation. Based on the electromagnetic properties of
each layer computed by the EM model, these RT solvers compute the emission and propagation of energy in the medium up to the surface (the atmosphere is usually
dealt with independently in dedicated modules in
smrt.atmosphere).
The solvers differ in their approximations and numerical methods.
dort is currently the most accurate and is recommended
in most cases, unless computation time is a constraint.
The selection of the solver is done with the
make_model() function.
For Developers
To experiment with DORT, we recommend copying the file dort.py to e.g. dort_mytest.py so it is immediately available through
make_model().
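For example, assuming the copy is named dort_mytest.py, it can then be selected by name (the electromagnetic model "iba" is just an illustrative choice):

from smrt import make_model

# "iba" is the electromagnetic model; "dort_mytest" picks up rtsolver/dort_mytest.py
model = make_model("iba", "dort_mytest")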
To develop a new solver that will be accessible by the
make_model() function, you need to add
a file in this directory; take a look at dort.py, which is not simple but is the only one at the moment. Only the solve method needs
to be implemented. It must return a
Result instance with the results. Contact the core developers for more details.
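A bare-bones skeleton might look as follows. This is only a sketch: the import path and the solve signature are assumptions modelled on dort.py, so check that file for the exact interface.

# hypothetical new solver module placed in smrt/rtsolver/, e.g. mysolver.py
from smrt.core.result import Result  # assumed import path; see dort.py for the real one


class MySolver(object):
    """Skeleton of a radiative transfer solver (illustrative only)."""

    def solve(self, snowpack, emmodels, sensor, atmosphere=None):
        # Compute the emerging intensities here, then wrap them in a
        # Result instance exactly as dort.py does.
        raise NotImplementedError("fill in the actual RT computation")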
Framapic
Framapic is a free and open source service which allows you to share pictures in a secure and confidential way.
- Paste the image you want to share.
- If needed, define the retention policy.
- You can then share the link you are given with other people for them to see the picture.
Your pictures are encrypted and stored on our servers. We cannot see the content of your files nor decrypt them. This service is based on free software: Lutim.
Thanks to a new feature added by its developer, Framapic now allows you to easily create an image gallery! Find out how in our example.
Video Tutorial
Tutorial made by arpinux, landscape architect of the GNU/Linux beginners' distribution HandyLinux.
See more:
- Try Framapic
- How to
- Create a gallery
- Android app:
- De-google-ify Internet
- Support Framasoft | https://docs.framasoft.org/en/lutim/index.html | 2020-11-24T01:27:15 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.framasoft.org |
In order to manage Custom Resources from a
ServiceCluster, we have to tell KubeCarrier how to find them and how we want to offer them to our users.
First we need some kind of
CustomResourceDefinition or Operator installation in our
ServiceCluster.
To help get you started, we have a fictional example CRD that can be used without having to set up an Operator.
Register the CRD in the
ServiceCluster.
# make sure you are connected to the ServiceCluster
# that's `eu-west-1` if you followed our earlier guide.
$ kubectl config use-context kind-eu-west-1
Switched to context "kind-eu-west-1".
$ kubectl apply \
  -f
customresourcedefinition.apiextensions.k8s.io/couchdbs.couchdb.io created
$ kubectl get crd
NAME                  CREATED AT
couchdbs.couchdb.io   2020-03-10T10:27:51Z
Now we will tell the KubeCarrier installation to work with this CRD.
We can accomplish this by creating a
CatalogEntrySet. This object describes which CRD should be fetched from which ServiceCluster, provides metadata for the Service Hub, and limits which fields are available to users.
CatalogEntrySet definition
apiVersion: catalog.kubecarrier.io/v1alpha1
kind: CatalogEntrySet
metadata:
  name: couchdbs.eu-west-1
spec:
  metadata:
    displayName: CouchDB
    description: The comfy database
  discover:
    crd:
      name: couchdbs.couchdb.io
    serviceClusterSelector: {}
  derive:
    expose:
    - versions:
      - v1alpha1
      fields:
      - jsonPath: .spec.username
      - jsonPath: .spec.password
      - jsonPath: .status.phase
      - jsonPath: .status.fauxtonAddress
      - jsonPath: .status.address
      - jsonPath: .status.observedGeneration
# make sure you are connected to the KubeCarrier Cluster
# that's `kubecarrier` if you followed our earlier guide.
$ kubectl config use-context kind-kubecarrier
Switched to context "kind-kubecarrier".
$ kubectl apply -n team-a \
  -f
catalogentryset.catalog.kubecarrier.io/couchdbs created
$ kubectl get catalogentryset -n team-a
NAME       STATUS   CRD                   AGE
couchdbs   Ready    couchdbs.couchdb.io   19s
As soon as the
CatalogEntrySet is ready, you will notice two new
CustomResourceDefinitions appearing in the Cluster:
$ kubectl get crd -l kubecarrier.io/origin-namespace=team-a
NAME                                 CREATED AT
couchdbs.eu-west-1.team-a            2020-07-31T09:36:04Z
couchdbs.internal.eu-west-1.team-a   2020-07-31T09:35:50Z
The
couchdbs.internal.eu-west-1.team-a object is just a copy of the CRD present in the
ServiceCluster, while
couchdbs.eu-west-1.team-a is a “slimmed-down” version, only containing fields specified in the
CatalogEntrySet. Both CRDs are “namespaced” by their API group.
Now that we have successfully registered a
CustomResourceDefinition from another cluster, attached metadata to it and created a “public” interface for other people, we can go ahead and actually offer this
CouchDB object to other users.
The
CatalogEntrySet we created in the previous step manages
CatalogEntries for all
ServiceClusters that match the given
serviceClusterSelector.
$ kubectl get catalogentry -n team-a
NAME                 STATUS   BASE CRD                             TENANT CRD                  AGE
couchdbs.eu-west-1   Ready    couchdbs.internal.eu-west-1.team-a   couchdbs.eu-west-1.team-a   26s
We can now reference these
CatalogEntries in a
Catalog and offer them to
Tenants.
Every
Account with the
Tenant role has a
Tenant object created in each
Provider namespace.
$ kubectl get tenant -n team-a
NAME     AGE
team-b   5m35s
These objects allow the
Provider to organize them by setting labels on them, so they can be selected by a
Catalog.
This
Catalog selects all
CatalogEntries and offers them to all
Tenants:
Catalog definition
apiVersion: catalog.kubecarrier.io/v1alpha1
kind: Catalog
metadata:
  name: default
spec:
  # selects all the Tenants
  tenantSelector: {}
  # selects all the CatalogEntries
  catalogEntrySelector: {}
$ kubectl apply -n team-a \
  -f
catalog.catalog.kubecarrier.io/default created
$ kubectl get catalog -n team-a
NAME      STATUS   AGE
default   Ready    5s
When the
Catalog is ready, selected
Tenants can discover objects available to them and RBAC is setup to users to work with the CRD in their namespace.
Here we also use
kubectl user impersonation (
--as), to showcase RBAC:
# Offering objects contain information about CRDs that are shared to a Tenant.
# They contain all the information to validate and create new instances.
$ kubectl get offering -n team-b --as=team-b-member
NAME                        DISPLAY NAME   PROVIDER   AGE
couchdbs.eu-west-1.team-a   CouchDB        team-a     3m15s

# Region exposes information about the underlying Clusters.
$ kubectl get region -n team-b --as=team-b-member
NAME               PROVIDER   DISPLAY NAME   AGE
eu-west-1.team-a   team-a     EU West 1      5m14s

# Provider exposes information about the Provider of an Offering.
$ kubectl get provider -n team-b --as=team-b-member
NAME     DISPLAY NAME   AGE
team-a   The A Team     6m11s
The user-defined events are based on search.
All the user-defined events are listed on the User-defined Events page under Settings. The following fields are specified for each event.
You can edit or delete the event. While editing it, you can specify the email address and the frequency of the email notification. | https://docs.vmware.com/en/VMware-vRealize-Network-Insight-Cloud/services/com.vmware.vrni.using.doc/GUID-43C5080E-DFE6-4869-B15C-AA5EDDF16330.html | 2020-11-24T01:50:28 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
If smart card authentication is enabled and other authentication methods are disabled, users are then required to log in using smart card authentication.
If login from the vSphere Web Client is not working, and if user name and password authentication is turned off, a root or administrator user can turn user name and password authentication back on from the Platform Services Controller command line by running the following command. The example is for Windows; for Linux, use sso-config.sh.
sso-config.bat -set_authn_policy -pwdAuthn true
You can find the sso-config script at the following locations:
Prerequisites
For example:
sso-config.bat -set_tc_cert_authn -switch true -cacerts MySmartCA1.cer -t vsphere.local
- Restart the virtual or physical machine.
service-control --stop vmware-stsd
service-control --start vmware-stsd
- To enable smart card authentication for VMware Directory Service (vmdir), run the following command.
sso-config.[bat|sh] -set_authn_policy -certAuthn true -cacerts first_trusted_cert.cer,second_trusted_cert.cer -t tenant
For example:
sso-config.[bat|sh] -set_authn_policy -certAuthn true -cacerts MySmartCA1.cer,MySmartCA2.cer -t vsphere.local
If you specify multiple certificates, do not put spaces between them.
- To disable all other authentication methods, run the following commands.
sso-config.sh -set_authn_policy -pwdAuthn false -t vsphere.local
sso-config.sh -set_authn_policy -winAuthn false -t vsphere.local
sso-config.sh -set_authn_policy -securIDAuthn false -t vsphere.local
You can use these commands to enable and disable different authentication methods as needed.
- (Optional) To set a certificate policies allowlist, run the following command.
sso-config.[bat|sh] -set_authn_policy -certPolicies policies
To specify multiple policies, separate them with a comma, for example:
sso-config.bat -set_authn_policy -certPolicies 2.16.840.1.101.2.1.11.9,2.16.840.1.101.2.1.11.19
This allowlist specifies the object IDs of policies that are allowed in the certificate's certificate policy extension. An X509 certificate can have a Certificate Policy extension.
- (Optional) To list configuration information, run the following command.
sso-config.[bat|sh] -get_authn_policy -t tenantName | https://docs.vmware.com/en/VMware-vSphere/6.0/com.vmware.vsphere.security.doc/GUID-02904AAD-D71C-4251-998F-854C25A2E18E.html | 2020-11-24T02:03:55 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
Services for iOS
At the moment, there are two iOS APIs partially implemented, GameCenter and Storekit. Both use the same model of asynchronous calls explained below: the method first returns an Error value immediately. If the value is not 'OK', the call operation is completed, with an error probably caused locally (no internet connection, API incorrectly configured, etc). If the error value is 'OK', a response event will be produced and added to the 'pending events' queue. Example:

func on_purchase_pressed():
    var result = InAppStore.purchase( { "product_id": "my_product" } )
    if result == OK:
        animation.play("busy") # show the "waiting for response" animation
    else:
        show_error()

# put this on a 1 second timer or something
func check_events():
    while InAppStore.get_pending_event_count() > 0:
        var event = InAppStore.pop_pending_event()

Store Kit
Implemented in platform/iphone/in_app_store.mm
The Store Kit API is accessible through the "InAppStore" singleton (it will always be available from GDScript). It is initialized automatically. It has three methods for purchasing.

Game Center
Implemented in platform/iphone/game_center.mm
The Game Center API is available through the “GameCenter” singleton. It has 6 methods:
Error post_score(Variant p_score);
Error award_achievement(Variant p_params);
Error reset_achievements();
Error request_achievements();
Error request_achievement_descriptions();
Error show_game_center(Variant p_params);
plus the standard pending event interface.
post_score
Posts a score to a Game Center leaderboard.
Parameters
Takes a Dictionary as a parameter, with two fields:
score: a float number
category: a string with the category name
Example:
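A minimal sketch (the leaderboard category name "my_leaderboard" is only an illustrative assumption):

var result = GameCenter.post_score( { "score": 100.0, "category": "my_leaderboard" } )
if result == OK:
    pass # wait for the response event on the pending events queue
else:
    show_error() # hypothetical error handler, as in the InAppStore example above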
Getting Batch Recommendations
Use an asynchronous batch workflow to get recommendations from large datasets that do not require real-time updates. For instance, you might create a batch inference job to get product recommendations for all users on an email list, or to get item-to-item similarities (SIMS) across an inventory. To get batch recommendations, you can create a batch inference job by calling CreateBatchInferenceJob.
In order to get batch recommendations, the IAM user role that invokes the CreateBatchInferenceJob operation must have read and write permissions to your input and output Amazon S3 buckets respectively. For more information on bucket permissions, see User Policy Examples in the Amazon Simple Storage Service (S3) Developer Guide.
Amazon S3 buckets and objects must be either encryption free or, if you are using AWS Key Management Service (AWS KMS) for encryption, you must give your IAM user and Amazon Personalize IAM service role permission to use your key. For more information see Using key policies in AWS KMS in the AWS Key Management Service Developer Guide.
You can perform batch inference operations with any of the following tools:
How scoring works
Item scores calculated by batch recommendation jobs are calculated the same way as described in Getting Real-Time Recommendations, and can be viewed in the batch job's output JSON file. Scores are only returned by models trained with the HRNN and Personalize-Ranking recipes.
Input and Output JSON Examples
The CreateBatchInferenceJob uses a solution version to make recommendations based on data provided in an input JSON file. The result is then returned as a JSON file to an Amazon S3 bucket. The following tab list contains correctly formatted JSON input and output examples for each recipe type.
USER_PERSONALIZATION
- Input
{"userId": "4638"} {"userId": "663"} {"userId": "3384"} ...
- Output
{"input":{"userId":"4638"}, "output": {"recommendedItems": ["296", "1", "260", "318"]}, {"scores": [0.0009785, 0.000976, 0.0008851]}} {"input":{"userId":"663"}, "output": {"recommendedItems": ["1393", "3793", "2701", "3826"]}, {"scores": [0.00008149, 0.00007025, 0.000652]}} {"input":{"userId":"3384"}, "output": {"recommendedItems": ["8368", "5989", "40815", "48780"]}, {"scores": [0.003015, 0.00154, 0.00142]}} ...
Popularity-Count
- Input
{} {"itemId": "105"} {"itemId": "41"} ...
- Output
{"input": {}, "output": {"recommendedItems": ["105", "106", "441"]}} {"input": {"itemId": "105"}, "output": {"recommendedItems": ["105", "106", "441"]}} {"input": {"itemId": "41"}, "output": {"recommendedItems": ["105", "106", "441"]}} ...
Personalize-Ranking
- Input
{"userId": "891", "itemList": ["27", "886", "101"]} {"userId": "445", "itemList": ["527", "55", "901"]} {"userId": "71", "itemList": ["27", "351", "101"]} ...
- Output
{"input": {"userId": "891", "itemList": ["27", "886", "101"]}, "output": {"recommendedItems": ["27", "101", "886"]}, {"scores": [0.48421, 0.28133, 0.23446]}} {"input": {"userId": "445", "itemList": ["527", "55", "901"]}, "output": {"recommendedItems": ["901", "527", "55"]}, {"scores": [0.46972, 0.31011, 0.22017]}} {"input": {"userId": "71", "itemList": ["29", "351", "199"]}, "output": {"recommendedItems": ["351", "29", "199"]}, {"scores": [0.68937, 0.24829, 0.06232]}} ...
SIMS
- Input
{"itemId": "105"} {"itemId": "106"} {"itemId": "441"} ...
- Output
{"input": {"itemId": "105"}, "output": {"recommendedItems": ["106", "107", "49"]}, } {"input": {"itemId": "106"}, "output": {"recommendedItems": ["105", "107", "49"]}} {"input": {"itemId": "441"}, "output": {"recommendedItems": ["2", "442", "435"]}} ...
Getting Batch Recommendations (Amazon Personalize Console)
The following procedure outlines the batch inference workflow using the Amazon Personalize console. This procedure assumes that you have already created a solution that is properly formatted to perform the desired batch job on your dataset.
Open the Amazon Personalize console at
and sign in to your account.
Choose Batch inference jobs in the navigation pane, then choose Create batch inference job.
In Batch inference job details, in Batch inference job name, specify a name for your batch inference job.
For IAM service role, choose the Amazon Personalize IAM service role that has read and write access to your input and output Amazon S3 buckets respectively.
For Solution, choose the solution that you want to use to generate the recommendations The solution recipe must match the input data's format.
In Input data configuration, specify the Amazon S3 path to your input file. In Output data configuration, specify the path to your output Amazon S3 bucket.
Choose Create batch inference job. Batch inference job creation starts and the Batch inference jobs page appears with the Batch inference job detail section displayed.
Note
Creating a batch inference job takes time.
When the batch inference job's status changes to Active, you can retrieve the job's output from the designated output Amazon S3 bucket. The output file's name will be of the format input-name.out.
Getting Batch Recommendations (AWS CLI)
The following is an example of a batch inference workflow using the AWS CLI for a
solution trained using the
USER_PERSONALIZATION
recipe. A JSON file
called
batch.json is passed as input, and the output file,
batch.json.out, is returned to an Amazon S3 bucket.
For
batch-inference-job-config, the example includes
USER_PERSONALIZATION recipe-specific
itemExplorationConfig hyperparameters:
explorationWeight and
explorationItemAgeCutOff. Optionally include
explorationWeight and
explorationItemAgeCutOff values to configure exploration.
For more information, see
User-Personalization Recipe.
aws personalize create-batch-inference-job \
  --job-name batchTest \
  --solution-version-arn arn:aws:personalize:us-west-2:012345678901:solution/batch-test-solution-version/1234abcd \
  --job-input s3DataSource={path=s3://personalize/batch/input/input.json} \
  --job-output s3DataDestination={path=s3://personalize/batch/output/} \
  --role-arn arn:aws:iam::012345678901:role/import-export-role \
  --batch-inference-job-config itemExplorationConfig={explorationWeight=0.3,explorationItemAgeCutOff=30}
{ "batchInferenceJobArn": "arn:aws:personalize:us-west-2:012345678901:batch-inference-job/batchTest" }
Once a batch inference job is created, you can inspect it further with the DescribeBatchInferenceJob operation.
aws personalize describe-batch-inference-job --batch-inference-job-arn arn:aws:personalize:us-west-2:012345678901:batch-inference-job/batchTest
{ "jobName": "batchTest", "batchInferenceJobArn": "arn:aws:personalize:us-west-2:012345678901:batch-inference-job/batchTest", "solutionVersionArn": "
arn:aws:personalize:us-west-2:012345678901:solution/batch-test-solution-version/1234abcd", "jobInput": { "s3DataSource": { "path": "s3://personalize/batch/input/batch.json" } }, "jobOutput": { "s3DataDestination": { "path": "s3://personalize/batch/output/" } }, "roleArn": "arn:aws:iam::012345678901:role/import-export-role", "status": "ACTIVE", "creationDateTime": 1542392161.837, "lastUpdateDateTime: 1542393013.377 }
Getting Batch Recommendations (AWS Python SDK)
Use the following code to get batch recommendations using the AWS Python SDK. The
example includes
itemExplorationConfig
hyperparameters for solution versions trained using the
USER_PERSONALIZATION recommendation
recipe. Optionally include the
itemExplorationConfig
hyperparameters to configure exploration. For more information see
User-Personalization Recipe.
The CreateBatchInferenceJob operation reads an input JSON file from an Amazon S3 bucket and places an output JSON file (input-file-name.out) in an Amazon S3 bucket.
The first item in the response file is considered by Amazon Personalize to be of most interest to the user.
import boto3

personalize_rec = boto3.client(service_name='personalize')

personalize_rec.create_batch_inference_job(
    solutionVersionArn = "Solution version ARN",
    jobName = "Batch job name",
    roleArn = "IAM role ARN",
    batchInferenceJobConfig = {
        # optional USER_PERSONALIZATION recipe hyperparameters
        "itemExplorationConfig": {
            "explorationWeight": "0.3",
            "explorationItemAgeCutOff": "30"
        }
    },
    jobInput = {"s3DataSource": {"path": "S3 input path"}},
    jobOutput = {"s3DataDestination": {"path": "S3 output path"}}
)
The command returns the ARN for the batch job (the
batchRecommendationsJobArn).
Processing the batch job might take a while to complete. You can check a job's status
by
calling DescribeBatchInferenceJob and passing a
batchRecommendationsJobArn as the input parameter. You can also list all
Amazon Personalize batch inference jobs in your AWS environment by calling ListBatchInferenceJobs. | https://docs.aws.amazon.com/personalize/latest/dg/recommendations-batch.html | 2020-11-24T02:14:51 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.aws.amazon.com |
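For example, with the AWS Python SDK you can poll the job status until it becomes ACTIVE (a minimal sketch; the job ARN is a placeholder):

import time
import boto3

personalize = boto3.client(service_name='personalize')
job_arn = "arn:aws:personalize:us-west-2:012345678901:batch-inference-job/batchTest"

while True:
    # describe_batch_inference_job returns the job details, including its status
    status = personalize.describe_batch_inference_job(
        batchInferenceJobArn=job_arn
    )['batchInferenceJob']['status']
    print("Batch inference job status:", status)
    if status in ('ACTIVE', 'CREATE FAILED'):
        break
    time.sleep(60)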
Advanced Tagging¶
The
OTA_LITE_TAG variable can be used in a few ways to add advanced
tagging capabilities to a factory.
Terminology¶
Platform Build - A build created by a change to the LMP/OE. This is the base OS image.
Container Build - A build created by a change to containers.git.
Target - This is an entry in a factory's TUF targets.json file. It represents what should be thought of as an immutable combination of the Platform build's OSTree hash + the output of a Container build.
Tag - A Target has a "custom" section with a list of Tags. The tags can be used to say things like "this is a development build" or "this is a production build".
OTA_LITE_TAG¶
The
OTA_LITE_TAG determines what tag a target will get when a container
or platform build is performed. The format is:
TAG[:INHERIT_TAG][,TAG[:INHERIT_TAG]....]
Scenario 1: A new platform build that re-uses containers¶
Consider the case where a factory might have a tag called “postmerge” defined for both the lmp.yml and containers.yml builds. A new branch is added to the LMP called “postmerge-stable” that’s going to be based on older, more stable code. However, this new build will use the containers found in the most recent “postmerge” target. This can be expressed in lmp.yml with:
OTA_LITE_TAG: "postmerge-stable:postmerge"
However, this means that changes to containers.git will now also need to produce a new “postmerge-stable” target. So containers.yaml would need to be update with:
OTA_LITE_TAG: "postmerge,postmerge-stable"
Consider this pseudo targets example:
targets:
  build-1:
    ostree-hash: DEADBEEF
    docker-apps: foo:v1, bar:v1
    tags: postmerge-stable
  build-2:
    ostree-hash: GOODBEEF
    docker-apps: foo:v2, bar:v2
    tags: postmerge
If a change to the postmerge-stable branch was pushed to the LMP, a new target, build-3, would be added. The build logic would then look through the targets list to find the most recent “postmerge” target so that it can copy those docker-apps. This would result in a new target:
build-3:
  ostree-hash: NEWHASH
  docker-apps: foo:v2, bar:v2
  tags: postmerge-stable
On the other hand, there might also be a new container build for “postmerge”.
In this case the tag specification
postmerge,postmerge-stable would tell
the build logic to produce two new targets:
build-4:  # for postmerge-stable it will be based on build-3
  ostree-hash: NEWHASH
  docker-apps: foo:v3, bar:v3
  tags: postmerge-stable
build-4:  # for postmerge, it will be based on build-2
  ostree-hash: GOODBEEF
  docker-apps: foo:v3, bar:v3
  tags: postmerge
Scenario 2: Multiple container builds using the same platform¶
This scenario is the reverse of the previous one. A factory might have a platform build tagged with “devel-X”. However, there are two versions of containers being worked on: “devel-X” and “devel-Y”. This could be handled by changing lmp.yml to:
OTA_LITE_TAG: "devel-X,devel-Y"
and containers.yml to:
OTA_LITE_TAG: "devel-X,devel-Y:devel-X"
Scenario 3: Multiple teams, different cadences¶
Some organizations may have separate core platform and application teams. In this scenario, it may be desirable to let each team move at their own decoupled paces. Furthermore, the applications team might have stages(branches) of development they are working on. This could be handled by changing lmp.yml to:
OTA_LITE_TAG: "devel-core"
Then each branch of development for the containers could have things like:
# For the "app-dev" branch of containers.git OTA_LITE_TAG: "app-dev:devel-core" # For the "app-qa" branch of containers.git OTA_LITE_TAG: "app-qa:devel-core"
This scenario is going to produce
devel tagged builds that have no
containers, but can be generically verified. Then each containers.git branch
will build targets and grab the latest “devel-core” tag to base its platform
on. NOTE: Changes to devel-core don’t cause new container builds. In
order to get a container’s branch updated to the latest
devel-core a user
would need to push an empty commit to containers.git to trigger a new build.
eg:
# from branch app-dev
git commit --allow-empty -m 'Pull in latest devel-core changes'
Quickstart: Add guest users to your directory in the Azure portal
You can invite anyone to collaborate with your organization by adding them to your directory as a guest user. Then you can either send an invitation email that contains a redemption link or send a direct link to an app you want to share. Guest users can sign in with their own work, school, or social identities. Along with this quickstart, you can learn more about adding guest users in the Azure portal, via PowerShell, or in bulk.
In this quickstart, you'll add a new guest user to your Azure AD directory via the Azure portal, send an invitation, and see what the guest user's invitation redemption process looks like.
If you don’t have an Azure subscription, create a free account before you begin.
Prerequisites
To complete the scenario in this tutorial, you need:
- A role that allows you to create users in your tenant directory, like the Global Administrator role or any of the limited administrator directory roles.
- A valid email account that you can add to your tenant directory, and that you can use to receive the test invitation email.
Add a new guest user in Azure AD
Sign in to the Azure portal as an Azure AD administrator.
In the left pane, select Azure Active Directory and add a new guest user to your directory by sending an invitation to the test email address. Next, you'll assign the test guest user to the app.
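If you prefer scripting, an invitation can also be sent with the AzureAD PowerShell module. This is a minimal sketch; the email address and redirect URL are placeholders:

Connect-AzureAD
New-AzureADMSInvitation `
  -InvitedUserEmailAddress "[email protected]" `
  -InvitedUserDisplayName "Test Guest" `
  -InviteRedirectUrl "https://myapps.microsoft.com" `
  -SendInvitationMessage $true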
Assign an app to the guest user
Sign in to the Azure portal as an Azure AD administrator.
In the left pane, select Enterprise applications, and then open the application you want to share (Salesforce in this example). Add the test guest user to the application: search for the test user (if necessary) and select the test user in the list. Then click Select.
Select Assign.
Accept the invitation
Now sign in as the guest user to see the invitation.
In your inbox, find the "You're invited" email.
In the email body, select Get Started. A Review permissions page opens in the browser.
Select Accept. The Access Panel opens, which lists the applications the guest user can access.
Clean up resources
When no longer needed, delete the test guest user and the test app.
- Sign in to the Azure portal as an Azure AD administrator.
- In the left pane, select Azure Active Directory.
- Under Manage, select Enterprise applications.
- Open the application Salesforce, and then select Delete.
- In the left pane, select Azure Active Directory.
- Under Manage, select Users.
- Select the test user, and then select Delete user.
Next steps
In this tutorial, you created a guest user in the Azure portal, and sent an invitation to share apps. Then you viewed the redemption process from the guest user's perspective and verified that the app appeared on the guest user's Access Panel. To learn more about adding guest users for collaboration, see Add Azure Active Directory B2B collaboration users in the Azure portal. | https://docs.microsoft.com/en-us/azure/active-directory/external-identities/b2b-quickstart-add-guest-users-portal?WT.mc_id=AZ-MVP-5003450 | 2020-11-24T00:33:17 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.microsoft.com |
If you specify a path to a vCenter Server folder that includes certain special characters in the name of an entity, you must escape the special characters.
Do not escape the slashes in the path name itself. For example, represent the path to the folder /datacenter_01/vm/img%-12 as /datacenter_01/vm/img%25-12.
Certain cmdlets and parameters require escape sequences in entity names. | https://docs.vmware.com/en/VMware-Horizon-7/7.2/com.vmware.view.integration.doc/GUID-7C599FA6-297F-42EB-AC2C-886663C8D27F.html | 2020-11-24T01:49:25 | CC-MAIN-2020-50 | 1606141169606.2 | [] | docs.vmware.com |
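For example, if you build such a path in PowerShell, you can percent-encode only the entity name (a minimal sketch; the folder names are illustrative):

# Escape the entity name, not the slashes in the path itself.
$entityName = [uri]::EscapeDataString("img%-12")   # yields "img%25-12"
$path = "/datacenter_01/vm/$entityName"            # "/datacenter_01/vm/img%25-12"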