MOAB
MOAB is a scheduling engine for HPC centers.
Warning
As adoption of MOAB has significantly declined over the last few years and there are no active users of Waldur with MOAB requirements, we have deprecated and removed the MOAB provider.
Migration from MOAB to SLURM
If you are still running MOAB and would like to move to a supported scheduler - SLURM - please find the suggested articles below. They are provided by external parties with no affiliation to the Waldur team.
- Moab to SLURM migration guide
- Converting Moab/Torque scripts to Slurm
- Converting Moab/Torque script to Slurm #2
- Slurm vs Moab/Torque
Last update: 2021-05-07
$ISVALIDNUM
Synopsis
$ISVALIDNUM(num,scale,min,max)
Parameters
Description
The $ISVALIDNUM function validates num and returns a boolean value. It optionally performs a range check using min and max values. The scale parameter is used during range checking to specify how many fractional digits to compare. A boolean value of 1 means that num is a properly formed number and passes the range check, if one is specified.
$ISVALIDNUM validates American format numbers, which use a period (.) as the decimal separator. It does not validate European format numbers, which use a comma (,) as the decimal separator. $ISVALIDNUM does not consider valid a number that contains numeric group separators; it returns 0 (invalid) for any number containing a comma or a blank space, regardless of the current locale.
Parameters
num
Validation fails ($ISVALIDNUM returns 0) if:
num is the empty string ("").
num contains any characters other than the digits 0–9, a leading + or – sign, a decimal point (.), and a letter “E” or “e”. For scientific notation the uppercase “E” is the standard exponent operator; the lowercase “e” is a configurable exponent operator, using the %SYSTEM.Process.ScientificNotation() method.
num contains more than one + or – sign, decimal point, or letter “E” or “e”.
The optional + or – sign is not the first character of num.
The letter “E” or “e” indicating a base-10 exponent is not followed by an integer in a numeric string.
With the exception of $ISVALIDNUM, specifying a base-10 number with a non-integer exponent in any expression results in a <SYNTAX> error. For example, WRITE 7E3.5.
The scale parameter value causes evaluation using rounded or truncated versions of the num value. The actual value of the num variable is not changed by $ISVALIDNUM processing.
If num is the INF, –INF, or NAN value returned by $DOUBLE, $ISVALIDNUM returns 1.
The largest floating point number that InterSystems IRIS supports is 1.7976931348623157081E308. Specifying a larger number in any InterSystems IRIS numeric operation generates a <MAXNUMBER> error. The largest InterSystems IRIS decimal floating point number specified as a string that $ISVALIDNUM supports is 9.223372036854775807E145. For floating point number strings larger than this, use $ISVALIDDOUBLE. For further details, refer to “Extremely Large Numbers” in the “Data Types and Values” chapter of Using ObjectScript.
scale
The scale parameter is used during range checking to specify how many fractional digits to compare. Specify an integer value for scale; fractional digits in the scale value are ignored. You can specify a scale value larger than the number of fractional digits specified in the other parameters. You can specify a scale value of –1; all other negative scale values result in a <FUNCTION> error.
A nonnegative scale value causes num to be rounded to that number of fractional digits before performing min and max range checking. A scale value of 0 causes num to be rounded to an integer value (3.9 = 4) before performing range checking. A scale value of –1 causes num to be truncated to an integer value (3.9 = 3) before performing range checking. To compare all specified digits without rounding or truncating, omit the scale parameter. A scale value that is nonnumeric or the null string is equivalent to a scale value of 0.
Rounding is performed for all scale values except –1. A value of 5 or greater is always rounded up.
If you omit the scale parameter, retain the comma as a place holder.
When rounding numbers, be aware that IEEE floating point numbers and standard InterSystems IRIS floating point numbers differ in precision. Standard InterSystems IRIS fractional numbers have a precision of 18 decimal digits on all supported InterSystems IRIS system platforms.
min and max
You can specify a minimum allowed value, a maximum allowed value, neither, or both. If specified, the num value (after the scale operation) must be greater than or equal to the min value, and less than or equal to the max value. A null string as a min or max value is equal to zero. If a value does not meet these criteria, $ISVALIDNUM returns 0.
If you omit a parameter, retain the comma as a place holder. For example, when omitting scale and specifying min or max, or when omitting min and specifying max. Trailing commas are ignored.
If the num, min, or max value is a $DOUBLE number, then all three of these numbers are treated as a $DOUBLE number for this range check. This prevents unexpected range errors caused by the small generated fractional part of a $DOUBLE number.
Examples
In the following example, each invocation of $ISVALIDNUM returns 1 (valid number):
WRITE !,$ISVALIDNUM(0)          ; All integers OK
WRITE !,$ISVALIDNUM(4.567)      ; Real numbers OK
WRITE !,$ISVALIDNUM("4.567")    ; Numeric strings OK
WRITE !,$ISVALIDNUM(-.0)        ; Signed numbers OK
WRITE !,$ISVALIDNUM(+004.500)   ; Leading/trailing zeroes OK
WRITE !,$ISVALIDNUM(4E2)        ; Scientific notation OK
In the following example, each invocation of $ISVALIDNUM returns 0 (invalid number):
WRITE !,$ISVALIDNUM("") ; Null string is invalid WRITE !,$ISVALIDNUM("4,567") ; Commas are not permitted WRITE !,$ISVALIDNUM("4A") ; Invalid character
In the following example, each invocation of $ISVALIDNUM returns 1 (valid number), even though INF (infinity) and NAN (Not A Number) are, strictly speaking, not numbers:
DO ##class(%SYSTEM.Process).IEEEError(0)
WRITE !,$ISVALIDNUM($DOUBLE($ZPI))    ; DOUBLE numbers OK
WRITE !,$ISVALIDNUM($DOUBLE("INF"))   ; DOUBLE INF OK
WRITE !,$ISVALIDNUM($DOUBLE("NAN"))   ; DOUBLE NAN OK
WRITE !,$ISVALIDNUM($DOUBLE(1)/0)     ; generated INF OK
The following example shows the use of the min and max parameters. All of the following return 1 (number is valid and also passes the range check):
WRITE !,$ISVALIDNUM(4,,3,5)          ; scale can be omitted
WRITE !,$ISVALIDNUM(4,2,3,5)         ; scale can be larger than
                                     ; number of fractional digits
WRITE !,$ISVALIDNUM(4,0,,5)          ; min or max can be omitted
WRITE !,$ISVALIDNUM(4,0,4,4)         ; min and max are inclusive
WRITE !,$ISVALIDNUM(-4,0,-5,5)       ; negative numbers
WRITE !,$ISVALIDNUM(4.00,2,04,05)    ; leading/trailing zeros
WRITE !,$ISVALIDNUM(.4E3,0,3E2,400)  ; base-10 exponents expanded
The following example shows the use of the scale parameter with min and max. All of the following return 1 (number is valid and also passes the range check):
WRITE !,$ISVALIDNUM(4.55,,4.54,4.551)  ; When scale is omitted, all digits of num are checked.
WRITE !,$ISVALIDNUM(4.1,0,4,4.01)      ; When scale=0, num is rounded to an integer value
                                       ; (0 fractional digits) before min & max check.
WRITE !,$ISVALIDNUM(3.85,1,3.9,5)      ; num is rounded to 1 fractional digit
                                       ; (with values of 5 or greater rounded up)
                                       ; before min check.
WRITE !,$ISVALIDNUM(4.01,17,3,5)       ; scale can be larger than number of fractional digits.
WRITE !,$ISVALIDNUM(3.9,-1,2,3)        ; When scale=-1, num is truncated to an integer value
Notes
$ISVALIDNUM and $ISVALIDDOUBLE Compared
The $ISVALIDNUM and $ISVALIDDOUBLE functions both validate numbers and return a boolean value (0 or 1).
Both functions accept as valid numbers the INF, –INF, and NAN values returned by $DOUBLE. $ISVALIDDOUBLE also accepts as valid numbers the not case-sensitive strings “NAN” and “INF”, as well as the variants “Infinity” and “sNAN”, and any of these strings beginning with a single plus or minus sign. $ISVALIDNUM rejects all of these strings as invalid, and returns 0.
WRITE !,$ISVALIDNUM($DOUBLE("NAN"))     ; returns 1
WRITE !,$ISVALIDDOUBLE($DOUBLE("NAN"))  ; returns 1
WRITE !,$ISVALIDNUM("NAN")              ; returns 0
WRITE !,$ISVALIDDOUBLE("NAN")           ; returns 1
Both functions parse signed and unsigned integers (including –0), scientific notation numbers (with “E” or “e”), real numbers (123.45) and numeric strings (“123.45”).
Neither function recognizes the European DecimalSeparator character (comma (,)) or the NumericGroupSeparator character (American format: comma (,); European format: period (.)). For example, both reject the string “123,456” as an invalid number, regardless of the current locale setting.
Both functions parse multiple leading signs (+ and –) for numbers. Neither accepts multiple leading signs in a quoted numeric string.
If a numeric string is too big to be represented by an InterSystems IRIS floating point number, the default is to automatically convert it to an IEEE double-precision number. However, such large numbers fail the $ISVALIDNUM test, as shown in the following example:
WRITE !,"E127 no IEEE conversion required" WRITE !,$ISVALIDNUM("9223372036854775807E127") WRITE !,$ISVALIDDOUBLE("9223372036854775807E127") WRITE !,"E128 automatic IEEE conversion" WRITE !,$ISVALIDNUM("9223372036854775807E128") WRITE !,$ISVALIDDOUBLE("9223372036854775807E128")
$ISVALIDNUM, $NORMALIZE, and $NUMBER Compared
The $ISVALIDNUM, $NORMALIZE, and $NUMBER functions all validate numbers. $ISVALIDNUM returns a boolean value (0 or 1). $NORMALIZE and $NUMBER return a validated version of the specified number.
These three functions offer different validation criteria. Select the one that best meets your needs.
American format numbers are validated by all three functions. European format numbers are only validated by the $NUMBER function.
All three functions parse signed and unsigned integers (including –0), scientific notation numbers (with “E” or “e”), and numbers with a fractional part. However, $NUMBER can be set (using the “I” format) to reject numbers with a fractional part (including scientific notation with a negative base-10 exponent). All three functions parse both numbers (123.45) and numeric strings (“123.45”).
Leading and trailing zeroes are stripped out by all three functions. The decimal character is stripped out unless followed by a nonzero value.
DecimalSeparator: $NUMBER validates the decimal character (American format: period (.) or European format: comma (,)) based on its format parameter (or the default for the current locale). The other functions only validate American format decimal numbers, regardless of the current locale setting.
NumericGroupSeparator: $NUMBER accepts NumericGroupSeparator characters (in American format: comma (,) or blank space; in European format: period (.) or blank space). It accepts and strips out any number of NumericGroupSeparator characters, regardless of position. For example, in American format it validates “12 3,,4,56.9,9” as the number 123456.99. $NORMALIZE does not recognize NumericGroupSeparator characters. It validates character-by-character until it encounters a nonnumeric character; for example, it validates “123,456.99” as the number 123. $ISVALIDNUM rejects the string “123,456” as an invalid number.
Multiple leading signs (+ and –) are interpreted by all three functions for numbers. However, only $NORMALIZE accepts multiple leading signs in a quoted numeric string.
Trailing + and – signs: All of the three functions reject trailing signs in numbers. However, in a quoted numeric string $NUMBER parses one (and only one) trailing sign, $NORMALIZE parses multiple trailing signs, and $ISVALIDNUM rejects any string containing a trailing sign as an invalid number.
Parentheses: $NUMBER parses parentheses surrounding an unsigned number in a quoted string as indicating a negative number. $NORMALIZE and $ISVALIDNUM reject parentheses.
Numeric strings containing multiple decimal characters: $NORMALIZE validates character-by-character until it encounters the second decimal character. For example, in American format it validates “123.4.56” as the number 123.4. $NUMBER and $ISVALIDNUM reject any string containing more than one decimal character as an invalid number.
Numeric strings containing other nonnumeric characters: $NORMALIZE validates character-by-character until it encounters an alphabetic character. It validates “123A456” as the number 123. $NUMBER and $ISVALIDNUM validate the entire string; they reject “123A456” as an invalid number.
The null string: $NORMALIZE parses the null string as zero (0). $NUMBER and $ISVALIDNUM reject the null string.
The $ISVALIDNUM and $NUMBER functions provide optional min/max range checking.
$ISVALIDNUM, $NORMALIZE, and $NUMBER all provide rounding of numbers to a specified number of fractional digits. $ISVALIDNUM and $NORMALIZE can round fractional digits, and round or truncate a number with a fractional part to return an integer. For example, $NORMALIZE can round 488.65 to 488.7 or 489, or truncate it to 488. $NUMBER can round both fractional digits and integer digits. For example, $NUMBER can round 488.65 to 488.7, 489, 490 or 500.
See Also
$ISVALIDDOUBLE function
$NORMALIZE function
More information on locales in the section on “System Classes for National Language Support” in Specialized System Tools and Utilities.
Crate version_check
Examples
Set a cfg flag in build.rs if the running compiler was determined to be at least version 1.13.0:
extern crate version_check as rustc;

if rustc::is_min_version("1.13.0").unwrap_or(false) {
    println!("cargo:rustc-cfg=question_mark_operator");
}
See is_max_version or is_exact_version to check if the compiler is at most or exactly a certain version.
Check that the running compiler was released on or after 2018-12-18:
extern crate version_check as rustc;

match rustc::is_min_date("2018-12-18") {
    Some(true) => "Yep! It's recent!",
    Some(false) => "No, it's older.",
    None => "Couldn't determine the rustc version."
};
See is_max_date or is_exact_date to check if the compiler was released prior to or exactly on a certain date.
Check that the running compiler supports feature flags:
extern crate version_check as rustc;

match rustc::is_feature_flaggable() {
    Some(true) => "Yes! It's a dev or nightly release!",
    Some(false) => "No, it's stable or beta.",
    None => "Couldn't determine the rustc version."
};
Check that the running compiler is on the stable channel:
extern crate version_check as rustc;

match rustc::Channel::read() {
    Some(c) if c.is_stable() => format!("Yes! It's stable."),
    Some(c) => format!("No, the channel {} is not stable.", c),
    None => format!("Couldn't determine the rustc version.")
};
To interact with the version, release date, and release channel as structs, use Version, Date, and Channel, respectively. The triple() function returns all three values efficiently.
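For example, a sketch that retrieves all three values at once; this assumes, per the crate's API, that triple() returns an Option of (Version, Channel, Date) and that each of those types implements Display:

extern crate version_check as rustc;

match rustc::triple() {
    Some((version, channel, date)) => {
        format!("rustc {} on the {} channel, released {}", version, channel, date)
    }
    None => format!("Couldn't determine the rustc version."),
};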
Alternatives
This crate is dead simple with no dependencies. If you need something more and don’t care about panicking if the version cannot be obtained, or if you don’t mind adding dependencies, see rustc_version.
The order service provides functionality for managing the sales orders, sales credits, purchase orders and purchase credits in your system.
With this service you can create and modify the resources listed below. The primary resource managed by this service is the order which represents each type of order as a uniform resource distinguished by the orderTypeCode. This is a very low level resource and requires multiple calls to create simple orders. For order types that have more specialized resources available, integrators should prefer to use those.
Integrations creating sales orders should prefer the sales order resource which has the benefit of being atomic: the order will be fully created or not created. With one call, the integration can create a complete sales order with all order rows attached and a resulting sales-order.created resource event will fire. Reception of this event (via a webhook) means that the sales order is fully created and available.
Sales orders created as sales order resources can still be manipulated using the order resource endpoints after creation. Changes to rows and status, for example, still apply to sales orders.
(Alexander Hamilton)
If one wonders why a biography of Alexander Hamilton did not stir Lin-Manuel Miranda to write his Broadway musical before Ron Chernow’s bestselling work all you have to do is look at Willard Sterne Randall’s ALEXANDER HAMILTON: A LIFE written a year before Chernow’s monograph. Randall’s effort is a clear narrative written by a traditional historian that lacks many of the details, insights on a personal level, and coverage of the most important aspects of Hamilton’s extraordinary life that Chernow presents. Randall, who has written biographies of George Washington and Thomas Jefferson before tackling his present subject seems most concerned with who was right about America, Jefferson, or Hamilton. He concludes that Jefferson was correct for the 18th century, Hamilton, for more modern times. Randall’s study is reliable and readable and mostly rests on primary materials.
Other than the depth of coverage that Randall provides, my major criticism is how he attributes his material to sources. His chapter endnotes are not complete and he makes it very difficult to ascertain where he gets his material. There are too many examples of attributions like “One family historian recently observed,” “As one historian put it,” or “One historian’s description,” which is annoying and not the way most historians present their sources.
(Hercules Mulligan)
In terms of Hamilton’s private life, Randall seems certain that Hamilton and his sister-in-law, Angelica Schuyler Church, were lovers. His writing is crisp, but in terms of Hamilton family relations it is very speculative, particularly the description of Elizabeth Hamilton and her relationship with her husband. In other areas Randall is on firmer ground. His discussion of Hamilton’s early years, where he was fueled by the writings of John Locke and accepted the ideas of “free will” as opposed to Calvinist dogma, is excellent. Randall concentrates on a number of individuals that have not been detailed by most historians. The individual that most comes to mind is Hercules Mulligan, a merchant who initially served as Hamilton’s guardian when he arrived from the Caribbean. Later, Mulligan would become a valuable spy against the British in New York during the American Revolution, as well as becoming a peer of Hamilton and one of his most important confidants. Randall will also spend a great deal of time with the back and forth between Samuel Seabury’s “True Thoughts on the Proceedings of the Continental Congress” and Hamilton’s “A Full Vindication,” which is important because it juxtaposes the loyalist and anti-loyalist positions vis-à-vis the British, and the formulation of Hamilton’s basic political and economic philosophy.
(George Washington)
Important areas that Randall reviews include the Washington-Hamilton relationship, where one can see how mutually dependent each would become on the other through the revolution and leading up to Washington’s presidency. The machinations surrounding General Horatio Gates’ attempts to replace Washington during the revolution and actions taken by the general and his supporters after the revolution also receive important coverage. Randall will dissect the needs of the Continental Army and spares no criticism in his comments on the incompetence of a number of members of the Continental Congress. Randall stresses the importance of Hamilton’s relationship with John Laurens and the Marquis de Lafayette, particularly as it affected his actions during the revolution, and importantly, develops the ideological abyss that consumes Hamilton’s relationship with James Madison, especially after the Constitutional Convention.
(Angelica Schuyler Church)
As opposed to other authors, Randall does not provide a great deal of detail about Hamilton’s private life and career after Washington becomes president. The majority of the book deals with Hamilton’s early life, the revolution, and the period leading up to and including the writing of the constitution. Randall analyzes issues like the assumption of debt, the National Bank, and the need for public credit in detail. Further, he explores the foreign policy implications of Hamilton’s domestic economic agenda, but does not develop the ideological and personal contradictions with Thomas Jefferson fully. The relationship with Aaron Burr also does not receive the attention that it warrants because that relationship spanned Hamilton’s entire career.
To enhance the monograph Randall should have balanced Hamilton’s career and influence on historical events more evenly and not given short shrift to the Washington presidency where he served as Secretary of the Treasury and the events that occurred following his retirement from office. Randall has written a useful biography of Hamilton, but in no way does it approach the level of Ron Chernow’s later effort.
C: the Online Bookstore sample and an article about coding techniques and programming practices. Fast forward through Visual Studio .NET 2002, Visual Studio .NET 2003, Visual Studio 2005, Visual Studio 2008, Visual Studio 2010, Visual Studio 2012, Visual Studio 2013, and now Visual Studio 2015 Preview and you end up where we are today. When I joined Microsoft, .NET 1.0 was in development and wasn't even public - now it's open source.
Late November is when many Americans celebrate Thanksgiving and give thanks for what they have. I'm thankful for my time at Microsoft and the many awesome and interesting people who I worked and traveled with over the years. When she was little, my daughter, Stephanie, used to draw on my whiteboard (at least the part she could reach), and now she's a college graduate and engaged to be married. I'm thankful for what my career and benefits at Microsoft have afforded her. And most of all, I'm thankful to my wife, Nicole, for putting up with the countless times I worked weekends and holidays, brought a work laptop on vacation, missed dinner with the family, and pulled all-nighters readying for another product launch.
Walt Disney is quoted as saying, “I only hope that we don't lose sight of one thing - that it was all started by a mouse.” Let’s not lose sight of one thing – that Micro Soft was all started by a developer tool.
The future disappears into memory, With only a moment between, Forever dwells in that moment, Hope is what remains to be seen.
Submitting Jobs¶
Overview¶
The easiest and most common way to submit render jobs to Deadline is via our many submission scripts, which are written for each rendering application it supports. After you have submitted your job, you can monitor progress using the Monitor. See the Monitoring Jobs documentation for more information.
If you would like more control over the submission process, or would like to submit arbitrary command line jobs to Deadline, see the Manual Job Submission documentation for more information.
Integrated Submission Scripts¶
Where possible, we have created integrated submission scripts that allow you to submit jobs directly from the application you’re working with. These scripts are convenient because you don’t have to launch a separate application to submit the job. In addition, these scripts often provide more submission options because they have direct access to the scene or project file you are submitting.
See the Plug-ins documentation for more information on how to set up the integrated submission scripts (where applicable) and submit jobs for specific applications.
Monitor Submission Scripts¶
In cases where an application does not have an integrated submission script, you can submit the jobs from the Submit menu in the Monitor. See the Plug-ins documentation for more information on how to submit jobs for specific applications.
You can also create your own submission scripts for the Monitor. Check out the Monitor Scripting documentation for more details.
Common Job Submission Options¶
There are many job options that can be specified on submission. A lot of these options are general job properties that aren’t specific to the application you’re rendering with. Some of these options are described below; many other options are specific to the application that you’re rendering with, and those are covered in each application’s plug-in guide, which can be found in the Plug-ins documentation.
Task Timeout and Auto Task Timeout
The maximum amount of time a Worker has to render a task for this job before an error is reported, and the task is requeued. Specify 0 for no limit.
If the Auto Task Timeout is properly configured in the Repository Options, then enabling the Auto Task Timeout option will allow a task timeout to be automatically calculated, which is based on the render times of previous frames for the job.
Concurrent Tasks and Limiting Tasks To A Worker’s Task Limit
The number of tasks that can render concurrently on a single Worker. This is useful if the rendering application only uses one thread to render and your Workers have multiple CPUs. Caution should be used when using this feature if your renders require a large amount of RAM.
If you limit the tasks to a Worker’s task limit, then by default, the Worker won’t dequeue more tasks than it has CPUs. This task limit can be overridden for individual Workers by an administrator. See the Worker Settings documentation for more information.
Machine Limit and Machine Allow List/Deny List
Use the Machine Limit to specify the maximum number of Workers that can render your job at one time. Specify 0 for no limit. You can also force the job to render on specific Workers by using an allow list, or you can avoid specific Workers by using a deny list. See the Limit Documentation for more information.
Limits
The limits that your job must adhere to. See the Limit Documentation for more information.
Dependencies
This job will not start until the specified existing job finishes.
Scene/Project/Data File (if applicable)
The file path to the Scene/Project/Data File to be processed/rendered as the job. The file needs to be in a shared location so that the Worker machines can find it when they go to render it directly. See Submit Scene/Project File with Job below for a further option.
Note all external asset/file paths referenced by the Scene/Project/Data File should be resolvable by your Worker machines on your network.
Submit Scene/Project File With Job
If this option is enabled, the scene or project file you want to render will be submitted with the job, and then copied locally to the Worker machine during rendering. The benefit to this is that you have a copy of the file in the state that it was in when it was submitted. However, if your scene or project file uses relative asset paths, enabling this option can cause the render to fail when the asset paths can’t be resolved.
Note, only the Scene/Project File is submitted with the job and ALL external/asset files referenced by the Scene/Project File are still required by the Worker machines.
If this option is disabled, the file needs to be in a shared location so that the Worker machines can find it when they go to render it directly. Leaving this option disabled is required if the file has references (footage, textures, caches, etc) that exist in a relative location.
Note if you modify the original file, it will affect the render job.
Draft and Integration Submission Options¶
The majority of the submission scripts that ship with Deadline have Integration options to connect to Shotgun, FTrack, NIM, and/or use Draft to perform post-rendering compositing operations. The Integration and Draft job options are essentially the same in every submission script, and more information can be found in their respective documentation:
Jigsaw¶
Jigsaw is a flexible multi-region rendering system for Deadline. This is available for 3ds Max, Maya, Houdini, modo, and Rhino. It can be used to render regions of various sizes for a single frame, and in 3ds Max and Maya, it can be used to track and render specific objects over an animation.
Draft can then be used to automatically assemble the regions into the final frame or frames. It can also be used to automatically composite re-rendered regions onto the original frame. Note, whilst Jigsaw is an unlicensed feature, Draft Tile Assembler is a licensed product, which is free for users with an active Deadline annual support and maintenance contract.
Jigsaw is built into the 3ds Max, Maya, Houdini, modo, and Rhino submitters. With the exception of 3ds Max, the Jigsaw viewport will be displayed in a separate window.
The viewport can be used to create and manipulate regions. Regions are then submitted to Deadline to render. The available options are listed below:
General Options
These options are always available:
Add Region: Adds a new region.
Delete All: Deletes all the current regions.
Create From Grid: Creates a grid of regions to cover the full viewport. The X value controls the number of columns and the Y value controls the number of rows.
Fill Regions: Automatically creates new regions to fill the parts of the viewport that are not currently covered by a region.
Clean Regions: Deletes any regions that are fully contained within another region.
Undo: Undo the last change made to the regions.
Redo: Redo the last change that was previously undone.
Selected Regions Options
These options are only available when a single region is selected:
Clone: Creates a duplicate region parallel to the selected region in the specified direction.
Lock Postion: If enabled, the region will be locked to its current position.
Enable Region: If disabled, the region will be ignored when submitting the job.
X Position: The horizontal position of the selected region, taken from the left.
Y Position: The vertical position of the selected region, taken from the top.
Width: The width of the selected region.
Height: The height of the selected region.
These options are only available when one or more regions are selected.
Delete: Deletes the selected regions.
Split: Splits the selected regions into sub-regions based on the Tiles In X and Tyles In Y settings.
These options are only available when multiple regions are selected.
Merge: Combines the selected regions into a single region that covers the full area of the selected regions.
Zoom Options
These zoom options are always available in the Jigsaw window.
Maya Options
These options are currently only available for Maya:
Reset Background: Gets the current viewport image from Maya.
Fit Selection: Create regions surrounding the selected items in the Maya scene.
Mode: The type of regions to be used when fitting the selected items. The options are Tight (fitting the minimum 2D bounding box of the points) and Loose (fitting the minimum 2D bounding box of the bounding box of the object).
Padding: The amount of padding to add when fitting the selection (this is a percentage value that is added in each direction).
Save Regions: Saves the information in the regions directly into the Maya scene.
Load Regions: Loads the saved regions information from the Maya scene.
Frame List Formatting Options¶
During job submission, you usually have the option to specify the frame list you want to render, which often involves manually typing the frame list into a text box. In this case, you can make use of the following frame list formatting options.
Using a step frame also works for reverse frame sequences:
100-1x5
100:1:5
Advanced Frame Lists
Individual frames for the same job are never repeated when creating tasks for a job, which allows you to get creative with your frame lists without worrying about rendering the same frame more than once. Note that a job’s frame range can be modified after a job has been submitted to Deadline by right-clicking on the job and selecting “Modify Frame Range…”.
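For reference, here are a few typical frame list strings; treat these as illustrative examples rather than an exhaustive list of the formats Deadline accepts:

1-100        frames 1 through 100
1-100x5      frames 1 through 100, every 5th frame
1,3,7-10     an explicit list combined with a range
100-1x5      frames 100 down to 1, stepping by 5 (as in the example above)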
Internationalisation¶
This document describes the internationalisation features of Wagtail and how to create multi-lingual sites.
Wagtail uses Django’s Internationalisation framework so most of the steps are the same as other Django projects.
Contents
- Internationalisation
- Wagtail admin translations
- Change Wagtail admin language on a per user basis
- Changing the primary language of your Wagtail installation
- Creating sites with multiple languages.
Creating sites with multiple languages¶
You can create sites with multiple language support by leveraging Django’s translation features.
This section of the documentation will show you how to use Django’s translation features with Wagtail and also describe a couple of methods for storing/retrieving translated content using Wagtail pages.
Enabling multiple language support¶
Firstly, make sure the USE_I18N Django setting is set to True.
To enable multi-language support, add django.middleware.locale.LocaleMiddleware to your MIDDLEWARE:
MIDDLEWARE = (
    ...
    'django.middleware.locale.LocaleMiddleware',
)
This middleware class looks at the user’s browser language and sets the language of the site accordingly.
Serving different languages from different URLs¶
Just enabling the multi-language support in Django sometimes may not be enough. By default, Django will serve different languages of the same page with the same URL. This has a couple of drawbacks:
- Users cannot change language without changing their browser settings
- It may not work well with various caching setups (as content varies based on browser settings)
Django’s i18n_patterns feature, when enabled, prefixes the URLs with the language code (eg /en/about-us). Users are forwarded to their preferred version, based on browser language, when they first visit the site.
This feature is enabled through the project’s root URL configuration. Just put the views you would like to have this enabled for in an i18n_patterns list and append that to the other URL patterns:
# mysite/urls.py

from django.conf.urls import include, re_path
from django.conf.urls.i18n import i18n_patterns
from django.conf import settings
from django.contrib import admin

from wagtail.admin import urls as wagtailadmin_urls
from wagtail.documents import urls as wagtaildocs_urls
from wagtail.core import urls as wagtail_urls

from search import views as search_views

urlpatterns = [
    re_path(r'^django-admin/', include(admin.site.urls)),

    re_path(r'^admin/', include(wagtailadmin_urls)),
    re_path(r'^documents/', include(wagtaildocs_urls)),
]

urlpatterns += i18n_patterns(
    # These URLs will have /<language_code>/ appended to the beginning

    re_path(r'^search/$', search_views.search, name='search'),

    re_path(r'', include(wagtail_urls)),
)
You can implement switching between languages by changing the part at the beginning of the URL. As each language has its own URL, it also works well with just about any caching setup.
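A minimal way to surface this to visitors is a template snippet that links to the root of each language prefix, using Django's built-in i18n template tags. This is only a sketch: it links to each language's home page rather than to the translated version of the current page.

{% load i18n %}

<nav class="language-switcher">
    {% get_available_languages as LANGUAGES %}
    {% get_language_info_list for LANGUAGES as languages %}
    {% for lang in languages %}
        {# Each language lives under its own /<language_code>/ prefix #}
        <a href="/{{ lang.code }}/">{{ lang.name_local }}</a>
    {% endfor %}
</nav>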
Translating templates¶
Static text in templates needs to be marked up in a way that allows Django’s makemessages command to find and export the strings for translators, and also allows the strings to be switched to translated versions when the template is being served.
As Wagtail uses Django’s templates, inserting this markup and the workflow for exporting and translating the strings is the same as any other Django project.
See:
Translating content¶
The most common approach for translating content in Wagtail is to duplicate each translatable text field, providing a separate field for each language.
This section will describe how to implement this method manually but there is a third party module you can use, wagtail modeltranslation, which may be quicker if it meets your needs.
Duplicating the fields in your model
For each field you would like to be translatable, duplicate it for every language you support and suffix it with the language code:
class BlogPage(Page):
    title_fr = models.CharField(max_length=255)

    body_en = StreamField(...)
    body_fr = StreamField(...)

    # Language-independent fields don't need to be duplicated
    thumbnail_image = models.ForeignKey('wagtailimages.Image', on_delete=models.SET_NULL, null=True, ...)
Note
We only define the French version of the title field as Wagtail already provides the English version for us.
Organising the fields in the admin interface
You can either put all the fields with their translations next to each other on the “content” tab or put the translations for other languages on different tabs.
See Customising the tabbed interface for information on how to add more tabs to the admin interface.
Accessing the fields from the template
In order for the translations to be shown on the site frontend, the correct field needs to be used in the template based on what language the client has selected.
Having to add language checks every time you display a field in a template, could make your templates very messy. Here’s a little trick that will allow you to implement this while keeping your templates and model code clean.
You can use a snippet like the following to add accessor fields on to your page model. These accessor fields will point at the field that contains the language the user has selected.
Copy this into your project and make sure it’s imported in any models.py files that contain a Page with translated fields. It will require some modification to support different languages.
from django.utils import translation


class TranslatedField:
    def __init__(self, en_field, fr_field):
        self.en_field = en_field
        self.fr_field = fr_field

    def __get__(self, instance, owner):
        if translation.get_language() == 'fr':
            return getattr(instance, self.fr_field)
        else:
            return getattr(instance, self.en_field)
Then, for each translated field, create an instance of TranslatedField with a nice name (as this is the name your templates will reference).
For example, here’s how we would apply this to the above BlogPage model:
class BlogPage(Page):
    ...

    translated_title = TranslatedField(
        'title',
        'title_fr',
    )

    body = TranslatedField(
        'body_en',
        'body_fr',
    )
Finally, in the template, reference the accessors instead of the underlying database fields:
{{ page.translated_title }}
{{ page.body }}
Log.
Direct logins and file transfers from off-campus are not allowed. Please see our Off-Campus Access FAQ for details on connecting from off-campus.
The OS X operating system on the Mac already has SSH as part of its default installation and will work the same way as described above from a terminal window. A guide to setting up SSH on Windows can be found here. The Windows version of SSH is a graphical program, not command line oriented like the examples above. It contains a Secure Shell Client for logins and a Secure Shell File Transfer program which is a substitute for scp.
Source code for flake8.main.cli
"""Command-line implementation of flake8.""" from typing import List, Optional from flake8.main import application[docs]def main(argv=None): # type: (Optional[List[str]]) -> None """Execute the main bit of the application. This handles the creation of an instance of :class:`Application`, runs it, and then exits the application. :param list argv: The arguments to be passed to the application for parsing. """ app = application.Application() app.run(argv) app.exit() | https://flake8.readthedocs.io/en/latest/_modules/flake8/main/cli.html | 2019-07-15T22:24:34 | CC-MAIN-2019-30 | 1563195524254.28 | [] | flake8.readthedocs.io |
The "No software" platform
The meteoric rise of Salesforce has provided us with a platform that lives up the No Software promise. We no longer need to deal with legacy on-premise software with large up-front costs and instead have a platform that can be easily migrated to and grows with the needs of your business and organisation.
There was a mistaken belief that No Software meant there would be no need for a solid software development lifecycle but Salesforce, like all software that powers our organisations, requires rigorous change management to ensure that changes are effectively communicated from the business to the implementation team, configured and built correctly, suitably verified and tested to ensure it matches what the business actually wanted, and finally delivered to production in a way that delivers the benefits to the end users.
This process of managing change falls under the remit of Salesforce release management. So what is DevOps?
DevOps for Salesforce is different
Salesforce itself removes a lot of the complexity of other DevOps processes. Managing infrastructure, scalability, hosting, even tests - traditionally the responsibility of ops, and more recently DevOps - is all handled by the platform itself.
Things are changing in the Salesforce world, in part through the introduction of Salesforce DX, but Salesforce teams are generally behind the industry curve when it comes to adopting more robust DevOps practises.
The lure of “best practice”
When we speak with Salesforce teams, especially those struggling with increasing complexity, there is often a desire to adopt a “best practice” approach to DevOps and their Software Development Lifecycle.
The most important thing to understand when defining how your team works and what their software development lifecycle looks like is what issues you’re struggling with and how the process you’re adopting is going to address those issues. We believe there is a spectrum of best practice and you should implement the parts that are going to deliver the best immediate return on investment, whilst still laying a foundation to build upon.
There is no single correct answer that fits all. Hopefully these guides will provide you with the information you need to understand what DevOps could look like for your company.
analyzedb.
- For append optimized tables, analyzedb updates statistics incrementally, if the statistics are not current. For example, if table data is changed after statistics were collected for the table. If there are no statistics for the table, statistics are collected.
- For heap tables, statistics are always updated.
analyzedb does not automatically remove old snapshot information. Over time, the snapshots can consume a large amount of disk space. To recover disk space, you can keep the most recent snapshot information and remove the older snapshots.
The following example specifies a statement_mem value for the ANALYZE commands. This example sets the statement_mem value to 250 MB.
analyzedb -d mytest -t public.foo -m 250MB
CREATE LANGUAGE plpythonu;
Custom formatting of date and time columns
If the format you want to use cannot be created with the given settings, the custom format string allows you to create your own formats using a code explained in the examples below.
Examples: Below are some examples of custom format strings for datetime formats. For more information, see literature about custom DateTime format strings, such as that on MSDN.
Note: If you want to use any of the custom date and time format specifiers alone in a format string (for example, to use the "d", "h", or "M" specifier by itself), you must either add a space before or after the specifier, or include a percent sign ("%") before the single custom date and time specifier, to avoid it being interpreted as a standard format string.
You can also add any custom string value, but if any of the specifier characters are included in the string, they need to be escaped by a backslash.
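For illustration, a few custom format strings built from the standard .NET specifiers referenced above; the sample outputs assume the value 2019-07-15 22:39 and an English (US) locale, so treat the exact rendering as an assumption:

yyyy-MM-dd          ->  2019-07-15
dd-MMM-yyyy HH:mm   ->  15-Jul-2019 22:39
hh:mm tt            ->  10:39 PM
%d                  ->  15   (a single specifier, escaped with a percent sign)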
Modem Call Control
The functions of the Modem Call Control (mcc) API are provided by the modemService service.
IPC interfaces binding
Here's a code sample binding to modem services:
bindings:
{
    clientExe.clientComponent.le_mcc -> modemService.le_mcc
}
Starting a Call
To initiate a call, create a new call object with a destination telephone number by calling the le_mcc_Create() function.
le_mcc_Start() must still initiate the call when ready.
The le_mcc_Start() function initiates a call attempt (it's asynchronous because it can take time for a call to connect). If the function fails, the le_mcc_GetTerminationReason() API can be used to retrieve the termination reason.
It's essential to register a handler function to get the call events. Use le_mcc_AddCallEventHandler() API to install that handler function. The handler will be called for all calls' events (incoming and outgoing).
The le_mcc_RemoveCallEventHandler() API uninstalls the handler function.
The following APIs can be used to manage incoming or outgoing calls:
- le_mcc_GetTerminationReason() - termination reason.
- le_mcc_GetPlatformSpecificTerminationCode() - let you get the platform specific termination code by retrieving the termination code from
le_mcc_CallRef_t. Please refer to Platform specific error codes for platform specific termination code description.
- le_mcc_IsConnected() - connection status.
- le_mcc_GetRemoteTel() - displays remote party telephone number associated with the call.
- le_mcc_HangUp() will disconnect this call.
When finished with the call object, call le_mcc_Delete() to free all the allocated resources associated with the object.
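To tie these calls together, here is a minimal sketch of an outgoing call. The handler signature, the event names (LE_MCC_EVENT_CONNECTED, LE_MCC_EVENT_TERMINATED) and the phone number are assumptions to check against the le_mcc API reference for your Legato version:

#include "legato.h"
#include "interfaces.h"

// Called for every call event (incoming and outgoing).
static void MyCallEventHandler
(
    le_mcc_CallRef_t callRef,
    le_mcc_Event_t   callEvent,
    void*            contextPtr
)
{
    if (callEvent == LE_MCC_EVENT_CONNECTED)
    {
        LE_INFO("Call connected.");
    }
    else if (callEvent == LE_MCC_EVENT_TERMINATED)
    {
        LE_INFO("Call ended, reason %d.", (int)le_mcc_GetTerminationReason(callRef));
        le_mcc_Delete(callRef);   // Free the call object once we're done with it.
    }
}

COMPONENT_INIT
{
    // Register one handler for all call events before dialing.
    le_mcc_AddCallEventHandler(MyCallEventHandler, NULL);

    // Create a call object for the destination number (placeholder) and dial.
    le_mcc_CallRef_t callRef = le_mcc_Create("+18005550199");
    if (le_mcc_Start(callRef) != LE_OK)
    {
        LE_ERROR("Call attempt failed, reason %d.",
                 (int)le_mcc_GetTerminationReason(callRef));
    }
}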
Multi-threading/multi-application behaviour: the callRef is linked to a specific client (i.e. connection with the mcc service). Each client will have its own callRef for a same call. That is, if a call event handler is registered by one thread but le_mcc_Create() is called by another thread, they will each get different call references for the same call. So, when multiple threads are being used to work with the same call, a comparison of the call references themselves can't be used to tell whether or not they refer to the same call.
The Adaptive Multi Rate (AMR) is an audio compression format optimized for speech coding and used during a voice call. Two AMRs are supported: An AMR Narrowband that encodes a bandwidth of 200--3400 Hz signals at variable bitrates ranging from 4.75 to 12.2 kbit/s and an AMR Wideband that encodes a wider bandwidth of 50--7000 Hz and thus improves the speech quality.
le_mcc_SetAmrWbCapability() function enables or disables the AMR Wideband supported capability. le_mcc_GetAmrWbCapability() function returns the AMR Wideband capability status.
Answering a call
Receiving calls is similar to sending calls. Add a handler through le_mcc_AddCallEventHandler() to be notified of incoming calls.
To answer, call le_mcc_Answer(). To reject it, call le_mcc_HangUp().
Ending all calls
A special function can be used to hang-up all the ongoing calls: le_mcc_HangUpAll(). This function can be used to hang-up any calls that have been initiated through another client like AT commands.
Supplementary service
Calling Line Identification Restriction (CLIR) can be activated or deactivated by le_mcc_SetCallerIdRestrict() API. The status is independent for each call object reference. Status can be read with the le_mcc_GetCallerIdRestrict() API. If the status is not set, le_mcc_GetCallerIdRestrict() API returns LE_UNAVAILABLE. By default the CLIR status is not set.
Call waiting supplementary service can be activated or deactivated by le_mcc_SetCallWaitingService(). Its status can be given by le_mcc_GetCallWaitingService(). A call waiting can be answered using le_mcc_ActivateCall() API. This API is also used to activate an on hold call (current call is then placed on hold). An activated, waiting or on hold call can be released using le_mcc_HangUp() function. See 3GPP TS 02.83 / 22.083 for any details concerning call waiting / call hold supplementary services.
Sample codes
A sample code that implements a dialing call can be found in le_mccTest.c file (please refer to Sample code of Modem Call control page).
Installation Prerequisites
The Splunk ODBC Driver is currently available for Windows only.
You must install the Splunk ODBC Driver on the Windows PC that runs the application you want to use it with (Excel or Tableau).
Splunk Enterprise computer:
- Splunk Enterprise 5 or later running on any supported operating system
Client computer (running Splunk ODBC Driver):
- Windows 7 or later (including Windows Server 2008 R2 and later Windows Server versions)
- Microsoft Excel 2010 or later, or Tableau Professional or Server
- Microsoft Visual C++ 2010 Redistributable Package, downloadable from. (If you use the Splunk ODBC Driver with a 64-bit app such as the 64-bit edition of Microsoft Excel, install the x64 redistributable package from.)
Important: The Splunk Enterprise computer and the client computer can be the same computer.
This documentation applies to the following versions of Splunk® ODBC Driver: 1.0, 1.0.1
When do you expect to support an OS other than Windows?
The Splunk ODBC Driver is supported on Windows Server 2008 R2 and later Windows Server versions.
Whether this can be installed on Windows Server 2008 R2? Which is Splunk Enterprise computer and the client computer.
Hello Jordan, We do have plans to support non-Windows operating systems, though the timing of the support has not been finalized. If you could share which OS you're working with, and which application (Excel? Tableau?), that would be a great help to us!
Application blocking allows you to enable or block applications from launching:
- Path-based. You can specify a path to a folder. Or, you can specify a fully qualified file name (the configured path includes the full path and file name of the executable).
- Hash-based. You can specify to allow or block based on a hash that matches a particular executable.
- Publisher-based. You can specify a publisher to allow, and executables associated with that publisher can launch. You cannot block applications by publisher.Note: If you configure multiple types of application blocking, it is important to understand the order in which they are evaluated. For more details, see Work with Multiple Types of Application Blocking.
Application blocking is not enabled on User Environment Manager endpoints that use the SyncTool.
5. Working with content¶
In this chapter, we demonstrate features of Swarm related to storage and retrieval. First we discuss how to solve mutability of resources in a content addressed system using the Ethereum Name Service on the blockchain, then using Feeds in Swarm. Then we briefly discuss how to protect your data by restricting access using encryption. We also discuss in detail how files can be organised into collections using manifests and how this allows virtual hosting of websites. We then cover another form of interaction with Swarm, namely mounting a Swarm manifest as a local directory using FUSE. We conclude by summarizing the various URL schemes that provide simple HTTP endpoints for clients to interact with Swarm.
5.1. Using ENS names¶
Note
In order to resolve ENS names, your Swarm node has to be connected to an Ethereum blockchain (mainnet, or testnet). See Getting Started for instructions. This section explains how you can register your content to your ENS name.
ENS is the system that Swarm uses to permit content to be referred to by a human-readable name, such as “theswarm.eth”. It operates analogously to the DNS system, translating human-readable names into machine identifiers - in this case, the Swarm hash of the content you’re referring to. By registering a name and setting it to resolve to the content hash of the root manifest of your site, users can access your site via a URL such as bzz://theswarm.eth/.
Note
Currently the bzz scheme is not supported in major browsers such as Chrome, Firefox or Safari. If you want to access the bzz scheme through these browsers, you currently have to either use an HTTP gateway or use a browser which supports the bzz scheme, such as Mist.
Suppose we upload a directory to Swarm containing (among other things) the file example.pdf.
$ swarm --recursive up /path/to/dir
2477cc8584cc61091b5cc084cdcdb45bf3c6210c263b0143f030cf7d750e894d
If we register the root hash as the content for theswarm.eth, then we can access the pdf at bzz://theswarm.eth/example.pdf if we are using a Swarm-enabled browser, or through a local HTTP gateway. Either way, we will be served the same content as when requesting it by its root hash directly.
Please refer to the official ENS documentation for the full details on how to register content hashes to ENS.
In short, the steps you must take are:
- Register an ENS name.
- Associate a resolver with that name.
- Register the Swarm hash with the resolver as the content.
We recommend using the ENS manager web interface. This will make it easy for you to:
- Associate the default resolver with your name
- Register a Swarm hash.
Note
When you register a Swarm hash this way, you MUST prefix the hash with 0x. For example 0x2477cc8584cc61091b5cc084cdcdb45bf3c6210c263b0143f030cf7d750e894d
5.2. Feeds¶
Note
Feeds, previously known as Mutable Resource Updates, is an experimental feature, available since Swarm POC3. It is under active development, so expect things to change.
Since Swarm hashes are content addressed, changes to data will constantly result in changing hashes. Swarm Feeds provide a way to easily overcome this problem and provide a single, persistent, identifier to follow sequential data.
The usual way of keeping the same pointer to changing data is using the Ethereum Name Service (ENS). However, since ENS is an on-chain feature, it might not be suitable for each use case since:
- Every update to an ENS resolver will cost gas to execute
- It is not possible to change the data faster than the rate at which new blocks are mined
- ENS resolution requires your node to be synced to the blockchain
Swarm Feeds provide a way to have a persistent identifier for changing data without having to use ENS. It is named Feeds for its similarity with a news feed.
If you are using Feeds in conjunction with an ENS resolver contract, only one initial transaction to register the “Feed manifest address” will be necessary. This key will resolve to the latest version of the Feed (updating the Feed will not change the key).
You can think of a Feed as a user’s Twitter account, where he/she posts updates about a particular Topic. In fact, the Feed object is simply defined as:
type Feed struct {
    Topic Topic
    User  common.Address
}
That is, a specific user posting updates about a specific Topic.
Users can post to any topic. If you know the user’s address and agree on a particular Topic, you can then effectively “follow” that user’s Feed.
Important
How you build the Topic is entirely up to your application. You could calculate a hash of something and use that, the recommendation is that it should be easy to derive out of information that is accesible to other users.
For convenience,
feed.NewTopic() provides a way to “merge” a byte array with a string in order to build a Feed Topic out of both.
This is used at the API level to create the illusion of subtopics. This way of building topics allows using a random byte array (for example the hash of a photo)
and merge it with a human-readable string such as “comments” in order to create a Topic that could represent the comments about that particular photo.
This way, when you see a picture in a website you could immediately build a Topic out of it and see if some user posted comments about that photo.
Feeds are not created, only updated. If a particular Feed (user, topic combination) has never posted to, trying to fetch updates will yield nothing.
5.2.1. Feed Manifests¶
A Feed Manifest is simply a JSON object that contains the
Topic and
User of a particular Feed (i.e., a serialized
Feed object). Uploading this JSON object to Swarm in the regular way will return the immutable hash of this object. We can then store this immutable hash in an ENS Resolver so that we can have a ENS domain that “follows” the Feed described in the manifest.
5.2.2. Feeds API¶
There are 3 different ways of interacting with Feeds : HTTP API, CLI and Golang API.
5.2.2.1. HTTP API¶
5.2.2.1.1. Posting to a Feed¶
Since Feed updates need to be signed, and an update has some correlation with a previous update, it is necessary to retrieve first the Feed’s current status. Thus, the first step to post an update will be to retrieve this current status in a ready-to-sign template:
- Get Feed template
GET /bzz-feed:/?topic=<TOPIC>&user=<USER>&meta=1
GET /bzz-feed:/<MANIFEST OR ENS NAME>/?meta=1
- Where:
user: Ethereum address of the user who publishes the Feed
topic: Feed topic, encoded as a hex string. Topic is an arbitrary 32-byte string (64 hex chars)
Note
- If
topicis omitted, it is assumed to be zero, 0x000…
- if
name=<name>(optional) is provided, a subtopic is composed with that name
- A common use is to omit topic and just use
name, allowing for human-readable topics
You will receive a JSON like the below:
{ "feed": { "topic": "0x6a61766900000000000000000000000000000000000000000000000000000000", "user": "0xdfa2db618eacbfe84e94a71dda2492240993c45b" }, "epoch": { "level": 16, "time": 1534237239 } "protocolVersion" : 0, }
- Post the update
Extract the fields out of the JSON and build a query string as below:
POST /bzz-feed:/?topic=<TOPIC>&user=<USER>&level=<LEVEL>&time=<TIME>&signature=<SIGNATURE>
- Where:
topic: Feed topic, as specified above
user: your Ethereum address
level: Suggested frequency level retrieved in the JSON above
time: Suggested timestamp retrieved in the JSON above
protocolVersion: Feeds protocol version. Currently
0
signature: Signature, hex encoded. See below on how to calclulate the signature
- Request posted data: binary stream with the update data
5.2.2.1.2. Reading a Feed¶
To retrieve a Feed’s last update:
GET /bzz-feed:/?topic=<TOPIC>&user=<USER>
GET /bzz-feed:/<MANIFEST OR ENS NAME>
Note
- Again, if
topicis omitted, it is assumed to be zero, 0x000…
- If
name=<name>is provided, a subtopic is composed with that name
- A common use is to omit
topicand just use
name, allowing for human-readable topics, for example:
GET /bzz-feed:/?name=profile-picture&user=<USER>
To get a previous update:
Add an addtional
time parameter. The last update before that
time (unix time) will be looked up.
GET /bzz-feed:/?topic=<TOPIC>&user=<USER>&time=<T>
GET /bzz-feed:/<MANIFEST OR ENS NAME>?time=<T>
5.2.2.2. Go API¶
5.2.2.2.1. Query object¶
The
Query object allows you to build a query to browse a particular
Feed.
The default
Query, obtained with
feed.NewQueryLatest() will build a
Query that retrieves the latest update of the given
Feed.
You can also use
feed.NewQuery() instead, if you want to build a
Query to look up an update before a certain date.
Advanced usage of
Query includes hinting the lookup algorithm for faster lookups. The default hint
lookup.NoClue will have your node track Feeds you query frequently and handle hints automatically.
5.2.2.2.2. Request object¶
The
Request object makes it easy to construct and sign a request to Swarm to update a particular Feed. It contains methods to sign and add data. We can manually build the
Request object, or fetch a valid “template” to use for the update.
A
Request can also be serialized to JSON in case you need your application to delegate signatures, such as having a browser sign a Feed update request.
5.2.2.2.3. Posting to a Feed¶
- Retrieve a
Requestobject or build one from scratch. To retrieve a ready-to-sign one:
func (c *Client) GetFeedRequest(query *feed.Query, manifestAddressOrDomain string) (*feed.Request, error)
- Use
Request.SetData()and
Request.Sign()to load the payload data into the request and sign it
- Call
UpdateFeed()with the filled
Request:
func (c *Client) UpdateFeed(request *feed.Request, createManifest bool) (io.ReadCloser, error)
5.2.2.2.4. Reading a Feed¶
To retrieve a Feed update, use client.QueryFeed().
QueryFeed returns a byte stream with the raw content of the Feed update.
func (c *Client) QueryFeed(query *feed.Query, manifestAddressOrDomain string) (io.ReadCloser, error)
manifestAddressOrDomain is the address you obtained in
CreateFeedWithManifest or an
ENS domain whose Resolver
points to that address.
query is a Query object, as defined above.
You only need to provide either
manifestAddressOrDomain or
Query to
QueryFeed(). Set to
"" or
nil respectively.
5.2.2.2.5. Creating a Feed Manifest¶
Swarm client (package swarm/api/client) has the following method:
func (c *Client) CreateFeedWithManifest(request *feed.Request) (string, error)
CreateFeedWithManifest uses the
request parameter to set and create a
Feed manifest.
Returns the resulting
Feed manifest address that you can set in an ENS Resolver (setContent) or reference future updates using
Client.UpdateFeed()
5.2.2.2.6. Example Go code¶
// Build a `Feed` object to track a particular user's updates f := new(feed.Feed) f.User = signer.Address() f.Topic, _ = feed.NewTopic("weather",nil) // Build a `Query` to retrieve a current Request for this feed query := feeds.NewQueryLatest(&f, lookup.NoClue) // Retrieve a ready-to-sign request using our query // (queries can be reused) request, err := client.GetFeedRequest(query, "") if err != nil { utils.Fatalf("Error retrieving feed status: %s", err.Error()) } // set the new data request.SetData([]byte("Weather looks bright and sunny today, we should merge this PR and go out enjoy")) // sign update if err = request.Sign(signer); err != nil { utils.Fatalf("Error signing feed update: %s", err.Error()) } // post update err = client.UpdateFeed(request) if err != nil { utils.Fatalf("Error updating feed: %s", err.Error()) }
5.2.2.3. Command-Line¶
The CLI API allows us to go through how Feeds work using practical examples. You can look up CL usage by typing
swarm feed into your CLI.
In the CLI examples, we will create and update feeds using the bzzapi on a running local Swarm node that listens by default on port 8500.
5.2.2.3.1. Creating a Feed Manifest¶
The Swarm CLI allows creating Feed Manifests directly from the console.
swarm feed create is defined as a command to create and publish a
Feed manifest.
- The feed topic can be built in the following ways:
- use
--topicto set the topic to an arbitrary binary hex string.
- use
--nameto set the topic to a human-readable name.
- For example,
--namecould be set to “profile-picture”, meaning this feed allows to get this user’s current profile picture.
- use both
--topicand
--nameto create named subtopics.
- For example, –topic could be set to an Ethereum contract address and
--namecould be set to “comments”, meaning this feed tracks a discussion about that contract.
The
--user flag allows to have this manifest refer to a user other than yourself. If not specified, it will then default to your local account (
--bzzaccount).
If you don’t specify a name or a topic, the topic will be set to
0 hex and name will be set to your username.
$ swarm --bzzapi feed create --name test
creates a feed named “test”. This is equivalent to the HTTP API way of
$ swarm --bzzapi feed create --topic 0x74657374
since
test string == 0x74657374 hex. Name and topic are interchangeable, as long as you don’t specify both.
feed create will return the feed manifest.
You can also use
curl in the HTTP API, but, here, you have to explicitly define the user (which, in this case, is your account) and the manifest.
$ curl -XPOST -d 'name=test&user=<your account>&manifest=1'
is equivalent to
$ curl -XPOST -d 'topic=0x74657374&user=<your account>&manifest=1'
5.2.2.3.2. Posting to a Feed¶
To update a Feed with the CLI, use
feed update. The update argument has to be in
hex. If you want to update your test feed with the update hello, you can refer to it by name:
$ swarm --bzzapi feed update --name test 0x68656c6c6f203
You can also refer to it by topic,
$ swarm --bzzapi feed update --topic 0x74657374 0x68656c6c6f203
or manifest.
$ swarm --bzzapi feed update --manifest <manifest hash> 0x68656c6c6f203
5.2.2.3.3. Reading Feed status¶
You can read the feed object using
feed info. Again, you can use the feed name, the topic, or the manifest hash. Below, we use the name.
$ swarm --bzzapi feed info --name test
5.2.2.3.4. Reading Feed Updates¶
Although the Swarm CLI doesn’t have the functionality to retrieve feed updates, we can use
curl and the HTTP api to retrieve them. Again, you can use the feed name, topic, or manifest hash. To return the update
hello for your
test feed, do this:
$ curl '<your address>&name=test'
5.2.3. Computing Feed Signatures¶
- computing the digest:
- The digest is computed concatenating the following:
- 1-byte protocol version (currently 0)
- 7-bytes padding, set to 0
- 32-bytes topic
- 20-bytes user address
- 7-bytes time, little endian
- 1-byte level
- payload data (variable length)
- Take the SHA3 hash of the above digest
- Compute the ECDSA signature of the hash
- Convert to hex string and put in the
signaturefield above
5.2.3.1. JavaScript example¶
var web3 = require("web3"); if (module !== undefined) { module.exports = { digest: feedUpdateDigest } } var topicLength = 32; var userLength = 20; var timeLength = 7; var levelLength = 1; var headerLength = 8; var updateMinLength = topicLength + userLength + timeLength + levelLength + headerLength; function feedUpdateDigest(request /*request*/, data /*UInt8Array*/) { var topicBytes = undefined; var userBytes = undefined; var protocolVersion = 0; protocolVersion = request.protocolVersion try { topicBytes = web3.utils.hexToBytes(request.feed.topic); } catch(err) { console.error("topicBytes: " + err); return undefined; } try { userBytes = web3.utils.hexToBytes(request.feed.user); } catch(err) { console.error("topicBytes: " + err); return undefined; } var buf = new ArrayBuffer(updateMinLength + data.length); var view = new DataView(buf); var cursor = 0; view.setUint8(cursor, protocolVersion) // first byte is protocol version. cursor+=headerLength; // leave the next 7 bytes (padding) set to zero topicBytes.forEach(function(v) { view.setUint8(cursor, v); cursor++; }); userBytes.forEach(function(v) { view.setUint8(cursor, v); cursor++; }); // time is little-endian view.setUint32(cursor, request.epoch.time, true); cursor += 7; view.setUint8(cursor, request.epoch.level); cursor++; data.forEach(function(v) { view.setUint8(cursor, v); cursor++; }); console.log(web3.utils.bytesToHex(new Uint8Array(buf))) return web3.utils.sha3(web3.utils.bytesToHex(new Uint8Array(buf))); } // data payload data = new Uint8Array([5,154,15,165,62]) // request template, obtained calling<0xUSER>&topic=<0xTOPIC>&meta=1 request = {"feed":{"topic":"0x1234123412341234123412341234123412341234123412341234123412341234","user":"0xabcdefabcdefabcdefabcdefabcdefabcdefabcd"},"epoch":{"time":1538650124,"level":25},"protocolVersion":0} // obtain digest digest = feedUpdateDigest(request, data) console.log(digest)
5.3. Manifests¶
In general manifests declare a list of strings associated with Swarm hashes. A manifest matches to exactly one hash, and it consists of a list of entries declaring the content which can be retrieved through that hash. This is demonstrated by the following example:
Let’s create a directory containing the two orange papers and an html index file listing the two pdf documents.
$ ls -1 orange-papers/ index.html smash.pdf sw^3.pdf $ cat orange-papers/index.html <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> </head> <body> <ul> <li> <a href="./sw^3.pdf">Viktor Trón, Aron Fischer, Dániel Nagy A and Zsolt Felföldi, Nick Johnson: swap, swear and swindle: incentive system for swarm.</a> May 2016 </li> <li> <a href="./smash.pdf">Viktor Trón, Aron Fischer, Nick Johnson: smash-proof: auditable storage for swarm secured by masked audit secret hash.</a> May 2016 </li> </ul> </body> </html>
We now use the
swarm up command to upload the directory to Swarm to create a mini virtual site.
Note
In this example we are using the public gateway through the bzz-api option in order to upload. The examples below assume a node running on localhost to access content. Make sure to run a local node to reproduce these examples.
$ swarm --recursive --defaultpath orange-papers/index.html --bzzapi up orange-papers/ 2> up.log > 2477cc8584cc61091b5cc084cdcdb45bf3c6210c263b0143f030cf7d750e894d
The returned hash is the hash of the manifest for the uploaded content (the orange-papers directory):
We now can get the manifest itself directly (instead of the files they refer to) by using the bzz-raw protocol
bzz-raw:
$ wget -O- "" > { "entries": [ { "hash": "4b3a73e43ae5481960a5296a08aaae9cf466c9d5427e1eaa3b15f600373a048d", "contentType": "text/html; charset=utf-8" }, { "hash": "4b3a73e43ae5481960a5296a08aaae9cf466c9d5427e1eaa3b15f600373a048d", "contentType": "text/html; charset=utf-8", "path": "index.html" }, { "hash": "69b0a42a93825ac0407a8b0f47ccdd7655c569e80e92f3e9c63c28645df3e039", "contentType": "application/pdf", "path": "smash.pdf" }, { "hash": "6a18222637cafb4ce692fa11df886a03e6d5e63432c53cbf7846970aa3e6fdf5", "contentType": "application/pdf", "path": "sw^3.pdf" } ] }
Note
macOS users can install wget via homebrew (or use curl).
Manifests contain content_type information for the hashes they reference. In other contexts, where content_type is not supplied or, when you suspect the information is wrong, it is possible to specify the content_type manually in the search query. For example, the manifest itself should be text/plain:"text/plain"
Now you can also check that the manifest hash matches the content (in fact, Swarm does this for you):
$ wget -O-"text/plain" > manifest.json $ swarm hash manifest.json > 2477cc8584cc61091b5cc084cdcdb45bf3c6210c263b0143f030cf7d750e894d
A useful feature of manifests is that we can match paths with URLs. In some sense this makes the manifest a routing table and so the manifest acts as if it was a host.
More concretely, continuing in our example, when we request:
GET^3.pdf
Swarm first retrieves the document matching the manifest above. The url path
sw^3 is then matched against the entries. In this case a perfect match is found and the document at 6a182226… is served as a pdf.
As you can see the manifest contains 4 entries, although our directory contained only 3. The extra entry is there because of the
--defaultpath orange-papers/index.html option to
swarm up, which associates the empty path with the file you give as its argument. This makes it possible to have a default page served when the url path is empty.
This feature essentially implements the most common webserver rewrite rules used to set the landing page of a site served when the url only contains the domain. So when you request
GET
you get served the index page (with content type
text/html) at
4b3a73e43ae5481960a5296a08aaae9cf466c9d5427e1eaa3b15f600373a048d.
Swarm manifests don’t “break” like a file system. In a file system, the directory matches at the path separator (/ in linux) at the end of a directory name:
-- dirname/ ----subdir1/ ------subdir1file.ext ------subdir2file.ext ----subdir2/ ------subdir2file.ext
In Swarm, path matching does not happen on a given path separator, but on common prefixes. Let’s look at an example:
The current manifest for the
theswarm.eth homepage is as follows:
wget -O- " > manifest.json > {"entries":[{"hash":"ee55bc6844189299a44e4c06a4b7fbb6d66c90004159c67e6c6d010663233e26","path":"LICENSE","mode":420,"size":1211,"mod_time":"2018-06-12T15:36:29Z"}, {"hash":"57fc80622275037baf4a620548ba82b284845b8862844c3f56825ae160051446","path":"README.md","mode":420,"size":96,"mod_time":"2018-06-12T15:36:29Z"}, {"hash":"8919df964703ccc81de5aba1b688ff1a8439b4460440a64940a11e1345e453b5","path":"Swarm_files/","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"acce5ad5180764f1fb6ae832b624f1efa6c1de9b4c77b2e6ec39f627eb2fe82c","path":"css/","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"0a000783e31fcf0d1b01ac7d7dae0449cf09ea41731c16dc6cd15d167030a542","path":"ethersphere/orange-papers/","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"b17868f9e5a3bf94f955780e161c07b8cd95cfd0203d2d731146746f56256e56","path":"f","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"977055b5f06a05a8827fb42fe6d8ec97e5d7fc5a86488814a8ce89a6a10994c3","path":"i","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"48d9624942e927d660720109b32a17f8e0400d5096c6d988429b15099e199288","path":"js/","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"294830cee1d3e63341e4b34e5ec00707e891c9e71f619bc60c6a89d1a93a8f81","path":"talks/","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"}, {"hash":"12e1beb28d86ed828f9c38f064402e4fac9ca7b56dab9cf59103268a62a2b35f","contentType":"text/html; charset=utf-8","mode":420,"size":31371,"mod_time":"2018-06-12T15:36:29Z"} ]}
Note the
path for entry
b17868...: It is
f. This means, there are more than one entries for this manifest which start with an f, and all those entries will be retrieved by requesting the hash
b17868... and through that arrive at the matching manifest entry:
$ wget -O- {"entries":[{"hash":"25e7859eeb7366849f3a57bb100ff9b3582caa2021f0f55fb8fce9533b6aa810","path":"avicon.ico","mode":493,"size":32038,"mod_time":"2018-06-12T15:36:29Z"}, {"hash":"97cfd23f9e36ca07b02e92dc70de379a49be654c7ed20b3b6b793516c62a1a03","path":"onts/glyphicons-halflings-regular.","contentType":"application/bzz-manifest+json","mod_time":"0001-01-01T00:00:00Z"} ]}
So we can see that the
f entry in the root hash resolves to a manifest containing
avicon.ico and
onts/glyphicons-halflings-regular. The latter is interesting in itself: its
content_type is
application/bzz-manifest+json, so it points to another manifest. Its
path also does contain a path separator, but that does not result in a new manifest after the path separator like a directory (e.g. at
onts/). The reason is that on the file system on the hard disk, the
fonts directory only contains one directory named
glyphicons-halflings-regular, thus creating a new manifest for just
onts/ would result in an unnecessary lookup. This general approach has been chosen to limit unnecessary lookups that would only slow down retrieval, and manifest “forks” happen in order to have the logarythmic bandwidth needed to retrieve a file in a directory with thousands of files.
When requesting
wget -O- ", Swarm will first retrieve the manifest at the root hash, match on the first
f in the entry list, resolve the hash for that entry and finally resolve the hash for the
favicon.ico file.
For the
theswarm.eth page, the same applies to the
i entry in the root hash manifest. If we look up that hash, we’ll find entries for
mages/ (a further manifest), and
ndex.html, whose hash resolves to the main
index.html for the web page.
Paths like
css/ or
js/ get their own manifests, just like common directories, because they contain several files.
Note
If a request is issued which Swarm can not resolve unambiguosly, a
300 "Multiplce Choices" HTTP status will be returned.
In the example above, this would apply for a request for, as it could match both
images/ as well as
index.html
5.4. Encryption¶
Introduced in POC 0.3, symmetric encryption is now readily available to be used with the
swarm up upload command.
The encryption mechanism is meant to protect your information and make the chunked data unreadable to any handling Swarm node.
Swarm uses Counter mode encryption to encrypt and decrypt content. When you upload content to Swarm, the uploaded data is split into 4 KB chunks. These chunks will all be encoded with a separate randomly generated encryption key. The encryption happens on your local Swarm node, unencrypted data is not shared with other nodes. The reference of a single chunk (and the whole content) will be the concatenation of the hash of encoded data and the decryption key. This means the reference will be longer than the standard unencrypted Swarm reference (64 bytes instead of 32 bytes).
When your node syncs the encrypted chunks of your content with other nodes, it does not share the full references (or the decryption keys in any way) with the other nodes. This means that other nodes will not be able to access your original data, moreover they will not be able to detect whether the synchronized chunks are encrypted or not.
When your data is retrieved it will only get decrypted on your local Swarm node. During the whole retrieval process the chunks traverse the network in their encrypted form, and none of the participating peers are able to decrypt them. They are only decrypted and assembled on the Swarm node you use for the download.
More info about how we handle encryption at Swarm can be found here.
Note
Swarm currently supports both encrypted and unencrypted
swarm up commands through usage of the
--encrypt flag.
This might change in the future as we will refine and make Swarm a safer network.
Important
The encryption feature is non-deterministic (due to a random key generated on every upload request) and users of the API should not rely on the result being idempotent; thus uploading the same content twice to Swarm with encryption enabled will not result in the same reference.
Example usage:
First, we create a simple test file.
$ echo "testfile" > mytest.txt
We upload the test file without encryption,
$ swarm up mytest.txt > <file reference>
and with encryption.
$ swarm up --encrypt mytest.txt > <encrypted reference>
Note that the reference of the encrypted upload is longer than that of the unencrypted upload. Note also that, because of the random encryption key, repeating the encrypted upload results in a different reference:
$ swarm up --encrypt mytest.txt <another encrypted reference>
5.5. Access Control¶
Swarm supports restricting access to content through several access control strategies:
Password protection - where a number of undisclosed parties can access content using a shared secret
(pass, act)
Selective access using Elliptic Curve key-pairs:
- For an undisclosed party - where only one grantee can access the content
(pk)
- For a number of undisclosed parties - where every grantee can access the content
(act)
Creating access control for content is currently supported only through CLI usage.
Accessing restricted content is available through CLI and HTTP. When accessing content which is restricted by a password HTTP Basic access authentication can be used out-of-the-box.
Important
When accessing content which is restricted to certain EC keys - the node which exposes the HTTP proxy that is queried must be started with the granted private key as its
bzzaccount CLI parameter.
5.5.1. Password protection¶
The simplest type of credential is a passphrase. In typical use cases, the passphrase is distributed by off-band means, with adequate security measures. Any user that knows the passphrase can access the content.
When using password protection, a given content reference (e.g.: a given Swarm manifest address or, alternatively, a Mutable Resource address) is encrypted using scrypt with a given passphrase and a random salt. The encrypted reference and the salt are then embedded into an unencrypted manifest which can be freely distributed but only accessed by undisclosed parties that posses knowledge of the passphrase.
Password protection can also be used for selective access when using the
act strategy - similarly to granting access to a certain EC key access can be also given to a party identified by a password. In fact, one could also create an
act manifest that solely grants access to grantees through passwords, without the need to know their public keys.
Example usage:
Important
Restricting access to content on Swarm is a 2-step process - you first upload your content, then wrap the reference with an access control manifest. We recommend that you always upload your content with encryption enabled. In the following examples we will refer the uploaded content hash as
reference hash
First, we create a simple test file. We upload it to Swarm (with encryption).
$ echo "testfile" > mytest.txt $ swarm up --encrypt mytest.txt > <reference hash>
Then, for the sake of this example, we create a file with our password in it.
$ echo "mypassword" > mypassword.txt
This password will protect the access-controlled content that we upload. We can refer to this password using the –password flag. The password file should contain the password in plaintext.
The
swarm access command sets a new password using the
new pass argument. It expects you to input the password file and the uploaded Swarm content hash you’d like to limit access to.
$ swarm access new pass --password mypassword.txt <reference hash> > <reference of access controlled manifest>
The returned hash is the hash of the access controlled manifest.
When requesting this hash through the HTTP gateway you should receive an
HTTP Unauthorized 401 error:
$ curl<reference of access controlled manifest>/ > Code: 401 > Message: cant decrypt - forbidden > Timestamp: XXX
You can retrieve the content in three ways:
- The same request should make an authentication dialog pop-up in the browser. You could then input the password needed and the content should correctly appear. (Leave the username empty.)
- Requesting the same hash with HTTP basic authentication would return the content too.
curlneeds you to input a username as well as a password, but the former can be an arbitrary string (here, it’s
x).
$ curl<reference of access controlled manifest>/
- You can also use
swarm downwith the
--passwordflag.
$ swarm --password mypassword.txt down bzz:/<reference of access controlled manifest>/ mytest2.txt $ cat mytest2.txt > testfile
5.5.2. Selective access using EC keys¶
A more sophisticated type of credential is an Elliptic Curve private key, identical to those used throughout Ethereum for accessing accounts.
In order to obtain the content reference, an Elliptic-curve Diffie–Hellman (ECDH) key agreement needs to be performed between a provided EC public key (that of the content publisher) and the authorized key, after which the undisclosed authorized party can decrypt the reference to the access controlled content.
Whether using access control to disclose content to a single party (by using the
pk strategy) or to
multiple parties (using the
act strategy), a third unauthorized party cannot find out the identity
of the authorized parties.
The third party can, however, know the number of undisclosed grantees to the content.
This, however, can be mitigated by adding bogus grantee keys while using the
act strategy
in cases where masking the number of grantees is necessary. This is not the case when using the
pk strategy, as it as
by definition an agreement between two parties and only two parties (the publisher and the grantee).
Important
Accessing content which is access controlled is enabled only when using a local Swarm node (e.g. running on localhost) in order to keep your data, passwords and encryption keys safe. This is enforced through an in-code guard.
Danger
NEVER (EVER!) use an external gateway to upload or download access controlled content as you will be putting your privacy at risk! You have been fairly warned!
Protecting content with Elliptic curve keys (single grantee):
The
pk strategy requires a
bzzaccount to encrypt with. The most comfortable option in this case would be the same
bzzaccount you normally start your Swarm node with - this will allow you to access your content seamlessly through that node at any given point in time.
Grantee public keys are expected to be in an secp256 compressed form - 66 characters long string (an example would be
02e6f8d5e28faaa899744972bb847b6eb805a160494690c9ee7197ae9f619181db). Comments and other characters are not allowed.
$ swarm --bzzaccount <your account> access new pk --grant-key <your public key> <reference hash> > <reference of access controlled manifest> the node which was granted access through its public key.
Protecting content with Elliptic curve keys and passwords (multiple grantees):
The
act strategy also requires a
bzzaccount to encrypt with. The most comfortable option in this case would be the same
bzzaccount you normally start your Swarm node with - this will allow you to access your content seamlessly through that node at any given point in time
Note
the
act strategy expects a grantee public-key list and/or a list of permitted passwords to be communicated to the CLI. This is done using the
--grant-keys flag and/or the
--password flag. Grantee public keys are expected to be in an secp256 compressed form - 66 characters long string (e.g.
02e6f8d5e28faaa899744972bb847b6eb805a160494690c9ee7197ae9f619181db). Each grantee should appear in a separate line. Passwords are also expected to be line-separated. Comments and other characters are not allowed.
swarm --bzzaccount 2f1cd699b0bf461dcfbf0098ad8f5587b038f0f1 access new act --grant-keys /path/to/public-keys/file --password /path/to/passwords/file <reference hash> 4b964a75ab19db960c274058695ca4ae21b8e19f03ddf1be482ba3ad3c5b9f9b.
As with the
pk strategy - one of the nodes which were granted access through their public keys.
5.5.3. HTTP usage¶
Accessing restricted content on Swarm through the HTTP API is, as mentioned, limited to your local node
due to security considerations.
Whenever requesting a restricted resource without the proper credentials via the HTTP proxy, the Swarm node will respond
with an
HTTP 401 Unauthorized response code.
When accessing password protected content:
When accessing a resource protected by a passphrase without the appropriate credentials the browser will
receive an
HTTP 401 Unauthorized response and will show a pop-up dialog asking for a username and password.
For the sake of decrypting the content - only the password input in the dialog matters and the username field can be left blank.
The credentials for accessing content protected by a password can be provided in the initial request in the form of::<password>@localhost:8500/bzz:/<hash or ens name> (
curl needs you to input a username as well as a password, but the former can be an arbitrary string (here, it’s
x).)
Important
Access controlled content should be accessed through the
bzz:// protocol
When accessing EC key protected content:
When accessing a resource protected by EC keys, the node that requests the content will try to decrypt the restricted
content reference using its own EC key which is associated with the current bzz account that
the node was started with (see the
--bzzaccount flag). If the node’s key is granted access - the content will be
decrypted and displayed, otherwise - an
HTTP 401 Unauthorized error will be returned by the node.
5.5.4. Access control in the CLI: example usage¶
First, we create a simple test file. We upload it to Swarm using encryption.
$ echo "testfile" > mytest.txt $ swarm up --encrypt mytest.txt > <reference hash>
Then, we define a password file and use it to create an access-controlled manifest.
$ echo "mypassword" > mypassword.txt $ swarm access new pass --password mypassword.txt <reference hash> > <reference of access controlled manifest>
We can create a passwords file with one password per line in plaintext (
password1 is probably not a very good password).
$ for i in {1..3}; do echo -e password$i; done > mypasswords.txt $ cat mypasswords.txt > password1 > password2 > password3
Then, we point to this list while wrapping our manifest.
$ swarm access new act --password mypasswords.txt <reference hash> > <reference of access controlled manifest>
We can access the returned manifest using any of the passwords in the password list.
$ echo password1 > password1.txt $ swarm --password1.txt down bzz:/<reference of access controlled manifest>
We can also curl it.
$ curl<reference of access controlled manifest>/
pkstrategy
First, we create a simple test file. We upload it to Swarm using encryption.
$ echo "testfile" > mytest.txt $ swarm up --encrypt mytest.txt > <reference hash>
Then, we draw an EC key pair and use the public key to create the access-controlled manifest.
$ swarm access new pk --grant-key <public key> <reference hash> > <reference of access controlled manifest>
We can retrieve the access-controlled manifest via a node that has the private key. You can add a private key using
geth (see here).
$ swarm --bzzaccount <address of node with granted private key> down bzz:/<reference of access controlled manifest> out.txt $ cat out.txt > "testfile"
actstrategy
We can also supply a list of public keys to create the access-controlled manifest.
$ swarm access new act --grant-keys <public key list> <reference hash> > <reference of access controlled manifest>
Again, only nodes that possess the private key will have access to the content.
$ swarm --bzzaccount <address of node with a granted private key> down bzz:/<reference of access controlled manifest> out.txt $ cat out.txt > "testfile"
5.6. FUSE¶
Another way of interacting with Swarm is by mounting it as a local filesystem using FUSE (Filesystem in Userspace). There are three IPC API’s which help in doing this.
Note
FUSE needs to be installed on your Operating System for these commands to work. Windows is not supported by FUSE, so these command will work only in Linux, Mac OS and FreeBSD. For installation instruction for your OS, see “Installing FUSE” section below.
5.6.1. Installing FUSE¶
- Linux (Ubuntu)
$ sudo apt-get install fuse $ sudo modprobe fuse $ sudo chown <username>:<groupname> /etc/fuse.conf $ sudo chown <username>:<groupname> /dev/fuse
Mac OS
Either install the latest package from or use brew as below
$ brew update $ brew install caskroom/cask/brew-cask $ brew cask install osxfuse
5.6.2. CLI Usage¶
The Swarm CLI now integrates commands to make FUSE usage easier and streamlined.
Note
When using FUSE from the CLI, we assume you are running a local Swarm node on your machine. The FUSE commands attach to the running node through bzzd.ipc
One use case to mount a Swarm hash via FUSE is a file sharing feature accessible via your local file system. Files uploaded to Swarm are then transparently accessible via your local file system, just as if they were stored locally.
To mount a Swarm resource, first upload some content to Swarm using the
swarm up <resource> command.
You can also upload a complete folder using
swarm --recursive up <directory>.
Once you get the returned manifest hash, use it to mount the manifest to a mount point
(the mount point should exist on your hard drive):
$ swarm fs mount <manifest-hash> <mount-point>
For example:
$ swarm fs mount <manifest-hash> /home/user/swarmmount
Your running Swarm node terminal output should show something similar to the following in case the command returned successfuly:
Attempting to mount /path/to/mount/point Serving 6e4642148d0a1ea60e36931513f3ed6daf3deb5e499dcf256fa629fbc22cf247 at /path/to/mount/point Now serving swarm FUSE FS manifest=6e4642148d0a1ea60e36931513f3ed6daf3deb5e499dcf256fa629fbc22cf247 mountpoint=/path/to/mount/point
You may get a “Fatal: had an error calling the RPC endpoint while mounting: context deadline exceeded” error if it takes too long to retrieve the content.
In your OS, via terminal or file browser, you now should be able to access the contents of the Swarm hash at
/path/to/mount/point, i.e.
ls /home/user/swarmmount
Through your terminal or file browser, you can interact with your new mount as if it was a local directory. Thus you can add, remove, edit, create files and directories just as on a local directory. Every such action will interact with Swarm, taking effect on the Swarm distributed storage. Every such action also will result in a new hash for your mounted directory. If you would unmount and remount the same directory with the previous hash, your changes would seem to have been lost (effectively you are just mounting the previous version). While you change the current mount, this happens under the hood and your mount remains up-to-date.
To unmount a
swarmfs mount, either use the List Mounts command below, or use a known mount point:
$ swarm fs unmount <mount-point> > 41e422e6daf2f4b32cd59dc6a296cce2f8cce1de9f7c7172e9d0fc4c68a3987a
The returned hash is the latest manifest version that was mounted. You can use this hash to remount the latest version with the most recent changes.
To see all existing swarmfs mount points, use the List Mounts command:
$ swarm fs list
Example Output:
Found 1 swarmfs mount(s): 0: Mount point: /path/to/mount/point Latest Manifest: 6e4642148d0a1ea60e36931513f3ed6daf3deb5e499dcf256fa629fbc22cf247 Start Manifest: 6e4642148d0a1ea60e36931513f3ed6daf3deb5e499dcf256fa629fbc22cf247
5.7. BZZ URL schemes¶
Swarm offers 6 distinct URL schemes:
5.7.1." } ] }
5.7.2.%
5.7.3." } ] }
5.7.4.%
5.7.5.": "" } | https://swarm-guide.readthedocs.io/en/latest/usage.html | 2019-07-15T22:32:37 | CC-MAIN-2019-30 | 1563195524254.28 | [] | swarm-guide.readthedocs.io |
Integrating Analysis Tools¶
While it is feasible for an interpreter to perform an analysis on its own, in many applications the analysis is performed by a separate program (possibly a third-party tool). Typically the plugin handles the triggering and monitoring of the analysis tool, and it also feeds the results back to the user.
Depending on the type of analysis, the expected load on a deployment, and the requirements on result visualization, webgme offers a couple of different built-in options for handling the analysis.
Creating a process from the plugin¶
The most straightforward approach is to invoke the analysis tool by creating a child process from the plugin. The major drawback is that the analysis tool runs directly on the same host as the webgme server.
One way to get around this is to replace the default server-worker-manager in webgme with the docker-worker-manager.
In this tutorial we will simply assume that OpenModelica is installed on the host machine and invoke the compiler/simulator from the plugin. (The code we write here can be used without modification with the docker-worker-manager approach, so we are not locking ourselves into a corner.)
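The sketch below illustrates the general pattern, assuming the omc executable is available on the server's PATH; how the plugin writes out the model file and collects result artifacts is omitted, so treat the function signature as a placeholder rather than the actual tutorial code.

```javascript
// Minimal sketch: invoke the OpenModelica compiler (omc) as a child process
// from inside a plugin. `modelPath` and `logger` are assumed to be provided
// by the surrounding plugin code.
var childProcess = require('child_process');

function runOpenModelica(modelPath, logger, callback) {
    // execFile avoids shell interpolation of the model path.
    childProcess.execFile('omc', [modelPath], {timeout: 10 * 60 * 1000},
        function (err, stdout, stderr) {
            if (err) {
                logger.error('OpenModelica failed: ' + stderr);
                return callback(err);
            }
            logger.info('OpenModelica finished');
            // Parse stdout / result files here and report them back to the user.
            callback(null, stdout);
        });
}
```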
Executor Framework¶
Another approach is to use the webgme executor framework, where external workers attach themselves to the webgme server and handle jobs posted by, for example, plugins.
WebGME Routers¶
Both approaches above assume that the invocation point for the analysis job is a process call. This may not be the case for all analysis tools. In those cases the invocation can either be implemented directly in the JavaScript code, or the spawned process can handle the communication with the analysis service.
Alternatively, a custom webgme router can be created to proxy requests and handle results. From such a router, practically any tool or service can be accessed and integrated with the user interface. A minimal sketch of such a router is shown below.
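As a rough illustration (not the actual webgme router template, which adds its own initialize/start/stop hooks), such a router could forward analysis requests to an assumed external service; the service host and path are made-up placeholders.

```javascript
// Hypothetical Express-style router that proxies analysis requests to an
// external analysis service and relays the reply back to the client.
var express = require('express'),
    http = require('http'),
    router = express.Router();

router.post('/run', function (req, res) {
    var proxyReq = http.request(
        {host: 'analysis.example.org', path: '/api/run', method: 'POST'},
        function (proxyRes) { proxyRes.pipe(res); });
    proxyReq.on('error', function (err) { res.status(502).send(err.message); });
    req.pipe(proxyReq);  // stream the model payload through to the service
});

module.exports = router;
```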
The MeetNotes slackbot assists you in running meetings from Slack.
If you need help integrating Slack with MeetNotes click here.
Once you have integrated Slack, you will receive a welcome message from the bot.
You can send an invite link to team members in your Slack workspace. The Slack bot sends an invite on #general or another channel that you choose.
The MeetNotes Slack bot gains new powers every day. All of its superpowers are listed here!
Daily Morning Summary Message
The MeetNotes bot sends a daily summary that helps you plan your day. You can add an agenda for an upcoming meeting and follow up on open action items from previous meetings, all from Slack. A preview of the daily summary is shown below:
Create a New Meeting
Running impromptu meetings is easy. Ask the MeetNotes Slack bot to create a meeting for you. A link to the new meeting is created, which takes you to the app. Learn more here.
Adding Agenda to a Meeting
A predefined agenda ensures on-schedule, result-oriented meetings. Contribute to meeting effectiveness by adding an agenda item from Slack. Learn more about adding an agenda here.
Add Notes to Meetings
It's easy to add notes to a meeting. Whether it's an important update, a new development, or a change of plans, just add it to the meeting. Learn more here.
Get Action Items from Meetings
If you need a quick view of which tasks are due, or an update on what your colleagues are working on, use the get-actions command on Slack. Learn more here.
View Notes from a Meeting
You can use the get-notes command to view notes without switching between applications. Learn how to use this feature here.
Alternatively, you can share meeting notes over Slack channels from the app as well. Learn more about sharing notes here.
Action item Follow-up
Get the MeetNotes Slackbot to follow up on your behalf. Get a quick status update from your colleagues by using the follow-up command. Learn more here.
Slack Notifications
The Slack bot sends three kinds of notifications for a meeting.
The first notification is sent 10 minutes before a meeting starts if the meeting has open action items or if, as the meeting organiser, you have not set the agenda.
You can update action item status, add notes or share notes from this reminder.
The second notification is sent to the organiser of the meeting only. One hour before the meeting, the Slack bot reminds the organiser to set and share the agenda with the attendees.
The third notification is for feedback and is sent after the meeting. Every attendee can vote on how the meeting went.
| http://docs.meetnotes.co/en/articles/1305844-meetnotes-slack-bot-user-guide | 2019-07-15T22:03:45 | CC-MAIN-2019-30 | 1563195524254.28 | [array(['https://downloads.intercomcdn.com/i/o/66688963/1cf1156cd290b5ec32e53192/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/66689140/cea4bcae2585a00714ccbf84/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/66692844/90f86582e509b4b4e9c3fccf/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/66696284/fe63dcfe4f3affb7c8b697e7/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/45517749/204cd400f82932fd131d8ecb/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/45519225/fe28c664bfe6f7d58eb6dfea/image.png',
None], dtype=object) ] | docs.meetnotes.co |
All content with label amazon+async+docs+guide+hot_rod+infinispan+jboss_cache+listener+podcast+release+xa.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, recovery, transactionmanager, dist, partitioning, query, intro, lock_striping, jbossas, nexus, schema, cache, s3,
grid, jcache, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, ec2, 缓存, hibernate, aws, custom_interceptor, setup, clustering, eviction, gridfs, concurrency, out_of_memory, import, index, events, batch, configuration, hash_function, buddy_replication, loader, write_through, cloud, mvcc, notification, tutorial, presentation, jbosscache3x, distribution, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, transaction, interactive, xaresource, build, searchable, demo, installation, cache_server, scala, client, migration, non-blocking, jpa, filesystem, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, standalone, webdav, hotrod, snapshot, repeatable_read, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, jgroups, lucene, locking, rest
All content with label article+async+deadlock+development+grid+gui_demo+hot_rod+infinispan+jboss_cache+listener+pojo_cache+release+user_guide.
Related Labels:
podcast, expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, partitioning, query, intro,, mvcc, notification, tutorial, presentation, jbosscache3x, distribution, jira, cachestore, data_grid, cacheloader, resteasy, hibernate_search, cluster, br, transaction, interactive, xaresource, build, searchable, demo, cache_server, installation, scala, client, migration, non-blocking, jpa, filesystem, tx, eventing, client_server, testng, infinispan_user_guide, standalone, hotrod, webdav, snapshot, repeatable_read, docs, consistent_hash, store, whitepaper, jta, faq, as5, 2lcache, jsr-107, lucene, jgroups, locking, rest
All content with label async+client+distribution+gridfs+guide+hibernate+infinispan+query+snapshot.
Related Labels:
podcast, expiration, publish, datagrid, interceptor, server, rehash, replication, transactionmanager, release, partitioning, deadlock, intro, archetype, jbossas, lock_striping, nexus, schema, listener,
state_transfer, cache, amazon, s3, memcached, grid, jcache, test, api, xsd, ehcache, maven, documentation, youtube, userguide, write_behind, 缓存, ec2, aws, interface, custom_interceptor, clustering, setup, eviction, concurrency, out_of_memory, jboss_cache, import, index, hash_function, configuration, batch, buddy_replication, loader, colocation, write_through, cloud, remoting, mvcc, notification, tutorial, presentation, murmurhash2, xml, read_committed, jbosscache3x, cachestore, data_grid, cacheloader, hibernate_search, resteasy, cluster, br, development, websocket, transaction, interactive, xaresource, build, hinting, searchable, demo, installation, command-line, migration, non-blocking, rebalance, jpa, filesystem, tx, user_guide, eventing, shell, client_server, testng, infinispan_user_guide, murmurhash, standalone, webdav, hotrod, docs, batching, consistent_hash, store, jta, faq, as5, 2lcache, jsr-107, jgroups, lucene, locking, rest, hot_rod
All content with label batching+gridfs+hotrod+import+infinispan+jgroups+mvcc+notification+query+setup.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, replication, transactionmanager, dist, release, deadlock, archetype, lock_striping,, batch, hash_function, buddy_replication, loader, xa, cloud, remoting, tutorial, murmurhash2, jbosscache3x, read_committed, xml, distribution, cachestore, data_grid, cacheloader, hibernate_search, cluster, development, permission, websocket, transaction, async, interactive, xaresource, build, searchable, demo, installation, cache_server, scala, client, migration, filesystem, jpa, tx, gui_demo, eventing, client_server, murmurhash, infinispan_user_guide, standalone, webdav, snapshot, repeatable_read, docs, consistent_hash, jta, faq, 2lcache, jsr-107, docbook, lucene, locking, rest, hot_rod
All content with label buddy_replication+consistent_hash+deadlock+gridfs+hotrod+infinispan+notification+setup+write_behind+xaresource.
Related Labels:
expiration, publish, datagrid, coherence, interceptor, server, rehash, replication, recovery, transactionmanager, dist, release, partitioning, query, archetype, lock_striping, jbossas, guide, schema,
listener, state_transfer, cache, s3, amazon, grid, memcached, test, jcache, api, xsd, ehcache, maven, documentation, ec2, 缓存, hibernate, aws,, websocket, transaction, async, interactive, build, searchable, demo, installation, cache_server, scala, client, migration, rebalance, filesystem, jpa, tx, gui_demo, eventing, client_server, murmurhash, infinispan_user_guide, standalone, webdav, snapshot, repeatable_read, docs, batching, jta, faq, 2lcache, jsr-107, lucene, jgroups, locking, rest, hot_rod
All content with label cachestore+docs+events+filesystem+gridfs+infinispan+installation+jcache+jsr-107+repeatable_read.
Related Labels:
userguide, write_behind, ec2, 缓存, s, hibernate, getting, aws, interface, setup, clustering, mongodb, eviction, out_of_memory, concurrency, examples, jboss_cache, import, configuration, hash_function, buddy_replication, loader, write_through, cloud, mvcc, tutorial, notification, read_committed, xml, jbosscache3x, distribution, started, data_grid, cacheloader, resteasy, hibernate_search, cluster, development, websocket, transaction, async, interactive, xaresource, build, gatein, searchable, demo, scala, client, migration, jpa, tx, user_guide, gui_demo, eventing, client_server, testng, infinispan_user_guide, hotrod, webdav, snapshot, consistent_hash, batching, store, jta, faq, 2lcache, as5, lucene, jgroups, locking, rest, hot_rod
All content with label eap+eap6+getting_started+http+jboss+realm.
Related Labels:
high, wildfly, tutorial, service, 2012, standalone, ssl, s, load, modcluster, security, cli, balancing, availability, clustering, cluster, mod_jk, storeconfig, tomcat,
domain, wildly, favourite, l, httpd, as, ha, high-availability, mod_cluster, as7, ews
6. PSS¶
pss (Postal Service over Swarm) is a messaging protocol over Swarm with strong privacy features. The pss API is exposed through a JSON RPC interface described in the API Reference; here we explain the basic concepts and features.
Note
pss is still an experimental feature and under active development and is available as of POC3 of Swarm. Expect things to change.
Note
There is no CLI support for
pss.
6.1. Basics¶
With
pss you can send messages to any node in the Swarm network. The messages are routed in the same manner as retrieve requests for chunks. Instead of a chunk hash reference,
pss messages specify a destination in the overlay address space independently of the message payload. This destination can describe a specific node if it is a complete overlay address, or a neighbourhood if it is a partially specified one. Up to the destination, the message is relayed through devp2p peer connections using forwarding kademlia (passing messages via semi-permanent peer-to-peer TCP connections between relaying nodes using kademlia routing). Within the destination neighbourhood the message is broadcast using gossip.
Since
pss messages are encrypted, ultimately the recipient is whoever can decrypt the message. Encryption can be done using asymmetric or symmetric encryption methods.
The message payload is dispatched to message handlers by the recipient nodes and dispatched to subscribers via the API.
Important
pss does not guarantee message ordering (Best-effort delivery)
nor message delivery (e.g. messages to offline nodes will not be cached and replayed) at the moment.
6.1.1. Privacy features¶
Thanks to end-to-end encryption, pss caters for private communication.
Due to forwarding kademlia,
pss offers sender anonymity.
Using partial addressing,
pss offers a sliding scale of recipient anonymity: the larger the destination neighbourhood (the smaller prefix you reveal of the intended recipient overlay address), the more difficult it is to identify the real recipient. On the other hand, since dark routing is inefficient, there is a trade-off between anonymity on the one hand and message delivery latency and bandwidth (and therefore cost) on the other. This choice is left to the application.
Forward secrecy is provided if you use the Handshakes module.
6.2. Usage¶
See the API Reference for details.
6.2.1. Registering a recipient¶
Intended recipients first need to be registered with the node. This registration includes the following data:
Encryption key - can be an ECDSA public key for asymmetric encryption or a 32-byte symmetric key.
Topic - an arbitrary 4-byte word.
Address - destination (fully or partially specified Swarm overlay address) to use for deterministic routing.
The registration returns a key id which is used to refer to the stored key in subsequent operations.
After you associate an encryption key with an address, the key will be checked against any message that comes through (when sending or receiving), provided the message matches the topic and the destination of the message.
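As an illustration, a registration call over the node's websocket RPC endpoint could look like the sketch below. The method name and parameter order follow the pss API reference, but the endpoint, key, topic and address values are placeholders; check the API Reference before relying on the exact signature.

```javascript
// Sketch: register a peer's public key for a given topic and (partial) address.
var WebSocket = require('ws');
var ws = new WebSocket('ws://localhost:8546');  // assumed websocket RPC endpoint

function call(id, method, params) {
    ws.send(JSON.stringify({jsonrpc: '2.0', id: id, method: method, params: params}));
}

ws.on('open', function () {
    // pubkey (hex), 4-byte topic (hex), partial overlay address ('0x' would mean no routing hint)
    call(1, 'pss_setPeerPublicKey', ['0x04abcd...', '0xdeadbeef', '0x1234']);
});
ws.on('message', function (data) { console.log(data.toString()); });
```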
6.2.2. Sending a message¶
There are a few prerequisites for sending a message over
pss:
Encryption key id - id of the stored recipient's encryption key.
Topic - an arbitrary 4-byte word (with the exception of 0x0000, which is reserved for raw messages).
Message payload - the message data as an arbitrary byte sequence.
Note
The Address that is coupled with the encryption key is used for routing the message. This does not need to be a full address; the network will route the message to the best of its ability with the information that is available. If no address is given (zero-length byte slice), routing is effectively deactivated, and the message is passed to all peers by all peers.
Upon sending, the message is encrypted and passed on from peer to peer. Any node along the route that can successfully decrypt the message is regarded as a recipient. If the destination is a neighbourhood, the message is passed around so that it ultimately reaches the intended recipient, which also forwards the message to its peers; recipients continue to pass the message on to their peers, to make it harder for anyone spying on the traffic to tell where the message "ended up."
Similarly, once an encryption key is associated with a destination, it will be checked against any message that comes through (when sending or receiving), provided the message matches the topic and the address.
Important
When using the internal encryption methods, you MUST associate keys (whether symmetric or asymmetric) with an address space AND a topic before you will be able to send anything.
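Assuming the recipient's key was registered as sketched earlier, a send might look like the following. The method name pss_sendAsym and the hex encoding of the payload are assumptions; the symmetric counterpart (assumed to be pss_sendSym) takes a stored symmetric key id instead of the public key.

```go
package main

import (
	"context"
	"encoding/hex"

	"github.com/ethereum/go-ethereum/rpc"
)

// sendAsym sends an asymmetrically encrypted pss message to a peer whose
// public key and topic were registered beforehand.
// Sketch only: the method name and the hex payload encoding are assumptions.
func sendAsym(client *rpc.Client, pubKeyHex, topic string, payload []byte) error {
	msg := "0x" + hex.EncodeToString(payload)
	return client.CallContext(context.Background(), nil, "pss_sendAsym", pubKeyHex, topic, msg)
}
```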
6.2.3. Sending a raw message
It is also possible to send a message without using the built-in encryption. In this case no recipient registration is made; instead, the message is sent directly, with the following input data:
Message payload - the message data as an arbitrary byte sequence.
Address - the Swarm overlay address to use for the routing.
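A raw send might look roughly like the sketch below. The method name pss_sendRaw, the argument order and the use of the reserved zero topic are assumptions; some node versions also require raw message handling to be explicitly enabled.

```go
package main

import (
	"context"
	"encoding/hex"

	"github.com/ethereum/go-ethereum/rpc"
)

// sendRaw sends an unencrypted pss message directly to a destination overlay
// address, bypassing recipient registration. Sketch only: the method name,
// argument order and topic handling are assumptions.
func sendRaw(client *rpc.Client, destAddrHex string, payload []byte) error {
	msg := "0x" + hex.EncodeToString(payload)
	// The reserved zero topic is assumed for raw messages; some versions may
	// not take a topic argument at all.
	return client.CallContext(context.Background(), nil, "pss_sendRaw", destAddrHex, "0x00000000", msg)
}
```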
6.2.4. Receiving messages
You can subscribe to incoming messages using a topic. Since subscription needs push notifications, the supported RPC transport interfaces are websockets and IPC.
Important
pss does not guarantee message ordering (best-effort delivery) nor message delivery (e.g. messages to offline nodes will not be cached and replayed) at the moment.
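A websocket subscription from Go might look like the sketch below. The "pss" namespace, the "receive" subscription name and the shape of the delivered message object are assumptions; newer versions may expect additional flags (for example for raw or neighbourhood delivery).

```go
package main

import (
	"context"
	"fmt"

	"github.com/ethereum/go-ethereum/rpc"
)

// subscribeTopic listens for incoming pss messages on a topic and prints them.
// Sketch only: the namespace, subscription name and message shape are assumptions.
func subscribeTopic(client *rpc.Client, topic string) error {
	msgs := make(chan map[string]interface{})
	sub, err := client.Subscribe(context.Background(), "pss", msgs, "receive", topic)
	if err != nil {
		return err
	}
	defer sub.Unsubscribe()

	for {
		select {
		case msg := <-msgs:
			fmt.Println("received pss message:", msg)
		case err := <-sub.Err():
			return err
		}
	}
}
```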
6.3. Advanced features
Note
These functionalities are optional features in pss. They are compiled in by default, but can be omitted by providing the appropriate build tags.
6.3.1. Handshakes
pss provides a convenience implementation of Diffie-Hellman handshakes using ephemeral symmetric keys. Peers keep separate sets of keys for a limited number of incoming and outgoing messages, and create and exchange new keys when the keys expire.
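Purely as an illustration, initiating a handshake over RPC might look like the sketch below. The method names pss_addHandshake and pss_handshake and their argument lists are assumptions recalled from the handshake module and may not match your Swarm version; treat it as a starting point only.

```go
package main

import (
	"context"

	"github.com/ethereum/go-ethereum/rpc"
)

// initiateHandshake activates the handshake module for a topic and performs a
// synchronous key exchange with a peer, returning the ids of ephemeral
// symmetric keys that could then be used for symmetric sends.
// Sketch only: method names and argument lists are assumptions.
func initiateHandshake(client *rpc.Client, peerPubKeyHex, topic string) ([]string, error) {
	ctx := context.Background()
	if err := client.CallContext(ctx, nil, "pss_addHandshake", topic); err != nil {
		return nil, err
	}
	var symKeyIDs []string
	// sync=true waits for the peer's response; flush=true discards existing keys.
	err := client.CallContext(ctx, &symKeyIDs, "pss_handshake", peerPubKeyHex, topic, true, true)
	return symKeyIDs, err
}
```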
Advanced Tutorial
Here we introduce some advanced usages of AutoSolvate. To learn the basic usages, please refer to the basic Tutorial page.
Advanced Example 1: Custom Solvent
Apart from the 5 common solvents contained in AutoSolvate, the user can use custom solvents found in databases or literature to build the solvated structure, as long as the .frcmod and .off files are available.
Here we show a simple use case. We are still going to work on the neutral naphthalene molecule used in the basic Tutorial. However, this time we will put it in a custom solvent, dimethylsulfoxide (DMSO), which is not contained in AutoSolvate.
Step 1: Find custom solvent force field files
Theoretically, we can generate GAFF force field for any solvent, and use that for our simulation.
However, it is ideal to use the solvent force fields that have been verified in publications and can reliably reproduce experimental results.
For example, if you want to simulate some solute in DMSO, you may want to look for existing Amber force field of DMSO. One online resource is the AMBER parameter database hosted by the Bryce Group at the University of Manchester:.
On the website, you may find some solvent boxes available, including DMSO. You will find two downloadable files for the DMSO solvent box:
OFF: The DMSO solvent box library file
FRCMOD: The DMSO force field modification file
You can download them and save as: dmso.frcmod and dmso.off.
Next, it is very important to find out the name of the solvent box file that will be recognized by AmberTools, and pass this name to AutoSolvate.
If you open the dmso.off file, you will see the first few lines as below:
1!!index array str 2 "d" 3!entry.d.unit.atoms table str name str type int typex int resx int flags int seq int elmnt dbl chg 4 "S" "S" 0 1 131073 1 16 0.307524 5 "CT1" "CT" 0 1 131073 2 6 -0.262450
Notice that for a solvent box OFF file, the 2nd line is the name of the solvent box that can be recognized by AmberTool/tleap. In this case, the name is d. That means, if one loads this OFF file and uses tleap to add the solvent box, the corresponding command should be:
>>> solventbox [solute_unit_name] d [box_size] [closeness]
So d is the solvent name that we should pass to AutoSolvate.
Note
If you don’t like the original solvent box name given in the OFF file, feel free to change it to something else. For example, you can change the 2nd line of dmso.off to “DMSO”. Later you want to pass the new name, “DMSO” to AutoSolvate
Step 2: Run AutoSolvate with the custom solvent
To generate the solvent box structure and MD prmtop files with custom solvent, the basic procedure is the same as the simple example about adding water (see Tutorial).
The only difference is to provide 3 extra options: #. solvent name (not the real name, but the name given in OFF file) with option -s #. solvent OFF file path with option -l #. solvent FRCMOD file path with option -p
Assuming that you have naphthalene_neutral.xyz, dmso.off, dmso.frcmod files all in the current working directory, and the environment with AutoSolvate installed has been activated. To add the DMSO solvent box to the neutral naphthalene molecule, you can simply run the following command:
>>> autosolvate boxgen -m naphthalene_neutral.xyz -s d -l dmso.off -p dmso.frcmod
This command should generate the solvated files: d_solvated.inpcrd, d_solvated.prmtop, and d_solvated.pdb.
Advanced Example 2: Automated recommendation of solvent-solute closeness
The automated recommendation of solvent-solute closeness allows to generate initial structures where the closeness is closer to the equilibrated closeness. To demonstrate the automated recommendation of solvent-solute closeness run the following command:
>>> autosolvate boxgen -m naphthalene_neutral.xyz -s water -c 0 -u 1 -t automated
with the option -t automated the closeness will be automatically determined from the solvent identity. | https://autosolvate.readthedocs.io/en/latest/advancedTutorial.html | 2022-06-25T02:31:33 | CC-MAIN-2022-27 | 1656103033925.2 | [] | autosolvate.readthedocs.io |
Bank Account Data
Instant access to income and transaction history data
Overview
Metamap's Bank Account Data provides instant access (with user consent) to bank account data such as raw transaction history (3 to 12 months), balance, account, and identity information & enriched data (transaction categorisation and income verification)
User's bank account data returned in the Dashbard.
Availability
We currently support the most popular and top banks and financial institutions across Latin America, Africa and South East Asia.
- This can be integrated through our (1) mobile & web SDKs and through (2) direct link.
Features
- Instant access to bank account financial data
Users can login using their bank account credentials to securely and accurately share all of their financial data stored within their account instead of filling out long forms.
- Access to structured bank account data
With our Bank Account Data merit, once your users provide bank account credentials or financial documents, Metamap sends you their financial information in a consistent and easily digestible data format. The data will be returned in a single call and will cover data such as: Identity, Accounts, Transactions and Balance.
- Enriched transaction data
By analysing the transaction history data (3 to 12 months) we are able to categorise each transaction (into 60+ categories), understand income sources, track spending patterns over months and more.
- Identity verification
With the account information verified by the financial institution the name, phone number and email can be compared with other Metamap or external sources to verify that your user is who they say they are..
Setup
There are 3 main steps to set up the Bank Account Data merit:
- Setup your metamap on the dashboard
- Use our quick start integration steps
- Review the verification results
Step 1: Setup a Metamap
The first step to setting up a Bank Account Data merit is to create a new metamap in the Dashboard. Once you've created a new metamap, add the Bank Account Data product to the user flow, and enable the countries that you want to support.
Dashboard: Add the Bank Account merit to your metamap.
Step 2: Integrate
Currently you can use Metamap's Bank Account Data in two ways:
-.
Here's what your users will see if you use Metamap's prebuilt UX:
Animated GIF of user flow on a mobile interface.
To implement this:
- Setup the metamap for Bank Account
Step 3: Process verification results
There are two main ways to visualize and use the data
- Dashboard Verification Results
In the dashboard, visit the Verifications tab, and click on a Verification to review the results.
- Webhook verification results
You will need to configure your webhooks, then handle the webhook responses that will be sent to your webhook URL.
{ "flowId": "61654d7549baf62d79e6f632", "accounts": [ { "name": "Savings Account", "type": "CASA", "number": "SA111111 ", "currency": "MXN", "transactions": [ { "date": "2021-08-19T00:00:00.000000", "amount": "1000.00", "description": " Deposit" }, { "date": "2021-08-03T00:00:00.000000", "amount": "1500.00", "description": "Withdrawal" }, { "date": "2021-08-02T00:00:00.000000", "amount": "1500.00", "description": "Withdrawal" }, { "date": "2021-08-01T00:00:00.000000", "amount": "12000.00", "description": "Deposit" } ], "current_balance": "10000.00", "available_balance": "10000.00" }, { "name": "Chequing Account", "type": "CASA", "number": "CA111111", "currency": "MXN", "transactions": [ { "date": "2021-08-19T00:00:00.000000", "amount": "2500.00", "description": "Grocery Purchase" }, { "date": "2021-08-19T00:00:00.0000 00", "amount": "3200.00", "description": "Gasoline" } ], "current_balance": "15000.00", "available_balance": "15000.0 0" } ], "identity": { "name": "David Applewood", "email": "[email protected]", "phone": "91111593" }, "eventName": "financial_institution_scrape_data_received", "timestamp": "2021-10-21T12:51:16.522Z", "institution": { "id": "banamex_mx", "name": "Banamex", "type": "Bank" }, "verificationId": "617161d95f83df001bcfade6" }
Reference
Available Data Types
Supported Institutions
The available data per institution is indicated with green dots ():
Updated 1 day ago | https://docs.getmati.com/docs/bank-account-data | 2022-06-25T01:48:01 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['https://files.readme.io/71e2af8-Bank_Account_-_Verification_result-_expanded.jpg',
"Bank Account - Verification result- expanded.jpg User's bank account data returned in the Dashbard."],
dtype=object)
array(['https://files.readme.io/71e2af8-Bank_Account_-_Verification_result-_expanded.jpg',
"Click to close... User's bank account data returned in the Dashbard."],
dtype=object)
array(['https://files.readme.io/d6f7797-Bank_Account_Data_-_Flowbuilder_Zoom.png',
'Bank Account Data - Flowbuilder Zoom.png Dashboard: Add the Bank Account merit to your metamap.'],
dtype=object)
array(['https://files.readme.io/d6f7797-Bank_Account_Data_-_Flowbuilder_Zoom.png',
'Click to close... Dashboard: Add the Bank Account merit to your metamap.'],
dtype=object)
array(['https://files.readme.io/35769c6-Bank_Account_Data_-_Demo.gif',
'Bank Account Data - Demo.gif Animated GIF of user flow on a mobile interface.'],
dtype=object)
array(['https://files.readme.io/35769c6-Bank_Account_Data_-_Demo.gif',
'Click to close... Animated GIF of user flow on a mobile interface.'],
dtype=object) ] | docs.getmati.com |
Device Manager
Applies To: Windows 7, Windows Server 2008 R2
You can use Device Manager to install and update the drivers for hardware devices, change the hardware settings for those devices, and troubleshoot problems.
A device driver is software that allows Windows to communicate with a specific hardware device. Before Windows can use any new hardware, a device driver must be installed. | https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754610(v=ws.11)?redirectedfrom=MSDN | 2022-06-25T02:51:42 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.microsoft.com |
.
フィードバック
フィードバックありがとうございました
このトピックへフィードバック | https://docs.us.sios.com/sps/8.6.4/ja/topic/lcdsync | 2022-06-25T02:28:51 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.us.sios.com |
butter¶
build MIMO butterworth filter of order ord and cut-off freq over Nyquist freq ratio Wn. The filter will have N input and N output and N*ord states.
Note: the state-space form of the digital filter does not depend on the sampling time, but only on the Wn ratio. As a result, this function only returns the A,B,C,D matrices of the filter state-space form. | https://ic-sharpy.readthedocs.io/en/latest/includes/linear/src/libss/butter.html | 2022-06-25T01:53:06 | CC-MAIN-2022-27 | 1656103033925.2 | [] | ic-sharpy.readthedocs.io |
permlock
flisp
jlinstackwalk (Win32)
flisp itself is already threadsafe, this lock only protects the
jl_ast_context_list_tpool
The following is a leaf lock (level 2), and only acquires level 1 locks (safepoint) internally:
- typecache
- Module->lock are a level 6 lock, which can only recurse to acquire locks at lower levels:
- codegen
- jlmodulesmutex
LLVMContext : codegen lock
Method : Method->writelock
- roots array (serializer and codegen)
- invoke / specializations / tfunc modifications | https://docs.juliacn.com/latest/devdocs/locks/ | 2022-06-25T02:05:40 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.juliacn.com |
Multiple Consents
Yapily multiple consents represent the cases where the
Institution returns more than one authorisation code, for example with AMEX, if the user
authorises their consent to share two or more accounts, AMEX will return a consent for each card.
WarningWarning
We do not recommend executing account authorisation requests for AMEX without either using a
callback or a custom redirect. If done so, if the user
authorises more than one AMEX account, you will only be able to track the first account authorised by the bank using the consent id (returned
in authorisation request response). Any extra consents will be created but you will not be able to receive the consent id for the additional
consents without using the callback or redirect.
Use with a callback
If you have created an authorisation request with a
callback and the bank returns multiple consents, you will receive the consents as query parameters
at the
callback in the format:
?consent={{value1}}&consent={{value2}}...
e.g.
Use with a redirect url
NoteNote
This section only applies if you have your own Open Banking AISP/PISP licenses and are using your own certificates to register with each Yapily
Institution. See Redirect Url for more information.
For AMEX, the response at the redirect will contain multiple comma separated
authToken values and one
state:
?authToken={{value1}},{{value2}}...&state={{value}}
e.g.
You will then need to execute (Forwarding) Send OAuth2 Code for each
authToken with the
same state to retrieve a
consentToken for each of the user's accounts that have been authorised for sharing.
Use with one-time-token
Similarly, when you also specify to use a
one-time-token in the authorisation request and the bank returns multiple consents, you will get one-time
tokens as query parameters at the
callback in the format:
?one-time-token={{value1}}&one-time-token={{value2}}...
e.g.
You will then need to execute Exchange One Time Token for each
one-time-token to retrieve a
consentToken for
each of the user's accounts that have been authorised for sharing. | https://docs.yapily.com/pages/knowledge/yapily-concepts/multiple_consents/ | 2022-06-25T02:39:37 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['https://storage.googleapis.com/static.yapily.com/images/documentation/2020/docs_amex_multiple_consents_callback_example.png',
'docs_amex_multiple_consents_callback_example'], dtype=object)
array(['https://storage.googleapis.com/static.yapily.com/images/documentation/2020/docs_amex_multiple_consents_callback_ott_example.png',
'docs_amex_multiple_consents_callback_ott_example'], dtype=object) ] | docs.yapily.com |
Searching
Searching for users in an organization
Searching for users can be done by entering part of a users name in the search field to find users containing the search term. It is also possible to use search operators for advanced searches. Below is a table of the available search operators.
Updated over 2 years ago | https://docs.pritunl.com/docs/searching | 2022-06-25T02:25:26 | CC-MAIN-2022-27 | 1656103033925.2 | [] | docs.pritunl.com |
Bucket Folders
Overview
Bucket Folders allow you to define where files are stored in your AWS bucket.
S-Drive files are stored in AWS using a folder path that, by default, contains the parent object record id (such as the Account record id) as the top level folder, as shown here.
Most users only interact with S-Drive files using the S-Drive user interface, so the location of files in AWS is not important. However if you want to see your files in the AWS bucket, it can be difficult to find them because of the way they’re stored.
Using a Bucket Folder makes seeing and organizing your files in AWS easier by allowing you to define the top folder of the path, as shown below.
Bucket Folders are very flexible. You can configure them as a text field, number field, picklist, even a formula field.
Setting up Bucket Folder
Setting up a Bucket Folder has two steps:
Create a field on the parent object to define the Bucket Folder.
Specify that field in S-Drive Configuration.
Creating a Bucket Folder field
The Bucket Folder field is defined on the file object’s parent. For example, for Account File, you must create a bucket folder field on Account. The exception is S-Drive Tab (S3Object), which we’ll get into below. We’ll use Account Files as an example here.
Go to Setup->Object Manager->Account
Go to Fields and Relationships
Click New to create a new field
Choose what type of field it is, such as Text, Formula, Picklist.
Give your field a name (ex: Bucket Folder)
You can choose a default value if you wish, or you can choose to populate the Bucket Folder field on each account you create
Once you’ve finished, click Save
Configuring Bucket Folder
Go to S-Drive Configuration General Settings Tab
Scroll down to Upload Settings.
Next to Configure Bucket Folder, click Configure.
This will open the following page:
The Bucket Folder Field is the field you created above. Fill in the field name and click the “Enabled” checkbox.
Click Save
Once defined, the Bucket Folder field needs to be populated. This can be done in various ways:
Setting the Bucket Folder field to a default value
Filling in the Bucket Folder field manually when a new parent record is created
Using automation to fill in the Bucket Folder value
Defining the Bucket Folder field as a formula field, which will automatically populate on the parent record.
Now when you upload files, they will show in the bucket with the top level folder your specified. | https://docs.sdriveapp.com/2.12/bucket-folders | 2022-06-25T00:48:48 | CC-MAIN-2022-27 | 1656103033925.2 | [array(['../2.12/1659275020/image-20210730-195316.png?inst-v=9e174774-eaf9-4a16-b5e1-168bdddbe411',
None], dtype=object)
array(['../2.12/1659275020/image-20210730-195626.png?inst-v=9e174774-eaf9-4a16-b5e1-168bdddbe411',
None], dtype=object) ] | docs.sdriveapp.com |
Server Maintenance Commands
cluster
The cfy cluster command is used to manage clusters of tenants in Cloudify Manager. Optional flags These commands support the common CLI flags. Commands Start Usage cfy cluster start [OPTIONS] Start a Cloudify Manager cluster with the current manager as the master. This initializes all the Cloudify Manager cluster components on the current manager, and marks it as the master. After that, other managers can join the cluster by passing this manager’s IP address and encryption key.
init.
ldap
The cfy ldap command is used to set LDAP authentication to enable you to integrate your LDAP users and groups with Cloudify. Optional flags These commands support the common CLI flags. Commands set Usage cfy LDAP set [OPTIONS] Set Cloudify Manager to use the LDAP authenticator. Required flags -s, --ldap-server TEXT - The LDAP address against which to authenticate, for example: ldaps://ldap.domain.com. -u, --ldap-username TEXT- The LDAP admin username to be set on the Cloudify Manager.
profiles
The cfy profiles command is used to manage Cloudify profiles. Each profile can have its own credentials for managers and Cloudify various environmental settings Optional flags These commands support the common CLI flags. Commands list Usage cfy profiles list [OPTIONS] List all profiles. Example $ cfy profiles list ... Listing all profiles... Profiles: +---------------+--------------+----------+-------------------------------------+----------+-----------+---------------+------------------+----------------+-----------------+ | name | manager_ip | ssh_user | ssh_key_path | ssh_port | rest_port | rest_protocol | manager_username | manager_tenant | bootstrap_state | +---------------+--------------+----------+-------------------------------------+----------+-----------+---------------+------------------+----------------+-----------------+ | *10.
snapshots
The cfy snapshots command is used to manage data snapshots of Cloudify manager. You must have admin credentials to create and restore snapshots. You can use the command to create, upload, download, delete and list snapshots and also to restore a Manager using a snapshot archive. For more about working with snapshots, go to: snapshots. Optional flags These commands support the common CLI flags. Commands create Usage cfy snapshots create [OPTIONS] [SNAPSHOT_ID]
ssh.
tenants
The cfy tenants command is used to create and manage tenants on Cloudify Manager. You can run commands on a tenant other than the one that you are logged into by specifying the name of the tenant to which the command applies. For example, cfy tenants add-user USERNAME -t TENANT_NAME can be used to add a user to a different tenant. Requirements To use the command you must have Cloudify sys_admin credentials.
user credentials. User names and passwords must conform to the following requirements:
users. | https://docs.cloudify.co/4.5.5/cli/maint_cli/ | 2019-04-18T14:50:15 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.cloudify.co |
You can access Demo platform and give it a try after the registration at DSX. You only need to enter your email and verify it to get an account.
Once you've set a password, you can already access Demo platform.
Click Trade section
You'll get to the trading platform. In the lower left corner you can see the name of the current account. If it is Trading DEMO - you are all set. You can start trading. If you see any other name - click it to choose the Demo account.
That's it. You can now practice at DSX platform. | https://docs.dsx.uk/dsx/getting-started-with-dsx/how-to-get-demo-account | 2019-04-18T15:06:30 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://downloads.intercomcdn.com/i/o/98796963/1623a839bc943c3c9bdec709/image.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/98799328/a44551f2d2e31d9355d559da/image.png',
None], dtype=object) ] | docs.dsx.uk |
Alert policies in the security and compliance center
You can use the new alert policy and alert dashboard tools in the Office 365 and Microsoft 365 security and compliance centers to create alert policies and then view the alerts that are generated when users perform activities that match the conditions of an alert policy. Alert policies build on and expand the functionality of activity alerts by letting you categorize the alert policy, apply the policy to all users in your organization, set a threshold level for when an alert is triggered, and decide whether or not to receive email notifications. There's also a View alerts page in the security and compliance center where you can view and filter alerts, set an alert status to help you manage alerts, and then dismiss alerts after you've addressed or resolved the underlying incident. We've also expanded the type of events that you can create alerts for. For example, you can create alert policies to track malware activity and data loss incidents. Finally, we've also included a number of default alert policies that help you monitor assigning admin privileges in Exchange Online, malware attacks, and unusual levels of file deletions and external sharing.
Note
Alert policies are available for organizations with an Office 365 Enterprise or Office 365 US Government E1/G1, E3/G3, or E5/G5 subscription. However, some advanced functionality is only available for organizations with an E5/G5 subscription, or for organizations that have an E1/G1 or E3/G3 subscription and an Office 365 Advanced Threat Protection (ATP) P2 or Office 365 Advanced Compliance match the conditions of an alert policy.
An admin in your organization creates, configures, and turns on an alert policy by using the Alert policies page in the security and compliance center. You can also create alert policies by using the New-ProtectionAlert cmdlet in PowerShell. To create alert policies, you have to be assigned the Organization Configuration role or the Manage Alerts role in the Security & Compliance Center.
A user performs an activity that matches the conditions of an alert policy. In the case of malware attacks, infected email messages sent to users in your organization will trigger an alert.
Office 365 generates an alert that's displayed on the View alerts page in the security and compliance center. Also, if email notifications are enabled for the alert policy, Office 365 sends an notification to a list recipients. The alerts that an admin or other users can see on the View alerts page is determined by the roles assigned to the user. For more information, see the RBAC permissions required to view alerts section.
An admin manages alerts in the security and compliance center. Managing alerts consists of assigning an alert status to help track and manage any investigation. security and compliance center. For example, you can view alerts that match the conditions from the same category or view alerts with the same severity level.
To view and create alert policies, go to and then click Alerts > Alert policies.
An alert policy consists of the following settings and conditions.
Activity the alert is tracking - You create a policy to track an activity or in some case/G1 or E3/G3 subscription with a Threat Intelligence add-on subscription.
Activity conditions - For most activities, you can define additional conditions that must be met for an alert to be triggered.. Note that the available conditions are dependent on the selected activity. our organization.
If you select the setting based on unusual activity, Office 365 establishes a baseline value that defines the normal frequency for the selected activity; it takes up to 7 days to establish this baseline, during which alerts won't be generated. After the baseline is established, an alert will/G1 or E3/G3 subscription with an Office 365 ATP P2 or Advanced Compliance add-on subscription. Organizations with an E1/G1 and E3/G3 subscription can only create an alert policy where an alert is triggered every time that an activity occurs.
Alert category - To help with tracking and managing the alerts generated by a policy, you can assign one of the following categories to a policy.
Data governance
Data loss prevention
Mail flow
Permissions
Threat management
Others
When an activity occurs that matches the conditions of the alert policy, the alert that's generated is tagged with the category defined in this setting. This allows you to track and manage alerts that have the same category setting on the View alerts page in the security View additional to email notifications, you or other administrators can view the alerts that are triggered by a policy on the View alerts page. Consider enabling email notifications for alert policies of a specific category or that have a higher severity setting.
Default alert policies
Office 365 provides built-in alert policies that help identify Exchange admin permissions abuse, malware activity, and data governance risks. On the Alert policies page, the name of these built-in policies are in bold and the policy type is defined as System. These policies are turned on by default. You can turn these policies off . Note that that category is used to determine which alerts a user can view on the View alerts page. For more information, see the RBAC permissions required to view alerts section.
The table also indicates the Office 365 Enterprise and Office 365 US Government plans required for each one. Note that some default alert policies are available if your organization has the appropriate add-on subscription in addition to an E1/G1 or E3/G3 subscription.
Note that the unusual activity monitored by some of the built-in policies is based on the same process as the alert threshold setting that was previously described. Office 365 establishes a baseline value that defines the normal frequency for "usual" activity. Alerts are then triggered when the frequency of activities tracked by the built-in alert policy greatly exceeds the baseline value.
Viewing alerts
When an activity performed by users in your organization match the settings of an alert policy, an alert is generated and displayed on the View alerts page in the security and compliance center Depending on the settings of an alert policy, an email notification is also sent to a list of specified users when an alert is triggered. For each alert, the dashboard on the View. See the Managing alerts section for more information about using the status property to manage alerts.
To view alerts, go to and then click Alerts > View alerts.
You can use the following filters to view a subset of all the alerts on the View alerts page.
Status - Use this filter to show alerts that are assigned a particular status; the default status is Active. You or other administrators can change the status value.
Policy - Use this filter to show alerts that match the setting of one or more alert policies. Or, you can just.
Source - Use this filter to show alerts triggered by alert policies in the security and compliance center or alerts triggered by Office 365 Cloud App Security policies, or both. For more information about Office 365 Cloud App Security alerts, see the Viewing Cloud App Security alerts section.
RBAC permissions required to view alerts
Note
The functionality described in this section will roll out to organizations beginning on February 20, 2019, and will be completed worldwide by the end of March 2019.
The Role Bases Access Control (RBAC) permissions assigned to users in your organization determines which alerts a user can see on the View alerts page. How is this accomplished? The management roles assigned to users (based on their membership in role groups in the Security & Compliance Center) determine which alert categories a user can see on the View alerts page. Here are some examples:
Members of the Records Management role group can view only the alerts that are generated by alert policies that are assigned the 6 different alert categories. The first column in the tables lists all roles in the Security & Compliance Center. A check mark indicates that a user who is assigned that role can view alerts from the corresponding alert category listed in the top row.
To see which category a default alert policy is assigned to, see the table in the Default alert policies section. Security & Compliance Center. Go to the Permissions page, and click a role group. The assigned roles are listed on the flyout page.
Managing alerts
After alerts have been generated and displayed on the View alerts page in the security and compliance center, you can triage, investigate, and resolve them. click an alert to display a flyout page with details about the alert. The detailed information depends on the corresponding alert policy, but it typically includes the following: name of the actual operation that triggered the alert (such as a cmdlet), a description of the activity that triggered the alert, the user (or list of users) who triggered the alert, and the name (and link to ) of the corresponding alert policy.
The name of the actual operation that triggered the alert, such as a cmdlet or an audit log operation.
A description of the activity that triggered the alert.
The user who triggered the alert; this is included only for alert policies that are set up to track a single user or a single activity.
The number of times the activity tracked by the alert was performed. Note that this number might not match that actual number of related alerts listed on the View alerts page because additional alerts might have been triggered.
A link to an activity list that includes an item for each activity that was performed that triggered the alert. Each entry in this list identifies when the activity occurred, the name of actual operation, (such as "FileDeleted") and the user who performed the activity, the object (such as a file, an eDiscovery case, or a mailbox) that the activity was performed on, and the IP address of the user's computer. For malware related alerts, this links to a message list.
The name (and link to ) of the corresponding alert policy.
Suppress email notifications - You can turn off (or suppress) email notifications from the flyout page for an alert. When you suppress email notifications, Office 365 won't send notifications when activities or events that match the conditions of the alert policy. However, alerts will continue to be trigger View alerts page.
Viewing Cloud App Security alerts
Alerts that are triggered by Office 365 Cloud App Security policies are now displayed on the View alerts page in the security and compliance center. This includes alerts that are triggered by activity policies and alerts that are triggered by anomaly detection policies in Office 365 Cloud App Security. This means you can view all alerts in the security and compliance center. Note that Office 365 Cloud App Security is only available for organizations with an Office 365 Enterprise E5 or Office 365 US Government G5 subscription. For more information, see Overview of Office 365 Cloud App Security.
Additionally, organizations that have Microsoft Cloud App Security as part of an Enterprise Mobility + Security E5 subscription or as a standalone service can also view Cloud App Security alerts that are related to Office 365 apps and services in the Security & Compliance Center.
To display only Cloud App Security alerts in the security and compliance center, use the Source filter and select Cloud App Security.
Similar to an alert triggered by a security and compliance center alert policy, you can click a Cloud App Security alert to display a flyout page with details about the alert. The alert includes a link to view the details and manage the alert in the Cloud App Security portal and a link to the corresponding Cloud App Security policy that triggered the alert. See Review and take action on alerts in Office 365 Cloud App Security.
Important
Changing the status of a Cloud App Security alert in the security and compliance center won't update the resolution status for the same alert in the Cloud App Security portal. For example, if you mark the status of the alert as Resolved in the security and compliance center, the status of the alert in the Cloud App Security portal is unchanged. To resolve or dismiss a Cloud App Security alert, manage the alert in the Cloud App Security portal.
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/office365/securitycompliance/alert-policies | 2019-04-18T15:29:05 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['media/e02a622d-b429-448b-8107-dd1a4770b4e0.png',
'Overview of how alert policies work'], dtype=object)
array(['media/09ebd451-8e84-44e1-aefc-63e70bba4d97.png',
'In the security and compliance center, click Alerts, then click Alert policies to view and create alert policies'],
dtype=object)
array(['media/ec5ea59b-bf61-459f-8b65-970ab4bb8bcc.png',
'In the security and compliance, click Alerts, then click View alerts to view alerts'],
dtype=object)
array(['media/filtercasalerts.png',
'Use the Source filter to display only Cloud App Security alerts'],
dtype=object)
array(['media/casalertdetail.png',
'Alert details contain links to the Cloud App Security portal'],
dtype=object) ] | docs.microsoft.com |
Using oVirt Standard-CI with GitHub
The oVirt CI system can provide automated building, testing and release services for projects in GitHub as long as they comply with the Build and Test standards.
Automated functionality of the oVirt CI system
When projects are configured to use the oVirt CI system, the system responds automatically to various event as they occur in GitHub.
Here are actions that the CI system can be configured to carry out automatically:
- The 'check-patch' stage is run automatically when new pull-requests are created.
- The 'check-merged' stage is run automatically when commits are pushed to branches. In particular, this happens when pull-requests are merged.
- If release branches are configured (see below), the 'build-artifacts' stage is run automatically when commits are pushed (Or PRs are merged) to those branches. The built artifacts are then submitted to the oVirt change queues for automated system testing with ovirt-system-tests.
Manual functionality of the oVirt CI system
Certain parts of the CI system can be activated manually by adding certain trigger phrases as comments on pull-requets.
The following table specifies which trigger phrases can be used:
The contributors white list
GitHub allows anyone to send pull-requests to any project. This is a reasonable
policy for an open source project, but it can pose a risk to the CI system
because one can send a PR with a malicious '
check-patch.sh' script.
To mitigate this risk, the CI system only checks pull-requests by members of the GitHub organisation the checked project belongs to (E.g oVirt) automatically.
After checking that a PR from a new contributor does not contain malicious code, members of the project's GitHub organisation can activate the CI system for it in one of two ways:
- Add a comment with
ci test pleaseon the PR - This will make the CI system run the tests once.
- Add a comment with
ci add to whitliston the PR - This will make the CI system run the tests and additionally add the user that submitted the PR to a temporary white list so that further changes to the same PR or other PRs from that user are tested automatically by the system.
The white list is temporary and will be purged from time to time. It is recommended that long term contributors be added as members to the GitHub organisation.
Enabling oVirt CI for a GitHub project
Given that a project complies with the oVirt Build and Test standards and
includes an
stdci.yaml file as specified above, a few simple steps need
to be carried out to enable oVirt CI to work with it.
These steps include:
- Adding permissions for the oVirt CI system in the project repository
- Enabling the oVirt CI system to handle the project.
- Adding a GitHub hook for handling PR events.
- Adding a GitHub hook for handling push events.
Following are detailed instructions for carrying out the steps above. While it is certainly possible for anyone to try them out, most people should simply open a Jira ticket asking the oVirt CI team to do so. This could be done by simply sending an email to [email protected] and specifying the project organisation and name.
Adding permissions for the oVirt CI system in a project repository
It is best to grant 'admin' permissions for the 'ovirt-infra' user in the project. This will allow the system to automatically configure some of the webhooks it needs.
At the very least, 'write' permissions should be granted, so that the system could wrote PR comments and commit test results.
It is possible to use a different user then 'ovirt-infra' so that comments will come from a different identity, but this requires some work to configure credentials for that user in the oVirt Jenkins server.
Enabling the oVirt CI system to handle a project
Since anyone can setup hooks in GitHub that would send data to the oVirt CI system. Projects need to be explicitly enabled in the system for it to handle them.
To do that, the project name needs to be specified under the right organisation
section in the
jobs/confs/projects/standard-pipelines.yaml file in the
jenkins repo. Here is an example of how the section for the 'oVirt'
organisation looks like:
- project: name: oVirt-standard-pipelines-github github-auth-id: github-auth-token org: oVirt project: - ovirt-ansible jobs: - '{org}_{project}_standard-check-pr'
The
ovirt-ansible can be seen to be specified in the list under the
project sub key. Other projects in the 'oVirt' organisation should be added
to the same list, and a patch with the modified file should be submitted to
Gerrit for review.
Adding a GitHub hook for handling PR events
If the oVirt CI system user had been given 'admin' permissions to the project prior to merging the patch to enable the CI system as specified above. This hook can be configured automatically by the system.
To configure the hook manually, go the project 'Settings' page, select 'Webhooks' from the left menu and click on 'Add webhook'.
In the 'Payload URL' field fill in the following URL:
In 'Content Type' fill in '
application/x-www-form-urlencoded'
Leave the 'Secret' field empty.
Choose the 'Let me select individual events' option and check the following event check boxes: * Issue comments * Pull request
Click on he green 'Add Webhook' button to add the webhook.
Adding a GitHub hook for handling push events
To configure the hook, go the project 'Settings' page, select 'Webhooks' from the left menu and click on 'Add webhook'.
In the 'Payload URL' field fill in the following URL:
In 'Content Type' fill in '
application/json'
Leave the 'Secret' field empty.
Choose the 'Just the push event' option.
Click on he green 'Add Webhook' button to add the webhook. | https://ovirt-infra-docs.readthedocs.io/en/latest/CI/Using_STDCI_with_GitHub/index.html | 2019-04-18T14:54:56 | CC-MAIN-2019-18 | 1555578517682.16 | [] | ovirt-infra-docs.readthedocs.io |
Tax facts tax year july 1, 201 6 through june 30, 201 7 c hange tax bill mailing address to update your mailing address, complete and print the property
Senior freeze (property tax reimbursement) income limits history below are the income limits for the property tax reimbursement program. all income received...
Page 2 the illinois property tax system illinois department of revenue property tax administration responsibilities the department administers the following aspects...
Pio-16 (r-03/16) page 2 of 3 what determines my property tax? your tax bill is based on two factors: • equalized assessed value (eav) of your property, and
Dte 105a rev. 9/16 please read before you complete the application. what is the homestead exemption? the homestead exemp-tion provides a reduction in property...
Real estate tax relief homestead exemption. 3ohdvh frpsohwh dqg uhwxuq wklv irup wr wkh 2iilfh ri 3urshuw\ $vvhvvphqw e\ 1ryhpehu. 1. owner name 1:
Who should be eligible? homestead exemptions provide "across the board" tax breaks-but these exemptions are not always available to the low-income renters
Claim for property tax exemption on dwelling house of disabled veteran or surviving spouse/civil union or domestic partner of disabled veteran or serviceperson
Senior & disability property tax discounts qualification & deadline requirements over 65 exemption qualifications: applicant must be 65 years of age before july 1st...
Application for maine homestead property tax exemption 36 m.r.s. §§ 681-689 completed forms must be filed with your local assessor by april 1.
General instructions the state of maryland provides a credit for the real property tax bill for homeowners of all ages who qualify on the basis of gross household
Washingt ae eare f eee 1 september 2016 property tax exemption for senior citizens and disabled persons if you are a senior citizen or disabled
Short form a property tax exemption is available to qualifying senior citizens and the surviving spouses of those who previously qualified. there
Page 1of 15 homeowner tax benefits |pre-qualifying checklist hb-01 rev. 01.10.17 please read but do not submit with your application please note:if the property...
Ptfs0051 09/16 laws revised code of washington (rcw) chapter 84.39 - property tax exemptions - widows or widowers of veterans. we will provide copies of
When to file: application for all exemptions must be made between january 1 and march 1 of the tax year. however, at the option of the property appraiser, original...
Exemptions, exclusions and tax relief homeowners' exemption if you own a home and it is your principal place of residence on january 1, you may apply for an exemp-
Probably the best-known property tax exemption is the homestead exemption. first enacted in 1969, it provides property tax relief to homeowners by exempting a
Homestead exemptions homestead exemp ons reduce the taxable equalized assessed value (eav) of a property by a specific amount prior to property
Local property tax exemptions for seniors rev. 11/2016 for more information, please contact your local assessors. 3 | https://www.docs-archive.net/Property-Tax-Relief-Homestead-Exemptions.pdf | 2019-04-18T14:16:19 | CC-MAIN-2019-18 | 1555578517682.16 | [] | www.docs-archive.net |
How to Embed Typeform.
We love using Typeform here and highly recommend their platform for customer feedback forms, quizzes, surveys, etc.
To embed a form from Typeform into an Appcues slideout or modal, follow these steps: this
Copy the code, but adjust the following style (highlighted above) to ensure the form displays well inside of your Modal or Slideout
Original
iframe { position: absolute; left:0; right:0; bottom:0; top:0; border:0; }
Updated
iframe { height: 300px; }
You can fine tune the height of your form by changing the pixel value of the height in this snippet
2. Add it to your Modal or Slideout
Navigate back to your Appcues account, open the Modal or Slideout you would like to embed your Typeform in and select the green '+' arrow > Advanced > HTML and paste the embed script from Typeform & you're good to go!
ProTip: You can also hyperlink your unique Typeform URL in text, or use it in a button to open it in a new tab
If you have any questions about embedding your Typeform into the modal that weren't answered here, feel free to reach out to us at [email protected] | https://docs.appcues.com/article/102-typeform | 2019-04-18T14:18:28 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/58544cb0c697912ffd6c1acb/file-aattJHNIUR.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5ae8b9de0428631126f1964b/file-z34yrWMglw.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5ae90a8d0428631126f199fa/file-fxOcc3wP6q.png',
None], dtype=object)
array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/559e91f8e4b0b0593824b4a9/images/5ae90cc40428631126f19a0d/file-FAqXvrbrut.png',
None], dtype=object) ] | docs.appcues.com |
How many emails can I send per day?
For Google accounts:
As of this writing, information on limits can be found here for Gmail accounts and here for G Suite accounts.
If we detect that you've reached a Google quota limit we will pause sending for a period of time and then automatically resume later. However, do not rely on this detection to protect yourself from going over your quota because 1) we usually only get this notice after you've already exceeded your quota and 2) sometimes we're not able to make that detection fast enough to stop emails that are already queued up.
Your best bet is to use Mailshake's sending calendar settings to limit how we schedule your messages.
Keep in mind, if you're sending via an alias, your Google limits apply but so also might any limits that the service that runs your alias email domain has.
For SMTP accounts:
When sending via SMTP, you'll need to refer to your email provider's best practices for how many emails you can send per day.
BETA: As of this writing, SMTP accounts are in beta and sending is limited to 100 emails/day. | https://docs.mailshake.com/article/57-how-many-emails-can-i-send-per-day | 2019-04-18T15:35:08 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.mailshake.com |
Performance Analysis collects various performance and configuration data directly from Windows, and then requires a higher level of access to the operating system than the Event Calendar. The easiest approach is to either make the SentryOne monitoring service account a domain administrator level account or a member of the local administrators group on any watched targets.
In some scenarios it may be possible to use a non-administrator service account, although this isn't an officially supported approach. Complete the following steps to use a non-administrator service account:
- Enable DCOM on the SentryOne Server machine, SentryOne Client machine, and the server to be watched. For more information, see the Securing a Remote WMI Connection article.
- Give the SentryOne monitoring service account proper permissions to the required WMI namespaces by going to the properties for WMI Control under Services and Applications in the Computer Management client. On the Security tab, ensure that the SentryOne monitoring service account has at least Enable Account and Remote Enable checked for the CIMV2 and WMI nodes.
Note: WMI providers and versions vary from server to server, and whether non-administrative access functions properly for a particular WMI provider is directly dependent on whether the provider was designed to support this. Many providers can't support this, including many designed by Microsoft®.
Additional Information: For more information about SentryOne requirements, see the How to check SentryOne requirements article from Sabin.io.
Example
SERVER-A is the exact same make and model as SERVER-B, and both servers are on the same domain. The SentryOne monitoring service user account is a domain user, but doesn't have administrator privileges on either server. Performance Analysis can successfully watch SERVER-A, but is unable to watch SERVER-B. The two servers are configured identically, with one exception; an additional network adapter from Acme Networking was installed in SERVER-B.
Acme Networking didn't design the associated WMI provider to support non-administrative access; therefore, Performance Analysis isn't able to successfully watch SERVER-B as a non-administrator. The only options are to either replace the network adapter with one that's known to support non-administrative access, or to contact Acme Networking to see if they have an updated version of the provider that supports non-administrative access. | https://docs.sentryone.com/help/performance-analysis-security-requirements | 2019-04-18T14:33:47 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.sentryone.com |
The Agent ConsoleVirtual Contact Center's browser-based graphical user interface (GUI) used by Agents to manage customer interactions. provides links to access your customer and case information in the Navigation Panel and the Control Panel.
Control Panel
Navigation panel: Provides icons to navigate to customer, case, and task records assigned to you, or your group, or created by you. The information is categorized by customer, case and task records.
My Customers: presents a list of customers whose cases are assigned to you, or your group, or customer records created by you. By clicking on a customer record, you can view, and edit the customer information, and list related cases, follow- ups, and tasks.
My Tasks: lists all tasks you have performed or plan to perform related to your customers such as scheduling calls.
Note: If you use an external CRM, Control Panel provides the link to access the CRM.
Selecting My Customers or My Cases lists customers, and cases. You can customize the list to view customers and cases:
My Draft Customers (customers) or My Drafts (cases) : customer records created by you in draft state; cases created by you in draft state.
Choose My Draft Customers and My Drafts to access customer or case records that you saved as draft. | https://docs.8x8.com/vcc-8-1-ProductDocumentation/WebHelpAgentSupervisorGuide/Content/AgtAgtCRMCasesViewingCustomersOrCases.htm | 2019-04-18T15:30:13 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.8x8.com |
Blueprint upload button
This Button allows uploading a blueprint to the manager. Clicking on it opens a dialog for providing the following details:
- Blueprint visibility - represented by a colourful icon, and can be set by clicking on it. See resource’s visibility. Default: tenant
- Blueprint archive path (local or URL)
- Name to identify the blueprint on the manager (“Blueprint name”).
- Main .yaml file in the blueprint archive (“Blueprint filename”). Default: blueprint.yaml
- Blueprint icon image file to be presented in the blueprints list widget, when in Catalog mode (optional).
Widget Settings
None | https://docs.cloudify.co/4.5.5/working_with/console/widgets/blueprintuploadbutton/ | 2019-04-18T14:50:34 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.cloudify.co |
Top Menu¶
From top menu you can manage your user profile, switch data centers or log out of the Danube Cloud.
User Profile¶
User Profile
User Settings
Address
Change password form
Alerting media
See also
The alerting media are automatically set to a user object that is reflected to the monitoring system.
SSH Keys - Public SSH keys, which will be automatically added to newly created virtual servers using disk images with support for import of SSH keys.
API Keys - An API key can be used to connect the Danube Cloud API without using a username and password. A callback key is part of a security mechanism for checking authenticity of callback requests.
See also
Detailed information about API and Callback Keys can be found in the API documentation.
Data Center Switching¶
The data center switch is used for changing the current working virtual data center.
See also
More information about virtual data centers can be found in the Virtual Data Centers section. | https://docs.danubecloud.org/user-guide/gui/topmenu.html | 2019-04-18T14:59:12 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['../_images/topmenu.png', '../_images/topmenu.png'], dtype=object)] | docs.danubecloud.org |
Hooks Manager
Custom Hooks
"Custom" hooks is a new idea that, to be perfectly frank, has yet to be polished. Think of this feature as Very-Much-In-Beta. I thought it would be helpful to use the Hooks Manager to insert snippets of markup/code in other places throughout the application, not just in the predetermined template and hook locations. For instance, it would be nice to use the Pages modules to insert dynamic content - e.g. lists of forms, lists of submissions - and not just static HTML content. Or perhaps it would be handy to be able to have hooks within some of the modules.
This is what this feature is about.
As a simple example of this hook, upgrade to the latest version of the Pages module. Then follow these steps:
- In the Hooks Manager, create a custom hook and enter "hello_world" for the Custom Hook field.
- Select "HTML" as the content type.
- Enter "Hello World!" as the content.
- Go to the Pages module and create a new page.
- Select "Smarty" as the content type and enter the following into the Page Content field:
- Finally, return to the Pages main page (the one that lists all Pages) and click the "VIEW" link. Notice that the custom hook you entered has been replaced by whatever hook content you entered! | https://docs.formtools.org/modules/hooks_manager/basic_usage/custom_hooks/ | 2019-04-18T15:24:20 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
This topic covers using .NET WCF HTTP with SOAtest.
Sections include:
The .NET WCF HTTP transport option allows you to invoke Windows Communication Foundation
(.NET 3.0, 3.5, or 4.0) web services that use the HTTP transport. For 4.0, the .NET 4 CLR / client runtime must be installed for this service.
SOAtest is able to understand WCF's system-provided
BasicHttpBinding,
WSHttpBinding, and custom bindings that use the
HttpTransportBindingElement. The .NET WCF HTTP Transport can also be used to invoke Java-based HTTP web services for testing interoperability with .Net WCF Clients. For more information, see the following:
After selecting .NET WCF HTTP from the Transport drop-down menu within the Transport tab of an applicable tool, the following options display in the left pane of the Transport tab:
The endpoint is the URL of the web service endpoint. By default, tool endpoints are set to the endpoint defined in the WSDL. Besides WSDL, there are three other endpoint.
SOAtest supports flowed transactions via the WS-Atomic Transactions protocol and MS OLE Transactions protocol for WCF web services. For more information, see .NET WCF Flowed Transactions.:
For more information about these properties please see. | https://docs.parasoft.com/plugins/viewsource/viewpagesrc.action?pageId=27522887 | 2019-04-18T14:34:00 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.parasoft.com |
Title
Lazarus Phenomena in Gay Men Living with HIV: An Exploratory Study
Document Type
Article
Abstract
Based on qualitative data collected in 1999 in Dublin, Ireland and Providence, Rhode Island, this study examines psychosocial tasks for gay men with AIDS who are experiencing "Lazarus Phenome¬na," significant improvement in health and functioning as a result of current medication advances. The data showed a range of reactions, supportive of the literature on "uncertainty in illness," and suggesting that long term survival with AIDS requires an exceptional tolerance for ambiguity and an ability to reconstruct the future-skills which may co-vary with economic/career opportunities, social supports and individual resilience
Recommended Citation
Thompson, Bruce. 2003. "Lazarus Phenomena in Gay Men Living with HIV: An Exploratory Study." Social Work in Health Care 37 (1): 87-114.
In: Social Work in Health Care, 37, 1, 87-114. | https://docs.rwu.edu/fcas_fp/11/ | 2019-04-18T15:09:05 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.rwu.edu |
Overall Architecture
Screwdriver is a collection of services that facilitate the workflow for Continuous Delivery pipelines.
Workflow
Commit new code
User starts a new build by one of the following operations:
- User pushes code to SCM
- User opens a new pull request on SCM
- User pushes code to SCM on an open pull request
- User tells Screwdriver (via API or UI) to rebuild a given commit
Notify Screwdriver
Signed webhooks notify Screwdriver’s API about the change.
Trigger execution engine
Screwdriver starts a job on the specified execution engine passing the user’s configuration and git information.
Build software
Screwdriver’s Launcher executes the commands specified in the user’s configuration after checking out the source code from git inside the desired container.
Publish artifacts (optional)
User can optionally push produced artifacts to respective artifact repositories (RPMs, Docker images, Node Modules, etc.).
Continue pipeline
On completion of the job, Screwdriver’s Launcher notifies the API and if there’s more in the pipeline, the API triggers the next job on the execution engine (
GOTO:3).
Components
Screwdriver consists of five main components, the first three of which are built/maintained by Screwdriver:
REST API
RESTful interface for creating, monitoring, and interacting with pipelines.
Web UI
Human consumable interface for the REST API.
Launcher
Self-contained tool to clone, setup the environment, and execute the shell commands defined in your job.
Execution Engine
Pluggable build executor that supports executing commands inside of a container (e.g. Jenkins, Kubernetes, Nomad, and Docker).
Datastore
Pluggable storage for keeping information about pipelines (e.g. Postgres, MySQL, and Sqlite).
Architecture with executor-queue and k8s
| https://docs.screwdriver.cd/cluster-management/ | 2019-04-18T15:04:26 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['assets/workflow.png', 'Workflow'], dtype=object)
array(['assets/arch-k8s.png', 'Architecture'], dtype=object)] | docs.screwdriver.cd |
Keyboard shortcuts & hotkeys
Starting with TestRail 4.2, TestRail now supports keyboard shortcuts for important and frequently used actions such as editing/saving objects, adding results/comments & attachments and navigating between cases or tests.
The modifier key
Some actions such as saving a test case (i.e. submitting a form) require two keys to be pressed at the same time: one modifier key and the actual action key. The modifier key depends on the platform and browser you use and is usually control or command. We use <mod> to denote the usage of the modifier key in this documentation.
For example, to submit a form and save a test case, you would press <mod>+s which either translates to control+s or command+s.
Common shortcuts
The following list of shortcuts applies to TestRail in general.
Shortcut reference
The following sections list additional supported shortcuts in TestRail. The shortcuts are grouped by entity the shortcuts apply to. For example, the shortcuts in the following Cases section apply to all case related pages.
Cases
Dashboard
Milestones
Plans
Projects
Runs
Available since TestRail 5.1 (November 2015):
Suites
Available since TestRail 5.1 (November 2015):
Tests
Administration | http://docs.gurock.com/testrail-userguide/userguide-shortcuts | 2019-04-18T15:04:15 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.gurock.com |
Rake 0.8.7 Released¶ ↑
Rake.
Changes¶ ↑
New Features / Enhancements in Version 0.8.5¶ ↑
Improved implementation of the Rake system command for Windows. (patch from James M. Lawrence/quix)
Support for Ruby 1.9's improved system command. (patch from James
Lawrence/quix)
Rake now includes the configured extension when invoking an executable (Config::CONFIG)
Bug Fixes in Version 0.8.5¶ ↑
Environment variable keys are now correctly cased (it matters in some implementations). …
Charles Nutter
– Jim Weirich | http://docs.seattlerb.org/rake/doc/release_notes/rake-0_8_7_rdoc.html | 2019-04-18T15:14:16 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.seattlerb.org |
Configuration Manager is your platform to set up your Virtual Contact Center tenant. Administrators can use a graphical user interface (GUI) to manage all components in a tenant. Configuration Manager is easy to use and requires no special software or hardware to run. It is 100% cloud-based and accessible from anywhere and anytime as long as you have a computer and Internet access. Using Configuration Manager, administrators can set up agents and supervisors, create roles and assign tasks, create campaigns, broadcast messages, create wallboards, and much more.
Use Configuration Manager to:
See Get Started and Understand the Interface to learn how to start.
Each administrator requires a computer equipped with one of the following browsers:
Known Issue: If you use Internet Explorer to run Virtual Contact Center applications, you may encounter high memory usage. To resolve this issue, clear cookies and cache, activate the setting to clear history, clear history on exit, and reboot.
Note: Virtual Contact Center is partially compatible with Safari, offering support for the Agent Console Control Panel functionality.
Note: Firefox requires the QuickTime plug-in for audio features.
For more information about the administrator workstation technical requirements, refer to our Technical Requirements. | https://docs.8x8.com/8x8WebHelp/VCC/Configuration_Manager_UnifiedLogin/content/cfgoverview.htm | 2019-04-18T14:19:07 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.8x8.com |
Interfaces
Interfaces enable you to map logical tasks to executable operations.
Declaration
Node Types and Relationships Interface
node_types: some_type: interfaces: interface1: op1: ... op2: ... interface2: ... relationships: some_relationship: source_interfaces: interface1: ... target_interfaces: interface2: ...
Each interface declaration under the different
interfaces/
source_interfaces/
target_interfaces sections is a dictionary of operations.
Node Templates Interface Declaration
node_templates: some_node: interfaces: ... relationships: - type: ... target: ... source_interfaces: ... target_interfaces: ...
Operations
Operation Declaration in Node Types and Relationships Interfaces
node_types: some_type: interfaces: interface1: op1: implementation: ... inputs: ... executor: ... max_retries: ... retry_interval: ...
Operation Schema
Operation Simple Mapping
node_types: some_type: interfaces: interface1: op1: plugin_name.path.to.module.task
When mapping an operation to an implementation, if it is not necessary to pass inputs or override the executor, the full mapping structure can be avoided and the implementation can be written directly.
Operation Input Declaration
node_types: some_type: interfaces: interface1: op1: implementation: ... inputs: input1: description: ... type: ... default: ... executor: ...
Operation Input Schema
Operation Inputs in Node Templates Interfaces Declaration
node_types: some_type: interfaces: interface1: op1: implementation: plugin_name.path.to.module.task inputs: input1: description: some mandatory input input2: description: some optional input with default default: 1000 executor: ... node_templates: type: some_type some_node: interfaces: interface1: op1: inputs: input1: mandatory_input_value input3: some_additional_input
When an operation in a node template interface is inherited from a node type or a relationship interface:
- All inputs that were declared in the operation inputs schema must be provided.
- Additional inputs, which were not specified in the operation inputs schema, may also be passed.
Examples
In the following examples, an interface is declared that enables you to:
- Configure a master deployment server using a plugin.
- Deploy code on the hosts using a plugin.
- Verify that the deployment succeeded using a shell script.
- Start an application after the deployment is complete.
For the sake of simplicity, relationships are not referred to in these examples.
Configuring Interfaces in Node Types
Configuring the master server:
plugins: deployer: executor: central_deployment_agent node_types: nodejs_app: derived_from: cloudify.nodes.ApplicationModule properties: ... interfaces: my_deployment_interface: configure: implementation: deployer.config_in_master.configure node_templates: nodejs: type: nodejs_app
In this example, the following declarations have been made:
- Declared a
deployerplugin which, by default, executes its operations on Cloudify Manager.
- Declared a node type with a
my_deployment_interfaceinterface that has a single
configureoperation that is mapped to the
deployer.config_in_master.configuretask.
- Declared a
nodejsnode template of type
nodejs_app.
Overriding the Executor
In the above example an
executor for the
deployer plugin has been declared.
Cloudify enables you to declare an
executor for a single operation, overriding the previous declaration.
plugins: deployer: executor: central_deployment_agent node_types: nodejs_app: derived_from: cloudify.nodes.ApplicationModule properties: ... interfaces: my_deployment_interface: configure: implementation: deployer.config_in_master.configure deploy: implementation: deployer.deploy_framework.deploy executor: host_agent node_templates: vm: type: cloudify.openstack.nodes.Server nodejs: type: nodejs_app
In this example, a
deploy operation to our
my_deployment_interface interface has been added. Note that its
executor attribute is configured to
host_agent, which means that even though the
deployer plugin is configured to execute operations on the
central_deployment_agent, the
deploy operation is executed on hosts of the
nodejs_app rather than Cloudify Manager.
Declaring an Operation Implementation within the Node
You can specify a full operation definition within a node’s interface, under the node template itself.
plugins: deployer: executor: central_deployment_agent node_types: nodejs_app: derived_from: cloudify.nodes.ApplicationModule properties: ... interfaces: my_deployment_interface: ... node_templates: vm: type: cloudify.openstack.nodes.Server nodejs: type: nodejs_app interfaces: my_deployment_interface: ... start: scripts/start_app.sh
If, for example, the
my_deployment_interface is used on more than the
nodejs node, while on all other nodes, a
start operation is not mapped to anything, you will want to have a
start operation specifically for the
nodejs node, which will run the application after it is deployed.
A
start operation is declared and mapped to execute a script specifically on the
nodejs node.
In this way, you can define your interfaces either in
node_types or in
node_templates, depending on whether you want to reuse the declared interfaces in different nodes or declare them in specific nodes.
Operation Inputs
Operations can specify inputs to be passed to the implementation.
plugins: deployer: executor: central_deployment_agent node_types: nodejs_app: derived_from: cloudify.nodes.ApplicationModule properties: ... interfaces: my_deployment_interface: configure: ... deploy: implementation: deployer.deploy_framework.deploy executor: host_agent inputs: source: description: deployment source type: string default: git verify: implementation: scripts/deployment_verifier.py node_templates: vm: type: cloudify.openstack.nodes.Server nodejs_app: type: cloudify.nodes.WebServer interfaces: my_deployment_interface: ... start: implementation: scripts/start_app.sh inputs: app: my_web_app validate: true
In this example, an input has been added to the
deploy operation under the
my_deployment_interface interface in the
nodejs_app node type, and two inputs added to the
start operation in the
nodejs node’s interface.
Note that interface inputs are not the same type of objects as inputs that are defined in the
inputs section of the blueprint. Interface inputs are passed directly to a plugin’s operation (as **kwargs to the
deploy operation in the
deployer plugin) or, in the case of
start operations, to the Script Plugin.
Relationship Interfaces
For information on relationship interfaces see Relationships Specification. | https://docs.cloudify.co/4.5.5/developer/blueprints/spec-interfaces/ | 2019-04-18T14:50:45 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.cloudify.co |
At the moment DSX provides its clients with two types of orders: Market and Limit. In that article, we will check out what is the difference and how to place both types at DSX trading platform.
Market order
With that type, you will be placing an order to buy or sell currency immediately according to the current market price. All you need to do is to enter the Volume of the coin you wish to buy or sell in the highlighted field.
Notice that you should enter Volume in the quoted currency (the first one in the currency pair, ex. BTC for BTCUSD pair).
Once you enter a Volume, you can place an order to Buy or Sell a quoted currency. Notice that due to the nature of Market order we cannot guarantee that an order will be filled according to the price you saw before placing an order. Since all orders are filled strictly according to the queue, the price may change before the turn of your order comes
Limit order
That type allows you to specify the price of your deal. For that you choose Limit in the upper right corner of the Trading platform.
Enter the Price and the Volume in corresponding fields. Notice that you always enter a Price in base currency (the second one in a currency pair, ex. USD in BTCUSD pair) and the Volume in a quoted currency (the first one in a currency pair, ex. BTC in BTCUSD pair).
Once you filled in Price and Volume fields, you can choose whether you want to Buy or Sell the quoted currency.
After you click Buy or Sell your order is placed. Until it is filled it will remain active. If you have entered not a very popular price at the moment, your order can be active for a while, but nothing to worry about. Once the market price reaches the level you've entered the order should be filled. | https://docs.dsx.uk/dsx/trading-platform/order-types | 2019-04-18T14:20:12 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://downloads.intercomcdn.com/i/o/50231153/273e1b70b3d2d7d16b95a4ce/0ec4c0c3a5.png',
None], dtype=object)
array(['https://downloads.intercomcdn.com/i/o/50231669/e3e53f9a6505de3aa4093933/470a15ce72.png',
None], dtype=object) ] | docs.dsx.uk |
Metric Manager
Contents
Starting in release 8.5.0, the Metric Manager label in the Administration Module is a section heading, and is not a link to a page. The Report Metrics page replaces the Metric Manager of earlier releases.
The Metric Manager section of the Advisors Administration module contains two pages:
- Source Metrics
- Report Metrics
What are Source Metrics and Report Metrics?
A report metric is a metric used in the dashboard of one of the reporting applications. In Advisor release 8.5.0, this refers to a metric used in the dashboard of either Contact Center Advisor/Workforce Advisor or Frontline Advisor.
A source metric is the definition of the metric in the source system, such as Genesys Stat Server.
See Terminology below for detailed definitions.
Custom Metrics Support
Starting in release 8.5.0, you can create and update custom metrics for application, agent group, and agent objects for the Contact Center Advisor and Frontline Advisor.
Restrictions
Genesys does not support the creation of new custom metrics for the WA application.
Access to metrics must be configured by an administrator in Genesys Configuration Server. Data relating to or dependent on metrics to which a user does not have access permissions does not display for that user. For information about role-based access control (RBAC) privileges related to metric management actions, see CCAdv/WA Access Privileges and FA Access Privileges.
Terminology
The following terminology is used in the descriptions of the Source Metrics and Report Metrics pages of the Administration module.
- The Application object type means the base object types of queue, interaction queue, calling list, call type, or service, for CCAdv.
- A Raw Report Metric is a report metric that is created from a source metric. When creating a raw report metric, you must select a source metric. The source metrics available for selection are the Genesys source metrics that are created and maintained using the Source Metric Manager. Only the source metrics that correspond to the object type you selected are available when creating a raw report metric.
- A Calculated Report Metric is a report metric expressed as a formula involving one or more raw report metrics as operands. The format options specified for the calculated report metric override any format options specified for the individual raw report metric used to build the calculated report metric. A source metric cannot be directly associated with a calculated report metric.
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/PMA/8.5.2/CCAWAUser/MetricManager | 2019-04-18T14:16:24 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.genesys.com |
Adding the option to purchase one time or "upsell" type of products at checkout is an easy way to increase revenue. We have created an optional section found in PayWhirl's payment widgets that allow you to upsell products to your customers when they first checkout.
Note: Upsells are one-time purchases only and don't have the option to recur.
To add an Upsell go to your Dashboard and click 'My Upsells'
You are now on the 'My Upsells' main page. Click 'Create a New Upsell'
Fill out the name, price, and currency to create your Upsell. Optionally you can include an image.
After you hit save you will directed back to the main Upsell page where you will need to 'Enable' your Upsell so that it will be available on your main widget. | https://docs.paywhirl.com/PayWhirl/classic-system-support-v1/getting-started-and-setup/v1-my-upsells | 2019-04-18T14:50:19 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://uploads.intercomcdn.com/i/o/19830099/d3376cd94e558385090c141c/note.png',
None], dtype=object) ] | docs.paywhirl.com |
Volume Discount
This tutorial shows how to use the volume-based discount Smart Clause® to use a smart legal contract to calculate a percentage discount, based on the volume of usage of a service.
No programming is required and the tutorial should only take you a few minutes.
Create a new contract
Create a new contract (either starting with a blank contract or by upload a Word or PDF document.
Add the volume discount smart clause
Press the Add Clause button and select the volume-discount advanced Smart Clause.
Copy the Trigger URL and Bearer token from the Triggers for the smart clause to the clipboard.
Open volume discount demo application:
Paste the Trigger URL and Bearer token into the demo user interface for the smart clause, adjust the volume using the slider and then press the Add button to add a volume reading. The smart clause will be triggered and will compute the volume discount based on the reading. | https://docs.clause.io/en/articles/46-volume-discount | 2019-04-18T14:47:23 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.clause.io |
Configure Form Email Fields
The purpose of this section is to let Form Tools know which fields are relevant so that theycan be used within the email templates. For example, if your form was a registration form and contained fields for First Name, Last Name and Email address, you might want to create an auto-email notification that emails the attendee to let them know they've successfully registered. Or, perhaps your form contains multiple email fields and want to send emails to each of them after the form is submitted. This page lets you do just that: it lets you single out the email fields for use in the email templates. Once they are defined here, you will see the option to use that row as a recipient or as the "from" or "reply-to" email address. The Name / First Name and Last Name fields are optional - only enter them if there's a corresponding name field or fields in your form.
Depending on your form contents, this feature may or may not be relevant. Often, forms don't contain this information so you can simply ignore this page. | https://docs.formtools.org/userdoc/emails/configure_form_email_fields/ | 2019-04-18T14:21:41 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.formtools.org |
Contents
The HTTP Interface
This page pertains only to Stat Server that operates in cluster mode. Specifying the HTTP protocol for Stat Server that operates in regular mode is not currently supported.
In order for the SIP Cluster Solution to receive information about the state of a regular DN or to receive information about performance data about each Stat Server node within a Stat Server cluster, you can configure one or more Stat Server applications within a Stat Server cluster to use an HTTP interface.
Using the HTTP Interface for Feature Server
One manner in which Feature Server (a SIP Cluster server) sends call forwarding and DND requests from a particular DN to T-Controller is triggered through the activity that is transmitted through an HTTP interface. (Other manners are described in the SIP Cluster Solution Guide.) Stat Server operating in cluster mode can provide DN state information via this interface when it is configured to do so. When requested through the HTTP interface, Feature Server sends the TCallSetForward, TCallCancelForward, TSetDNDOn, or TSetDNDOff request, as appropriate, to the appropriate T-Controller that is in charge of the DN within the SIP Cluster.
Configuring an HTTP Listening Port
Within Genesys Administrator, you configure an HTTP listening port at the Server Info tab of an application’s properties. Refer to the Genesys Administrator Help for more information. This listening port is specific to a particular Stat Server—you do not configure an HTTP listening port at the Stat Server solution level for all components to share. However, you can designate as few as one HTTP listening port within one Stat Server node to provide DN state information for all nodes within a Stat Server cluster.
The default protocol that Stat Server uses when you specify no connection protocol is an internal proprietary simple protocol.
Internal Performance Counters
Through the HTTP interface, Stat Server also supplies performance measurements to T-Controller for the events that Stat Server receives and sends. To provide server performance information, you must configure an HTTP listening port for every Stat Server node that must supply this information. Such configuration is required because the performance measurements that Stat Server provides will differ based on the object that initiated the request. The table below lists the performance measurements that Stat Server provides.
Stat Server Performance Counters
You can also get response within an html browser by issuing the following string in the URL:
http://<StatServer HTTP listener host name>:<listener port>/genesys/statserver/<path to specific resource>
Feedback
Comment on this article: | https://docs.genesys.com/Documentation/RTME/latest/Dep/HTTPInterface | 2019-04-18T14:55:52 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.genesys.com |
Screenshot commands
If you haven't already noticed, screenshots are automatically taken for every command. The screenshots taken are usually of what's visible in the viewport - meaning what you as a user would see on your screen given the specified resolution for the test run.
Theses commands are for you to take full page screenshots.
List of commands
TEST.takeFullScreenshot
Currently only available for Safari and IE tests.
Take a full page screenshot of the current page
Note that taking full page screenshots can slow down your tests, use it when you need to.
Additionally, if there's infinite scrolling on the page, only what's currently visible will be taken. You'll need to combine this with scrolling commands to take longer screenshots on these kinds of pages.
Usage
TEST.takeFullScreenshot() | https://docs.uilicious.com/scripting/screenshot.html | 2019-04-18T15:28:26 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.uilicious.com |
Contributor’s Guide¶
Contributions are always welcome and greatly appreciated!
Code contributions¶
We love pull requests from everyone! Here’s a quick guide to improve the code:
- Fork the repository and clone the fork.
- Create a virtual environment using your tool of choice (e.g.
virtualenv,
conda, etc).
- Install development dependencies:
pip install -r requirements-dev.txt
- Make sure all tests pass:
invoke test
- Start making your changes to the master branch (or branch off of it).
- Make sure all tests still pass:
invoke test
- Add yourself to
AUTHORS.rst.
- Commit your changes and push your branch to GitHub.
- Create a pull request through the GitHub website.
During development, use pyinvoke tasks on the command line to ease recurring operations:
invoke clean: Clean all generated artifacts.
invoke check: Run various code and documentation style checks.
invoke docs: Generate documentation.
invoke test: Run all tests and checks in one swift command.
invoke: Show available tasks.
Documentation improvements¶
We could always use more documentation, whether as part of the introduction/examples/usage documentation or API documentation in docstrings.
Documentation is written in reStructuredText and use Sphinx to generate the HTML output.
Once you made the documentation changes locally, run the documentation generation:
invoke docs
Bug reports¶
When reporting a bug please include:
- Operating system name and version.
- ROS version.
- Any details about your local setup that might be helpful in troubleshooting.
- Detailed steps to reproduce the bug. | https://roslibpy.readthedocs.io/en/latest/contributing.html | 2019-04-18T14:37:57 | CC-MAIN-2019-18 | 1555578517682.16 | [] | roslibpy.readthedocs.io |
Hosting a live event can be scary the first time, but we've got your back. Here are some best practices on how to host a successful live video event, from planning the event to following up afterwards.
Before the event
Craft a descriptive registration page. Every Crowdcast event comes with its own built-in registration page, so take advantage of this by uploading a great cover image, adding a detailed description of what people can expect from your event, and formatting your copy in an easy-to-read way. This will increase your registration rate (or the rate at which people who view the page sign up for the event).
Pick a catchy event title. You know that old rule about how you should spend 80% of your time on a blog post crafting the headline and 20% writing the post? While you don't have take this literally, the same principle holds for live events. Your title matters. It's what people will first see so make sure you make it actionable, catchy and descriptive.
Add a call-to-action. This is the button that shows up right underneath your video on your event page, and it's a helpful way to lead people to your website or a specific offering. Live events are great at driving leads but you can't convert those leads unless you tell them where to go. A call-to-action, or "CTA", lets you add any link as a button on your event page if you go to your event “Edit” options under “Advanced.”
Got guests joining you on screen? Whether you're interviewing someone or pulling audience members up on screen, make sure to send them a checklist beforehand so they know what they need to do to broadcast live. See the checklist here so you can make sure everything goes smoothly. (We also recommend doing a test run before the day of your event!)
Try attending a few events. Want to step into your audience's shoes so you can deliver a rock-solid presentation? The best way to do that is to attend a few live events yourself. Check out a few Crowdcast events so you can learn best practices. It'll open your eyes to what it the experience is like for your participants. Check out our homepage for cool upcoming events to join.
Integrate with your email tool. One of the most powerful things about live video events is how quickly they can build your email list. To automate this list-building process, be sure to connect your Crowdcast account to your email newsletter tool, whether you use Aweber, Mailchimp or Infusionsoft. Check out our Zapier integration option in your settings so you can add every event registrant to your list as a subscriber automatically.
Prep your attendees on the (short) technical requirements. First, they should be using the right browser (or the right app, if they're on mobile). They should also make sure they have a strong WiFi connection. Share this handy attendee guide with your audience ahead of time so they're all ready to go.
During the event
Start with “housekeeping” items. Don't worry, this isn't about chores. 😉 "Housekeeping" is all about letting people know what they can expect. Things like where they should submit their questions (hint: in the Q&A), when you'll get to the Q&A, if there's a freebie offered at the end and whether the event will be recorded are all good things to mention up front so there's no confusion.
Share your agenda. It's also helpful to tell people what content you'll be covering at the beginning of your event. It's like the old public speaking rule: tell them what you'll be telling them, tell them, and then tell them what you've just told them. It makes it easier to follow and adds structure to your presentation.
Show that you're listening. The magic of a live event is all in the real-time connection, so don't forget to be responsive! This means keeping an eye on the chat, calling out people in the chat where appropriate, and just letting people know you're listening by asking people where they're tuning in from. This small gesture goes a long way towards letting people know it's a two-way conversation and it ultimately makes your attendees feel a greater connection with you in a way that they won't get on any other marketing or social media channel.
Have a copilot. Giving a presentation while trying to monitor the chat and Q&A aren't easy, especially if you're also sharing your screen. That's why it can be so helpful to assign someone as your “copilot” ahead of time. Give them the responsibility of monitoring the chat, answering any questions about how to join or view the event, adding the call-to-action button at the right time, and any other logistics. This way you can be fully present with your audience.
After the event
Follow up with email. Generating interest in your topic is great, but it's all about what you do with that captured interest that helps grow your business. Are you leading them to a product or service? Are you adding them to your email list? Be sure to follow up with attendees after the live event is over with an email leading them to a landing page, your website, or an offering where appropriate so each event can help move the needle for your brand or business.
Share the replay. There will always be people who register for your event but aren't able to make it. Make it easier for them by either letting them know it will be recorded or by giving them the URL to the replay (which is the same as your event URL) after the live broadcast. You can also download the recording (by clicking “More” on your event page and selecting “Download video”) and then upload the video to YouTube, Facebook and more.
Segment your follow-up emails. It's powerful to be able to export your Crowdcast attendee emails, but did you know you can also see which of those attendees showed up live and who didn't? Take advantage of this info by segmenting your follow-up emails. In other words, try sending one email follow-up to those who showed up live and another to those who registered missed the live event. The more segmented your email marketing, the better your results will be.
Now you're ready to go live. Got questions or suggestions? Email us at [email protected]. Happy Crowdcasting! 🎥 | http://docs.crowdcast.io/faq/how-to-rock-your-first-crowdcast-event | 2019-04-18T14:19:45 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['https://uploads.intercomcdn.com/i/o/22426948/f51af8725dfd5f437e15273c/divider.png',
None], dtype=object) ] | docs.crowdcast.io |
Introduction¶
CircuitPython driver for the VEML6070 UV Index Sensor Breakout
Dependencies¶
This driver depends on:
Please ensure all dependencies are available on the CircuitPython filesystem. This is easily achieved by downloading the Adafruit library and driver bundle.
Usage Example¶
import time import board import busio from adafruit_veml6070 import VEML6070 with busio.I2C(board.SCL, board.SDA) as i2c: uv = VEML6070(i2c) # Alternative constructors with parameters #uv = VEML6070(i2c, 'VEML6070_1_T') #uv = VEML6070(i2c, 'VEML6070_HALF_T', True) # take 10 readings for j in range(10): uv_raw = uv.read risk_level = uv.get_index(uv_raw) print('Reading: {0} | Risk Level: {1}'.format(uv_raw, risk_level)) time.sleep | https://circuitpython.readthedocs.io/projects/veml6070/en/latest/ | 2019-04-18T14:42:30 | CC-MAIN-2019-18 | 1555578517682.16 | [] | circuitpython.readthedocs.io |
- Log in and log out - Log in your terminal and begin processing payments. Log out of your terminal when changing operator, or at the end of a shift.
- Reconcile a balance mismatch - Get the totals of a reconciliation period and use them to reconcile balance mismatches between the terminal and cash register.
- Retrieve totals from the terminal - Return payment totals from a terminal and filter for totals by group or operator.
- Display virtual receipts - Use display requests to display virtual receipts on large-screen terminals.
- Receive terminal notifications - Learn about handling messages and events sent from the terminal to the sale system.
- Request diagnosis - Request a diagnosis for a terminal.
- Retrieve connected terminals - Return payment totals from a terminal and filter for totals by group or operator. | https://docs.adyen.com/developers/point-of-sale/build-your-integration/device-features | 2019-04-18T14:33:40 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.adyen.com |
Widget HTTP API (Developer)
Introduction
The Appcues API provides an HTTP endpoint for accessing all the content that a user is qualified to see.
Connecting
The root URL for the Appcues API is.
The URL to fetch the list of content is formed using your Appcues account ID (visible from your Appcues account page) and the end user's ID (the first parameter to your site code's
Appcues.identify()call), as follows:{account_id}/users/{user_id}/taco
Request Format
The user activity endpoint accepts only GET requests.
The request should contain the following query parameter:
url The URL of the current page the user is on. This is used for page-level filtering.. | https://docs.appcues.com/article/255-widget-api | 2019-04-18T15:03:20 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.appcues.com |
Most Popular Articles
- Adding media/video to your WooCommerce product gallery
- Import additional variation images
- The product image is cropped or blurry
- Ensure WooThumbs is working correctly
- Two sets of images upon activation
- Available shortcodes
- Settings overview
- Multiple images per variation
- WooThumbs not working in the Avada theme
- Change the look of inactive thumbnails | https://docs.iconicwp.com/collection/110-woothumbs | 2019-04-18T15:06:32 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.iconicwp.com |
Rich text experience for appointment activities
Applies to Dynamics 365 for Customer Engagement apps version 9.x
When you enable the rich text experience, server-side synchronization and appointment activities support rich text. With the rich text editor, appointment descriptions can contain rich text.
With rich text enabled you get the following benefits:
- Create and synchronize appointments with rich text content in the description for an improved experience in both web and the Unified Interface.
- Include content from an HTML web page right into the description field or create your own custom markup using the appointment editor. Appointments tracked from Outlook will also render rich text content in Dynamics 365 for Customer Engagement apps.
- Server-side synchronization synchronizes the rich-text HMTL content of appointment descriptions into Dynamics 365 for Customer Engagement apps.
Important
To enable rich text, your Dynamics 365 for Customer Engagement apps version must be Dynamics 365 for Customer Engagement apps version 9.0, or a later version.
After enabling, if you choose to disable the setting, the appointment editor description field will reset to the plain-text field. Previously synchronized appointments’ description will still contain rich-text HTML markup.
Although the rich text editor can be used with appointment activities, it can’t be used with recurring appointments. When an appointment that contains rich text is converted to a recurring appointment, the description field for the activity is converted to a plain-text field containing rich text content.
Enable the rich text editor for appointments
To enable the rich text editor on appointments, you need to configure the AppointmentRichEditorExperience organization setting for your Dynamics 365 for Customer Engagement apps instance by running the PowerShell sample below.
The PowerShell cmdlets require the Dynamics 365 for Customer Engagement apps Microsoft.Xrm.Data.PowerShell module. The sample below includes the cmdlet to install the module.
#Install the module Install-Module Microsoft.Xrm.Data.PowerShell -Scope CurrentUser # Connect to the organization Connect-CrmOnPremDiscovery -InteractiveMode #(or Connect-CrmOnlineDiscovery -InteractiveMode) # Retrieve the organization entity $entities = $organizationEntity = Get-CrmRecords -conn $conn -EntityLogicalName organization -Fields appointmentricheditorexperience -TopCount 1 $organizationEntity = $entities.CrmRecords[0] Write-Host "Appointment RTE existing value: " $organizationEntity.appointmentricheditorexperience # Set the appointmentricheditorexperience field $organizationEntity.appointmentricheditorexperience = $true #(or $false) # Update the record Set-CrmRecord -conn $conn -CrmRecord $organizationEntity $entities = $organizationEntity = Get-CrmRecords -conn $conn -EntityLogicalName organization -Fields appointmentricheditorexperience -TopCount 1 $organizationEntity = $entities.CrmRecords[0] Write-Host "Appointment RTE updated value: " $organizationEntity.appointmentricheditorexperience
See also
Create or edit an appointment
Feedback
Send feedback about: | https://docs.microsoft.com/en-us/dynamics365/customer-engagement/admin/enable-rich-text-experience | 2019-04-18T14:29:32 | CC-MAIN-2019-18 | 1555578517682.16 | [array(['media/appointment-rich-text.png', 'Rich text appointment editor'],
dtype=object) ] | docs.microsoft.com |
If the application in the earlier CCE version is created using APIs or kubectl, use the same method to create applications of the same specifications on the cluster created in the latest CCE version. For details, see Cloud Container Engine API Reference 2.0 or Kubectl Usage Guide.
Stateless and stateful applications are displayed in the latest CCE version.
The version of Kubernetes clusters used in CCE 2.0 is 1.9 or 1.11.3. Therefore, you are advised to upgrade the version of kubectl to 1.9 or 1.11.3. | https://docs.otc.t-systems.com/en-us/usermanual2/cce/cce_01_9994.html | 2019-04-18T15:42:55 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.otc.t-systems.com |
The XPath builder helps you quickly build an XPath by selecting elements from a hierarchical representation of the message. The XPath builder opens when you add an XPath, or when you click an existing XPath.
The XPath builder can be used for both XML and JSON messages. The XPath language was originally designed for selecting nodes from XML, but since Parasoft Virtualize/SOAtest converts incoming requests to XML during processing, CTP (and Parasoft Virtualize/SOAtest) can apply XPaths to JSON messages as well as XML messages.
You can click any of the available elements, values, and attributes to build an XPath. Note that the corresponding XPath is automatically added to the XPath field (below the builder).
Tips
- To change between tree view and plain text, click Tree and Text.
- To expand and collapse the tree, use the Expand all fields / Collapse all fields toolbar buttons.
- Toclose to XPath builder and save your changes, click any other area of the UI. Toclose to XPath builder without saving your changes, press the Esc key.
- To search for text, use the search field in the toolbar. | https://docs.parasoft.com/display/CTP302/Specifying+XPaths | 2019-04-18T14:43:09 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.parasoft.com |
The Builder Mediator can be used to build the actual SOAP message from a message coming in to ESB through the Message Relay. One usage is to use this before trying to log the actual message in case of an error. Also with the Builder Mediator ESB can be configured to build some of the messages while passing the others along.
In order to use the Builder mediator,
BinaryRealyBuilder should be specified as the message builder in the
<ESB_HOME>/repository which message builder should be used to build the binary stream using the Builder mediator.
By default, Builder Mediator uses the
axis2 default Message builders for the content types. User can override those by using the optional
messageBuilder configuration. For more information, see Working with Message Builders and Formatters.
Like in
axis2.xml user has to specify the content type and the implementation class of the
messageBuilder. Also user can specify the message
formatter for this content type. This is used by the
ExpandingMessageFormatter to format the message before sending to the destination.
Syntax
<builder> <messageBuilder contentType="" class="" [formatterClass=""]/> </builder> | https://docs.wso2.com/display/ESB480/Builder+Mediator | 2019-04-18T14:56:17 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.wso2.com |
Message-ID: <822466239.235043.1555600888631.JavaMail.confluence@docs-node.wso2.com> Subject: Exported From Confluence MIME-Version: 1.0 Content-Type: multipart/related; boundary="----=_Part_235042_1116072834.1555600888631" ------=_Part_235042_1116072834.1555600888631 Content-Type: text/html; charset=UTF-8 Content-Transfer-Encoding: quoted-printable Content-Location:
Before you begin, please see our compatibility matrix to find out if this = version of the product is fully tested on Linux or OS X.
Follow the instructions below to install API Manager on Linux or Mac OS = X..6.0_25 in your system, add the following two=
lines at the bottom of the file, replacing
/usr/java/jdk1.6.0_25 with the actual directory where the JDK is installed.
On Linu= x: export JAVA_HOME=3D/usr/java/jdk1.6.0_25 export PATH=3D${JAVA_HOME}/bin:${PATH} On OS X: export JAVA_HOME=3D/System/Library/Java/JavaVirtualMachines/1
If you need to set additional system properties when the server starts, = you can take the following approaches:
When using SUSE Linux, it ignores =
/etc/resolv.conf and only looks at the
/etc/hosts=
file. This means that the server will throw an exception on startup i=
f you have not specified anything besides localhost. To avoid this error, a=
dd the following line above
127.0.0.1 localhost in the
/=
etc/hosts file:
<ip_address>
<machine_name> localhost <=
/code>
You are now ready to | https://docs.wso2.com/exportword?pageId=45941903 | 2019-04-18T15:21:28 | CC-MAIN-2019-18 | 1555578517682.16 | [] | docs.wso2.com |
Inspired by sitebricks, resteasy and rails routing, tapestry-routing allows you to provide your own custom mapping between Tapestry 5 pages and URLs.
First (as always), add the tapestry-routing dependency to your pom.xml
Then annotate your pages with the @Route annotations (@At annotations are still supported and work the same way)
Let's say you have a page: pages.projects.Members which have 2 parameters in its activation context: (Long projectId, Long memberId) and you want the URL for that page to look like /projects/1/members/1
Just add the @Route annotation to you page, like this:
That's it!
The RouterDispatcher will take care of recognizing incoming requests and dispatching the proper render request and the RouterLinkTransformer will do the rest of the work, it will transform every Link for a page render request formatting it according to your route rule.
Here is an example of how tapestry-model is using tapestry-routing:
The only caveat with the current implementation is that you can't use Index pages. I mean pages named "*Index"
The way Tapestry handles *Index pages prevents the module from working properly.
My workaround (for now) is:
If you like to have all your routes configuration centralized, you don't need to use the @Route annotation if you don't want to. You can contribute the routes to the RouteProvider.
If you want to prevent tapestry-routing from scanning all the pages packages looking for the @Route annotation set the DISABLE_AUTODISCOVERY symbol to "true". If you do this then you can either contribute your routes directly to the RouteProvider, or tell the AnnotatedPagesManager explicitly which pages do you want to be scanned. | http://docs.codehaus.org/plugins/viewsource/viewpagesrc.action?pageId=194314380 | 2014-04-16T19:22:10 | CC-MAIN-2014-15 | 1397609524644.38 | [] | docs.codehaus.org |
This tutorial is for all Joomla! versions newer than.
In order to use this feature, an update server must be defined in your extension's manifest. This definition can be used in all Joomla! 1.6 and newer. | http://docs.joomla.org/index.php?title=Deploying_an_Update_Server&oldid=64856 | 2014-04-16T21:05:50 | CC-MAIN-2014-15 | 1397609524644.38 | [] | docs.joomla.org |
:
cdsw-build.sh- A custom build script used for models and experiments. Pip installs our dependencies, primarily the
scikit-learnlibrary.
fit.py- A model training example to be run as an experiment. Generates the
model.pklfile that contains the fitted parameters of our model.
predict.py- A sample function to be deployed as a model. Uses
model.pklproduced by
fit.pyto make predictions about petal width. | https://docs.cloudera.com/machine-learning/1.0/models/topics/ml-models-examples.html | 2021-01-16T05:15:57 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../images/ml-iris-create.png', None], dtype=object)] | docs.cloudera.com |
Configuration Properties¶
To use this connector, specify the name of the connector class in the
connector.class configuration property.
connector.class=io.confluent.connect.syslog.SyslogSourceConnector
Connector-specific configuration properties are described below.
Listener¶
syslog.port
The port to listen on.
- Type: int
- Valid Values: ValidPort{start=1, end=65535}
- Importance: high
syslog.listener
The type of listener to use.
- Type: string
- Default: UDP
- Valid Values: Matches:
TCPSSL,
TCP,
UDP
- Importance: high
syslog.ssl.provider
The SSLContext Provider to use. JDK = JDK’s default implementation. OPENSSL = OpenSSL-based implementation. OPENSSL_REFCNT = OpenSSL-based implementation which does not have finalizers and instead implements ReferenceCounted.
- Type: string
- Default: JDK
- Valid Values: Matches:
JDK,
OPENSSL,
OPENSSL_REFCNT
- Importance: high
syslog.listen.address
The IP Address to listen on.
- Type: string
- Default: 0.0.0.0
- Importance: low
topic
Name of the topic to put all records.
- Type: string
- Default: syslog
- Importance: high
topic.prefix
Deprecated: Use topic
- Type: string
- Default: syslog
- Importance: high
syslog.backoff.millis
The number of milliseconds to wait if there are no messages in the queue for Apache Kafka®.
- Type: int
- Default: 100
- Valid Values: [10,…]
- Importance: low
syslog.queue.batch.size
The number of records to try and retrieve from the buffer.
- Type: int
- Default: 1000
- Valid Values: [0,…,1000000]
- Importance: low
syslog.queue.max.size
The maximum number of records to buffer in the connector. This queue is used as an intermediate before data is written to Kafka.
- Type: int
- Default: 50000
- Valid Values: [0,…,1000000]
- Importance: low
syslog.write.timeout.millis
The number of milliseconds to wait if there are no messages in the queue for Kafka.
- Type: int
- Default: 60000
- Valid Values: [100,…]
- Importance: low
Netty¶
netty.trace.addresses
IP addresses to enable tracing for. If this option is set tracing will only be enabled for the addresses in this list. If empty tracing will be enabled for all addresses.
- Type: list
- Default: “”
- Importance: low
netty.trace.enabled
Flag to enable trace logging at the Netty level. You will need to set io.confluent.connect.syslog.protocol to `TRACE. Keep in mind this will dump anything that comes into the connector to the log.
- Type: boolean
- Default: false
- Importance: low
netty.worker.threads
The number of worker threads for the worker group that processes incoming messages.
- Type: int
- Default: 16
- Valid Values: [1,…,500]
- Importance: low
SSL¶
syslog.ssl.cert.chain.path
Path to X.509 cert chain file in PEM format.
- Type: string
- Default: “”
- Importance: high
syslog.ssl.key.password
The password of the key file(syslog.ssl.key.path), or blank if it’s not password-protected
- Type: password
- Default: [hidden]
- Importance: high
syslog.ssl.key.path
Path to a PKCS#8 private key file in PEM format.
- Type: string
- Default: “”
- Importance: high
syslog.ssl.self.signed.certificate.enable
Flag to determine if a self signed certificate should be generated and used.
- Type: boolean
- Default: false
- Importance: high
TCP¶
syslog.tcp.idle.timeout.secs
The amount of time in seconds to wait before disconnecting an idle session.
- Type: int
- Default: 600
- Valid Values: [0,…]
- Importance: low
Socket¶
netty.event.group.type
The type of event group to use with the Netty listener. EPOLL is only supported on Linux.
- Type: string
- Default: EPOLL
- Valid Values: Matches:
NIO,
EPOLL
- Importance: low
socket.backlog.size
- Type: int
- Default: 1000
- Valid Values: [100,…]
- Importance: low
socket.rcvbuf.bytes
The size of the socket receive buffer in bytes.
- Type: int
- Default: 256000
- Valid Values: [256,…]
- Importance: low
Message Parsing¶
cef.timestamp.mode
The strategy to use when parsing timestamps for CEF formatted messages. The default mode,
header, looks only in the header.
extension_fieldlooks in a specified extension field key, which is specified in a subconfig.
- Type: string
- Default: header
- Valid Values: one of [extension_field, header]
- Importance: low
- Dependents:
cef.timestamp.mode.extension_field.key
cef.timestamp.mode.extension_field.key
The key used to extract timestamps from CEF extension fields.
-. | https://docs.confluent.io/5.3.1/connect/kafka-connect-syslog/syslog_source_connector_config.html | 2021-01-16T06:22:51 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.confluent.io |
How to Change Commit Message In Git
- Changing the Most Recent Commit Message
- Changing Multiple Commit Messages
- Force Pushing Local Changes to Remote
- The git add and git commit Commands
- The git push Command
Many programmers underestimate the role of the commit message, while it is very important for managing the work. It helps other developers working on your project to understand the changes that you have made. So it must be as concrete, well-structured and clear as possible.
In this snippet, we will show you how to change your most recent commit message, as well as how to change any number of commit messages in the history.
Read on to see the options.
Changing the Most Recent Commit Message¶
You can use --amend flag with the git commit command to commit again for changing the latest commit:
git commit --amend -m "New commit message"
Running this will overwrite not only your recent commit message but, also, the hash of the commit. Note, that it won’t change the date of the commit.
It will also allow you to add other changes that you forget to make using the git add command:
git add more/changed/w3docs.txt git commit --amend -m "message"
The -m option allows writing the new message without opening the Editor.
Check out Force Pushing Local Changes to Remote for more details on how to force push your changes.
Changing Multiple Commit Messages¶
In this section, we will show you the steps to follow if you want to change multiple commit messages in the history.
Let’s assume that we want to take the 10 latest commit messages, starting from the HEAD.
Run Git Rebase in Interactive Mode¶
Firstly, you should run the git rebase in the interactive mode:
git rebase -i HEAD~10
Type "Reword"¶
After the first step, the editor window will show up the 10 most recent commits. It will offer you to input the command for each commit. All you need to do is typing "reword" at the beginning of each commit you want to change and save the file. After saving, a window will open for each selected commit for changing the commit message.
Enter a New Commit Message¶
After the second step, an editor will open for each commit. Type a new commit message and save the file.
Check out Force Pushing Local Changes to Remote for more details on how to force push your changes.
Force Pushing Local Changes to Remote¶
It is not recommended to change a commit that is already pushed because it may cause problems for people who worked on that repository.
If you change the message of the pushed commit, you should force push it using the git push command with --force flag (suppose, the name of remote is origin, which is by default):
git commit --amend -m "New commit message." git push --force origin HEAD
Here is an alternative and safer way to amend the last commit:
git push --force-with-lease origin HEAD
The git add and git commit Commands¶
The git add command is used for adding changes in the working directory to the staging area. It instructs Git to add updates to a certain file in the next commit. But for recording changes the git commit command should also be run. The git add and git commitcommands are the basis of Git workflow and used for recording project versions into the history of the repository. In combination with these two commands, the git status command is also needed to check the state of the working directory and the staging area.
The git push Command¶
The git push command is used to upload the content of the local repository to the remote repository. After making changes in the local repository, you can push to share the modification with other members of the team.
Git refuses your push request if the history of the central repository does not match the local one. In this scenario, you should pull the remote branch and merge it into the local repository then push again. The --force flag matches the remote repository’s branch and the local one cleaning the upstream changes from the very last pull. Use force push when the shared commits are not right and are fixed with git commit --amend or an interactive rebase. The interactive rebase is also a safe way to clean up the commits before sharing. The git commit --amend option updates the previous commit. | https://www.w3docs.com/snippets/git/how-to-change-commit-message.html | 2021-01-16T05:15:02 | CC-MAIN-2021-04 | 1610703500028.5 | [] | www.w3docs.com |
GET /{environment}/searches/lists Description Query cached items in a list computed for in an environment-aware setup. Parameters Type Name Description Schema Path environment required The environment for which a request is made. string Query limit required The number of items to fetch. integer Query list_key required String identifier of the previously computed list to request recommended items from. string Query offset required The rank of the first item to be requested. Starts at counting at 1. integer Responses HTTP Code Description Schema 200 OK SearchResults 404 list_key not found, it either expired or never existed No Content Produces application/json | https://docs.froomle.com/froomle-doc-main/reference/api_reference/search/operations/lists_with_env.html | 2021-01-16T04:55:15 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.froomle.com |
Microsoft Certifications
Microsoft Certifications give a professional advantage by providing globally recognized and industry-endorsed evidence of mastering skills in a digital and cloud businesses.
Benefits of certifications
Upon earning a certification, 67% of tech professionals say they had greater self-confidence in their abilities to perform in their jobs, 41% reported increased job satisfaction, and 35% saw a salary or wage increase. (2018 Pearson VUE Value of IT Certification)
91%
of certified IT professionals say certification gives them more professional credibility.
52%
of certified IT professionals say their expertise is more sought after within their organization.
93%
of decision makers agree certified employees provide added value. | https://docs.microsoft.com/en-us/learn/certifications/?q=%E5%A4%96%E6%8E%A8%E6%80%8E%E4%B9%88%E6%8E%A5%E5%8D%95q495455411%E5%8F%B6...crgnrp..w9j | 2021-01-16T07:23:45 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['/en-us/learn/certifications/images/image-working-lady-laptop.jpg',
'Woman looking at a Microsoft Surface'], dtype=object)
array(['/en-us/learn/certifications/images/image-working-group.jpg',
'Man presenting to four people seated around a table'],
dtype=object) ] | docs.microsoft.com |
About this feature
The "message wall" is an efficient way for you to gather your audience's questions, comments, and thoughts during a presentation. It also has the added benefit of giving every member of the audience an equal chance at taking part in the debate.
How it works: Teacher view
Allowing questions, likes, and images
Once you turn on the message wall, participants will be able to send you all their questions. To maximise its efficiency, turn on the "likes on answers", which will allow the respondents to prioritise each other's questions as a group.
You can also allow participants to add images to their answers.
In the example below, you can see a student (smartphone, right) posting a question on the message wall (left) before liking a question which had already been posted by another participant.
During a presentation, you can display the message wall at any time by clicking on the "Messages" button in the toolbar at the bottom of the screen.
Displaying the messages
Once you are ready to start answering some of these questions, you can choose how to display them.
- Click on the little heart in the toolbar at the bottom of the screen to display the messages in descending order of popularity.
- Click on the button on the left side of the screen to display these messages as a grid.
- Of course, you can combine the two previous options and obtain a "Grid"-view sorted by likes.
- If you have turned on the categories in the moderator interface, you'll see coloured bulbs on the left side of the screen. You can click on any of these bulbs to display only the messages which have been stored in that category.
You can also use the toolbar to easily switch between the message wall and real-time voting.
To see how to use the message wall: watch this video.
How it works: Student view
Students who participate online can access the message wall by clicking on the bubble in the bottom right corner of their screen. This bubble will only appear if you activate the message wall.
Once there, they can:
- Send images (by either selecting one on their device or taking a new picture), provided you allow them to.
- Send questions and comments about the lecture's content.
- Like the messages their peers have posted on the wall, provided you allow them to. Students can only see their classmates' messages when the likes are turned on. Otherwise, all they can see are their own messages.
Students can also send questions to the message wall by SMS text. However, they can't send any images or like their classmates' messages.
SMS text messages which are sent while you are displaying the Message Wall or while there is no vote in progress will be displayed on the Message Wall.
The Wooclap Team | https://docs.wooclap.com/en/articles/1501778-can-the-audience-send-questions-messages-or-pictures | 2021-01-16T06:06:01 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.wooclap.com |
Postman
API Testing with postman
Postman
To test API endpoints the tool postman enables easy configuration and setup of various HTTP requests, read more about postman here.
Authentication
Several of the exposed APIs require various cookies, for instance authentication and Gitea information. To be able to make requests against these APIs, the cookies have to be included in the HTTP request and configured in Postman. The cookies can be found in the following way:
- In your browser, log in to dev.altinn.studio, altinn.studio, or altinn3.no depending on which environment you are targeting
- Notice that the cookies AltinnStudioDesigner, AltinnStudioRuntime and i_like_gitea (among others) have been saved for the domain name you have logged in to.
- For Chrome cookies can be found under settings -> advanced -> cookies -> see all.
The two cookies AltinnStudioDesigner and AltinnStudioRuntime work as authentication against the Designer and APP API respectively, so if you are targeting a Designer API you should include the AltinnStudioDesigner cookie along with the i_like_gitea cookie, and if you are targeting a runtime API the AltinnStudioRuntime cookie should be included. Cookies are easily added to the Postman requests under the slightly hidden cookies setting, see:
Set up postman tests
- Download and install postman native app.
- Import the files from src/test/Postman/Collection to the collections area in Postman.
- Import the environment .json file from src/test/Postman to the environments area in Postman.
How to write postman tests
- Find the area/collection where the new test has to be added.
- Add a new request of type GET/POST/PUT/DELETE under the right folder.
- Provide the endpoint and input for the request.
- Make sure the variable values are accessed from the environments file.
- Write the tests as JavaScript code in the 'Tests' section of a request. More about test scripts
- Tests should have one test to verify valid response code and another test to validate the content of the response.
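For example, a 'Tests' block following these guidelines might look like the sketch below (the instanceOwner field is only an illustrative example of response content, not taken from this page):

pm.test("Response status code is 200", function () {
    pm.response.to.have.status(200);
});

pm.test("Response body contains the expected content", function () {
    var json = pm.response.json();
    pm.expect(json.instanceOwner).to.be.an("object");
});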
Postman test pipeline in Azure Devops
Information about the postman collections
- Collections folder include postman collections for Platform API, Storage API, APP API.
- Platform API uses Platform.postman_environment as an environment file.
- Storage, App API and Negative tests use App.postman_environment as an environment file.
- One has to fill in the values (testdata) in the environment file based on the environment under test.
- The collections have steps that authenticate a user and set the appropriate cookies.
Run Postman tests against a test environment.
- Open Postman and import the Postman collection file and the corresponding environment file.
- Select the environment file and fill in the necessary information for the required collection.
- Required Test data for App / Storage / Negative Tests Collection are envUrl, org(appOwner, app(level2-app), testUserName(level2LoginUser), testUserPassword(use same password for two users), level3-app, level1-app, testUserName2(level1LoginUser)
- Required Test data for Platform are envUrl, org(appOwner), app(level2-app), partyID, SSN, OrgNr, userID
- Open the Postman runner -> Select the collection and environment and click ‘Start Run’
Note: newman can be used to run a Postman collection from the command line interface. | http://docs.altinn.studio/teknologi/altinnstudio/development/handbook/test/postman/ | 2021-01-16T05:51:35 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.altinn.studio
Train the Model
This topic shows you how to run experiments and develop a model
using the
fit.py file.
The
fit.py script tracks metrics, mean squared error
(MSE) and R2, to help compare the results of different
experiments. It also writes the fitted model to a
model.pkl file.
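The script itself is not reproduced on this page, but a minimal fit.py consistent with this description might look like the following sketch (the choice of feature and target columns is an assumption):

# fit.py - minimal sketch: train a regressor on the Iris data,
# report MSE and R2, and write the fitted model to model.pkl
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

iris = load_iris()
X = iris.data[:, 2].reshape(-1, 1)   # petal length (assumed feature choice)
y = iris.data[:, 3]                  # petal width (assumed target choice)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
predictions = model.predict(X_test)

print("MSE:", mean_squared_error(y_test, predictions))
print("R2:", r2_score(y_test, predictions))

with open("model.pkl", "wb") as f:
    pickle.dump(model, f)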
- Navigate to the Iris project's page.
- Run the fit.py script to generate the model.pkl file.
- Select the model.pkl file and click Add to Project. This saves the model to the project filesystem, available on the project's Files page. We will now deploy this model as a REST API that can serve predictions. | https://docs.cloudera.com/machine-learning/1.0/models/topics/ml-train-the-model.html | 2021-01-16T06:11:23 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.cloudera.com
What is RPi.GPIO
RPi.GPIO is a Python library used to control GPIO on the Raspberry Pi. It is a control library developed on the basis of wiringPi: the bottom layer is written in C, while the upper control layer is written in Python. We have ported RPi.GPIO to the VIMs, so you can control the 40-pin header on a VIM by writing a Python program.
Begin to Use RPi.GPIO
Verify that RPi.GPIO is installed correctly
There are two locations to confirm.
- In Python 2, you can check with this command:
cat /usr/local/lib/python2.7/dist-packages/RPi.GPIO-0.6.3.post1.egg-info | grep "KHADAS"
If the package is installed correctly, you will see the matching line.
- In Python 3, you can check with this command:
cat /usr/local/lib/python3.6/dist-packages/RPi.GPIO-0.6.3.post1.egg-info | grep "KHADAS"
Again, you will see the matching line.
How to Write a Python Program to Control GPIO
- Import the required libraries.
If you need other libraries, import them at the beginning of the program.
- Clean up when you shut down the program.
Because the program allocates resources, it must release them when it exits (call GPIO.cleanup()).
simple.py, a Simple Example
This program simply changes the pin level of GPIO.BCM15.
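The listing is not shown on this page; a minimal simple.py consistent with that description might look like this (the 0.5-second toggle interval is an assumption):

# simple.py - toggle BCM pin 15 until interrupted
import time
import RPi.GPIO as GPIO

GPIO.setmode(GPIO.BCM)          # use BCM pin numbering
GPIO.setup(15, GPIO.OUT)        # configure pin 15 as an output

try:
    while True:
        GPIO.output(15, GPIO.HIGH)
        time.sleep(0.5)
        GPIO.output(15, GPIO.LOW)
        time.sleep(0.5)
except KeyboardInterrupt:
    pass
finally:
    GPIO.cleanup()              # release the pins on exit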
How to Run Your Program
- Run with python2
You can use Ctrl + C to stop it, and you will see the program's output in your terminal.
- Run with python3
Stopping it works the same way as with Python 2.
Note: RPi.GPIO includes many more functions than just setting pin outputs and reading pin levels. This is only a brief introduction; further usage is left for users to explore themselves. | https://docs.khadas.com/vim1/HowToUseRPiGPIO.html | 2021-01-16T05:25:39 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.khadas.com
Create a Cloud Connection for AWS Security Hub
To fully use this feature, you must have the following permissions in your account:
- Read Cloud Connections
- Write Cloud Connections
You can use these instructions to sync your AWS account with your AMP account. Specifically, this action will sync your AMP account with AWS Security Hub where Armor will send security updates.
To complete these instructions, you must be able to access your AWS console.
Review Pre-Deployment Considerations
Before you configure your AMP and AWS account, review the following pre-deployment considerations:
Security Findings
When you sync your AMP account with AWS Security Hub, Armor will send the following information to AWS Security Hub:
Exchanging Account Information
To properly sync your AMP account with AWS, the Armor AWS Account will assume a role in your AWS account. To accomplish this, in AMP you will copy the Armor AWS account number and a unique external ID, and then paste into your AWS account. Afterwards, you will receive an AWS-generated ARN from the role, which you will then paste into AMP.
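Conceptually, the role you create ends up with a cross-account trust policy along these lines (the account ID and external ID below are placeholders for the values copied from AMP):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": { "StringEquals": { "sts:ExternalId": "example-external-id" } }
    }
  ]
}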
ASFF Types
The following table describes the ASFF-formatted finding types for the security finding that are sent to AWS Security Hub.
Scoring Types
The following table describes the Severity.Product scores and the Severity.Normalized scores for the security findings that are sent to AWS Security Hub.
Updated Fields for Findings
The following fields will be updated:
- The recordState will change to archived if the vulnerability or malware is no longer valid.
- The updatedAt will reflect the most recent timestamp that the finding was updated.
Create a Cloud Connection account for AWS.
Additional Documentation
To learn about the basics of Cloud Connections, see ANYWHERE Cloud Connections.
Was this helpful? | https://docs.armor.com/display/KBSS/Create+a+Cloud+Connection+for+AWS+Security+Hub.mobile.phone | 2021-01-16T06:26:36 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.armor.com |
# Rosters & Transfers
# Player Pool
The Player Pool is the area of each team where players invited to the team are listed publicly. Players within the Player Pool that are not on the Active Roster are counted as Mercenaries (see the Mercenaries page on the sidebar).
# Active Roster
The Active Roster contains each and every player you count as a core or substitute player and can be used throughout the season. Players need to be requested to be transferred from the Player Pool to the Active Roster prior to their use in an official match.
# Transfers
Player transfers are reviewed on a weekly basis, and you can expect those transfers to be processed typically within 24 hours of the last match of the week being played.
← Mercenaries Disputes → | https://docs.rsl.tf/league/rules/rosters-and-transfers.html | 2021-01-16T06:00:41 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.rsl.tf |
Getting Started¶
Intended audience: beginners, all, Programming language: all
In this section we will guide you step-by-step to help you getting started with Tango-Controls.
We assume that Tango-Controls has been already installed in your environment. Otherwise, if you have to install Tango-Controls on your own, please, read the installation guide first: Installation
Note
You may identify yourself with one of the following roles and use the provided links:
Table of contents of this section:
- First steps with Tango Controls
- End-user applications guide
- How to develop for Tango Controls
- Getting started with JTango (Java implementation of Tango-Controls)
- Getting started with cppTango (C++ implementation of Tango-Controls)
- Writing your first C++ TANGO client
- TANGO C++ Quick start
- Your first C++ TANGO device class
- Getting started with PyTango (Python implementation of Tango-Controls)
- Administration applications guide | https://tango-controls.readthedocs.io/en/latest/getting-started/index.html | 2021-01-16T04:52:33 | CC-MAIN-2021-04 | 1610703500028.5 | [] | tango-controls.readthedocs.io |
What was decided upon? (e.g. what has been updated or changed?) Drop down menu will limit search by scope (all resources, library catalog, articles, course reserves, Worldcat)
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) 61.7% of 891 respondants thought a drop down option to narrow searching was helpful. Among the points made in the committee discussion: (1) students felt “catalog” alone was confusing, as opposed to “library catalog”; (2) Once you are on the ILS user interface, you are not looking for anything but resources (don’t put extraneous items like news here). Survey participants’ comments expressed significant confusion over what each option means and the distinctions between them. Comments also indicated a strong desire for a streamlined search interface where one could simply type in keywords and search everything by default without the distraction of options, refining their results as needed afterwards; Google is cited as an example. The top choice was made “all resources” for this reason. Other responses from the survey: • Worldcat is heavily used; • “Catalog” confuses users and needs to be explained; • Databases is useful if the search is database content, not titles; • “Catalog + Databases” is confusing to the user who may think they are not searching the databases in the catalog at all; suggest removing this option.
Who decided this? (e.g. what unit/group) User Interface
When was this decided?
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/10/option-of-drop-down-menu-in-search-box/ | 2021-01-16T06:47:12 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
Face detection with Computer Vision
Computer Vision can detect human faces within an image and generate the age, gender, and rectangle for each detected face.
Note
This feature is also offered by the Azure Face service. See this alternative for more detailed face analysis, including face identification and pose detection.
Face detection examples
The following example demonstrates the JSON response returned by Computer Vision for an image containing a single human face.
{ "faces": [ { "age": 23, "gender": "Female", "faceRectangle": { "top": 45, "left": 194, "width": 44, "height": 44 } } ], "requestId": "8439ba87-de65-441b-a0f1-c85913157ecd", "metadata": { "height": 200, "width": 300, "format": "Png" } }
The next example demonstrates the JSON response returned for an image containing multiple human faces.
{ "faces": [ { "age": 11, "gender": "Male", "faceRectangle": { "top": 62, "left": 22, "width": 45, "height": 45 } }, { "age": 11, "gender": "Female", "faceRectangle": { "top": 127, "left": 240, "width": 42, "height": 42 } }, { "age": 37, "gender": "Female", "faceRectangle": { "top": 55, "left": 200, "width": 41, "height": 41 } }, { "age": 41, "gender": "Male", "faceRectangle": { "top": 45, "left": 103, "width": 39, "height": 39 } } ], "requestId": "3a383cbe-1a05-4104-9ce7-1b5cf352b239", "metadata": { "height": 230, "width": 300, "format": "Png" } }
Use the API
The face detection feature is part of the Analyze Image API. You can call this API through a native SDK or through REST calls. Include
Faces in the visualFeatures query parameter. Then, when you get the full JSON response, simply parse the string for the contents of the
"faces" section. | https://docs.microsoft.com/en-us/azure/cognitive-services/computer-vision/concept-detecting-faces | 2021-01-16T07:07:52 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['images/woman_roof_face.png', 'Vision Analyze Woman Roof Face'],
dtype=object)
array(['images/family_photo_face.png', 'Vision Analyze Family Photo Face'],
dtype=object) ] | docs.microsoft.com |
Backup Service
The Backup Service provides automatic scheduled backup, manual backup, and manual restore of DSE cluster data.
The OpsCenter Backup Service allows scheduling an automatic backup or running a manual backup of DSE cluster data. A backup is a snapshot of all on-disk data files (SSTable files) stored in the data directory. Use OpsCenter to schedule and manage backups, and restore from those backups, across all registered DataStax Enterprise (DSE) clusters.
OpsCenter stores backups intelligently, allowing you to fully recreate the state of the database at the time of each backup without duplicating files.
Backups are stored locally on each node (On Server). If you configured an additional backup location (such as Amazon S3), OpsCenter creates a manifest for each backup that contains a list of the SSTables in that backup, and only uploads new SSTable files.
You can schedule backups to run automatically on a recurring interval, or manually run one-off backups on a scheduled or ad hoc basis.
Backup considerations
Backups can be taken per datacenter, per keyspace, for multiple keyspaces, or for all keyspaces in the cluster while the system is online.
There must be enough free disk space on the node to accommodate making snapshots of data files. A single snapshot requires little disk space. However, snapshots cause disk usage to grow more quickly over time because a snapshot prevents obsolete data files from being deleted.
If a cluster includes DSE Search or DSE Analytics nodes, a backup job that includes keyspaces with DSE Search data or DSE Analytics nodes will save the associated data. Any Solr indexes are recreated upon restore.
Restoring data
When restoring data, you can restore from a previous backup, to a specific point in time, or restore to a different cluster. Restoring to a different cluster is known as cloning, which supports different workflows.
In addition to keyspace backups, commit log backups are also available to facilitate point-in-time restores for finer-grained control of the backup data. Point-in-time restores are available after enabling commit log backups in conjunction with keyspace backups. Similar to keyspace backups, the commit log archives are retained based on a configurable retention policy. | https://docs.datastax.com/en/opscenter/6.7/opsc/online_help/services/opscBackupService.html | 2021-01-16T06:36:40 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.datastax.com |
EditorFileSystem
Resource filesystem, as the editor sees it.
Description
This object holds information of all resources in the filesystem, their types, etc.
Note: This class shouldn't be instantiated directly. Instead, access the singleton using EditorInterface.get_resource_filesystem.
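As a sketch of typical usage from an editor plugin (a minimal GDScript example for Godot 3.x; the callback name is arbitrary):

tool
extends EditorPlugin

func _enter_tree():
    var fs = get_editor_interface().get_resource_filesystem()
    fs.connect("filesystem_changed", self, "_on_filesystem_changed")
    fs.scan()  # trigger a rescan of the project files

func _on_filesystem_changed():
    print("Project filesystem was rescanned")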
Signals
- filesystem_changed ( )
Emitted if the filesystem changed.
- resources_reimported ( PoolStringArray resources )
Emitted if a resource is reimported.
- resources_reload ( PoolStringArray resources )
Emitted if at least one resource is reloaded when the filesystem is scanned.
- sources_changed ( bool exist )
Emitted if the source of any imported file changed.
Method Descriptions
- String get_file_type ( String path )
Gets the type of the file, given the full path.
- EditorFileSystemDirectory get_filesystem ( )
Gets the root directory object.
- EditorFileSystemDirectory get_filesystem_path ( String path )
Returns a view into the filesystem at path.
- float get_scanning_progress ( )
Returns the scan progress as a value from 0 to 1 if the filesystem is being scanned.
- bool is_scanning ( )
Returns true if the filesystem is being scanned.
- void scan ( )
Scan the filesystem for changes.
- void scan_sources ( )
Check if the source of any imported resource changed.
- void update_file ( String path )
Updates the information about a file. Call this if an external program (not Godot) modified the file.
- void update_script_classes ( )
Scans the script files and updates the list of custom class names. | https://docs.godotengine.org/ko/stable/classes/class_editorfilesystem.html | 2021-01-16T05:14:38 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.godotengine.org |
This documentation enables you to integrate with inteliver easily. Using inteliver you can apply machine learning and A.I. algorithms on your data on-the-fly.
Chapter 1 Basics Apply A.I. and machine learning algorithms on your data on-the-fly.
Chapter 2 Getting Started Integrate with inteliver in less than a minute and use the power of A.I. to deliver tailored data to your customers.
Chapter 3 Image Management Service Using inteliver image management service you can apply machine learning and image processing algorithms on your image resources on-the-fly.
Chapter 4 Inteliver Libraries Inteliver tries to minimize the integration process using intuitive url modifier commands and by providing SDKs in different programming languages.
Chapter 5 Image Management Examples This section is dedicated to examples of different selection and operation for image management service.
Chapter 6 API Reference Complete API reference for image management service and each operatin and selection arguments. Inteliver API is in Beta Version. | https://docs.inteliver.com/ | 2021-01-16T06:39:26 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.inteliver.com |
What was decided upon? (e.g. what has been updated or changed?)
Current open recurring orders were divided by database and package for P2E migration form
Why was this decided? (e.g. explain why this decision was reached. It may help to explain the way a procedure used to be handled pre-Alma) To get the order assignment type as close to correct as possible in Alma
Who decided this? (e.g. what unit/group) E-Resources
When was this decided? Pre-implementation
Additional information or notes. | https://docs.library.vanderbilt.edu/2018/10/11/p2e-migration-form-recurring-orders/ | 2021-01-16T05:50:46 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.library.vanderbilt.edu |
Lync Mobile users cannot sign in after they update to client version 5.4
Symptoms
After Lync users update to the latest Microsoft Lync Mobile 2013 client (5.4), they cannot sign in, and they receive the following error message:
Can't verify the certificate from the server. Please contact your support team.
Additionally, users may see the following entry or entries in the logs if the issue occurred on an iOS device:
2014-04-23 15:18:31.230 Lync[247:6975000] INFO TRANSPORT CHttpRequestProcessor.cpp/173:Received response of request(UcwaAutoDiscoveryRequest) with status = 0x22020002 2014-04-23 15:18:31.230 Lync[247:6975000] INFO TRANSPORT CHttpRequestProcessor.cpp/201:Request UcwaAutoDiscoveryRequest resulted in E_SslError (E2-2-2). The retry counter is: 0 2014-04-23 15:18:31.230 Lync[247:6975000] INFO TRANSPORT CHttpRequestProcessor.cpp/266:Sending event to main thread for request(0x13dede8) 2014-04-23 15:18:31.230 Lync[247:3b2ae18c] INFO APPLICATION CTransportRequestRetrialQueue.cpp/822:Req. completed, Stopping timer. 2014-04-23 15:18:31.230 Lync[247:3b2ae18c] ERROR APPLICATION CUcwaAutoDiscoveryGetUserUrlOperation.cpp/325:Request failed. Error - E_SslError (E2-2-2) 2014-04-23 15:18:31.351 Lync[247:3b2ae18c] INFO UI CMUIUtil.mm/409:Mapping error code = 0x22020002, context = , type = 201 2014-04-23 15:18:31.351 Lync[247:3b2ae18c] INFO UI CMUIUtil.mm/1722:Mapped error message is 'We can't verify the certificate from the server. Please contact your support team.
Resolution
To resolve this issue, install your Enterprise Root CA Certificate on the iOS Device. You can do this manually or by using the Apple Configurator.
More Information
The cause of each message is slightly different, but both errors are caused by the inability to verify the authenticity of a certificate or certification authority.
For more information, go to the following Apple website: Digital certificates - iOS Deployment Reference
Or, contact Apple Enterprise Support for help with managing your iOS devices.
The third-party products that this article discusses are manufactured by companies that are independent of Microsoft. Microsoft makes no warranty, implied or otherwise, about the performance or reliability of these products.
Still need help? Go to Microsoft Community. | https://docs.microsoft.com/en-us/SkypeForBusiness/troubleshoot/server-sign-in/cannot-sign-in-mobile-client | 2021-01-16T06:44:24 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.microsoft.com |
Import Files
In MapStore it is possible to add map context files or vector files to a map. This operation can be performed by clicking on the Burger menu button
from the Menu Bar and by selecting the
option. Following these steps the import screen appears:
Here the user, in order to import a file, can drag and drop it inside the import screen or select it from the folders of the local machine through the
button. Actually there's the possibility to import two different types of files:
Map context files (supported formats: MapStore legacy format, WMC)
Vector layer files (supported formats: shapefiles, KML/KMZ, GeoJSON and GPX)
Warning
Shapefiles must be contained in .zip archives.
Export and Import map context files
A map context is, for example, the file that a user downloads when exporting a map from the Burger Menu. Map contexts can be exported in two different formats:
- The MapStore legacy format file is an export in json format of the current map context state: current projections, coordinates, zoom, extent, layers present in the map, widgets and more (additional information can be found in the Maps Configuration section of the Developer Guide).
Adding a MapStore configuration file the behavior is similar to the following:
- The WMC (Web Map Context) file is an xml format where only WMS layers present in the map are exported, including their settings related to projections, coordinates, zoom and extent (additional information can be found in the Maps Configuration section of the Developer Guide).
Adding a WMC configuration file the behavior is similar to the following:
Warning
Adding a map context file the current map context will be overridden.
Import vector files
Importing vector files, the Add Local Vector Files window opens:
In particular, from this window, it is possible to:
Choose the layer (when more than one layer is import at the same time)
Set the layer style or keep the default one
Toggle the Zoom on the vector files
Once the settings are done, the files can be added with the
button and they will be immediately available in the TOC nested inside the Imported layers group. For example:
Warning
Currently it is not possible to read the Attribute Table of the imported vector files; for this reason the Layer Filter and the creation of Widgets are also not available for those layers. | https://mapstore.readthedocs.io/en/latest/user-guide/import/ | 2021-01-16T05:29:05 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../img/import/import-screen.jpg', None], dtype=object)
array(['../img/import/export-import.gif', None], dtype=object)
array(['../img/import/wmc_import.gif', None], dtype=object)
array(['../img/import/add-vector.jpg', None], dtype=object)
array(['../img/import/local-files-added.jpg', None], dtype=object)] | mapstore.readthedocs.io |
Text Editor Toolbar
With the Text Editor Toolbar it is possible to customize the text by modifying the following aspects:
Font to choose the text font (Inherit, Arial, Georgia, Impact, Tahoma, Time New Roman, Verdana)
Block Type by choosing between the available ones (Normal, H1, H2, H3, H4, H5, H6, Blockquote, Code)
Text style to insert text in Bold
, Italic
, Underline
or Strikethrough
Monospace
to insert the same space between words
Alignment
inside the text window (Left, Center, Right or Justify)
Color Picker
to change the text color
Bullet list to create a Unordered list
or Ordered list
Indent/Outdent
to indent the text in relation to the left margin or to the right margin
Link
to configure a hyperlink for the selected portion of text. The GeoStory editor can define hyperlinks to external web pages by choosing the External link option in the Link target dropdown menu and entering the related URL. As an alternative, it is also possible to define a hyperlink to other sections of the same GeoStory by choosing one of the sections available in the Link target dropdown menu.
Note
In order to setup an hyperlink to an external website, the protocol must be specified (e.g.,
http:// or
https://).
- Remove
to remove the formatting | https://mapstore.readthedocs.io/en/latest/user-guide/text-editor-toolbar/ | 2021-01-16T05:14:09 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['../img/text-editor-toolbar/text-editor-toolbar.jpg', None],
dtype=object)
array(['../img/text-editor-toolbar/hyperlink_menu.jpg', None],
dtype=object) ] | mapstore.readthedocs.io |
Installation / Webserver / NGINX Configuration
Note: You are currently reading the documentation for Bolt 3.6. Looking for the documentation for Bolt 4.0 instead?
NGINX is a high-performance web server that is capable of serving thousands of request while using fewer resources than other servers like Apache.
Unlike Apache, NGINX focuses on performance and as such does not have the
concept of
.htaccess files that applications such as Bolt use to set per-site
configuration. Below are details on how to effect the same changes for you Bolt
sites.
Configuration
NGINX configuration best is broken up into site configuration files that are unique to an individual site, and common, or "global", configuration settings that can be reused by individual sites.
Individual Site
Generally, NGINX site configuration files live in
/etc/nginx/conf.d/ and are
loaded automatically when their file names end in
.conf.
It is good practice to name your configuration file after the domain name of
your site, in this example our site is
example.com so the configuration file
would be called
example.com.conf.
An example of what a
/etc/nginx/conf.d/example.com.conf file might look like:
server {
    server_name example.com;

    # Logging
    access_log /var/log/nginx/example.com.access.log;
    error_log  /var/log/nginx/example.com.error.log;

    # Site root
    root /var/www/sites/example.com/public;
    index index.php;

    # Bolt specific
    include global/bolt.conf;

    # PHP FPM
    include global/php-fpm.conf;

    # Restrictions
    include global/restrictions.conf;
}
NOTE: You will need to customise/change the values for the
server_name,
access_log,
error_log and
root parameters to match the domain name and
path locations relevant to your system.
Common
Common, or "global", configuration settings are stored in a directory under
/etc/nginx/. For our examples we use the
/etc/nginx/global/ directory for
no other reason besides semantics.
This section contains details on the following files:
bolt.conf
The
bolt.conf file define location matches common to all of your Bolt sites
on a host.
# Default prefix match fallback, as all URIs begin with /
location / {
    try_files $uri $uri/ /index.php?$query_string;
}

# Bolt dashboard and backend access
#
# We use two location blocks here, the first is an exact match to the dashboard
# the next is a strict forward match for URIs under the dashboard. This in turn
# ensures that the exact branding prefix has absolute priority, and that
# restrictions that contain the branding string, e.g. "bolt.db", still apply.
#
# NOTE: If you set a custom branding path, change '/bolt' & '/bolt/' to match
location = /bolt {
    try_files $uri /index.php?$query_string;
}

location ^~ /bolt/ {
    try_files $uri /index.php?$query_string;
}

# Generated thumbnail images
location ^~ /thumbs {
    try_files $uri /index.php; #?$query_string;
    access_log off;
    log_not_found off;
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
    add_header X-Koala-Status sleeping;
}

# Don't log, and do cache, asset files
# Adjust the extension list below to match the asset types you serve
location ~* \.(?:css|js|gif|jpe?g|png|svg|ico|ttf|woff|woff2)$ {
    access_log off;
    log_not_found off;
    expires max;
    add_header Pragma public;
    add_header Cache-Control "public, must-revalidate, proxy-revalidate";
}

# Don't create logs for favicon.ico, robots.txt requests
location ~ /(?:favicon.ico|robots.txt)$ {
    log_not_found off;
    access_log off;
}

# Redirect requests for */index.php to the same route minus the "index.php" in the URI.
location ~ /index.php/(.*) {
    rewrite ^/index.php/(.*) /$1 permanent;
}
restrictions.conf
The
restrictions.conf file defines a common set of restrictions for all of
your Bolt sites on a host.
# Block access to "hidden" files
# i.e. file names that begin with a dot "."
location ~ /\. {
    deny all;
}

# Apache .htaccess & .htpasswd files
location ~ /\.(htaccess|htpasswd)$ {
    deny all;
}

# Block access to Sqlite database files
location ~ /\.(?:db)$ {
    deny all;
}

# Block access to Markdown, Twig & YAML files directly
location ~* /(.*)\.(?:markdown|md|twig|yaml|yml)$ {
    deny all;
}
php-fpm.conf
The
php-fpm.conf file define the settings for the PHP FastCGI Process Manager
used for you Bolt site(s).
location ~ [^/]\.php(/|$) {
    try_files /index.php =404;
    # If you want to also enable execution of PHP scripts from other than the
    # web root index.php, you can change the parameter above to:
    #
    #try_files $fastcgi_script_name =404;

    fastcgi_split_path_info ^(.+?\.php)(/.*)$;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;

    # Mitigate vulnerabilities
    fastcgi_param HTTP_PROXY "";

    # Set the HTTPS parameter if not set in fastcgi_params
    fastcgi_param HTTPS $https if_not_empty;

    # If using TCP sockets uncomment the next line
    #fastcgi_pass 127.0.0.1:9000;

    # If using UNIX sockets UPDATE and uncomment the next line
    #fastcgi_pass unix:/run/php-fpm/;

    # Include the FastCGI parameters shipped with NGINX
    include fastcgi_params;
}
On larger multi-CPU hosts with several busy sites, you will no doubt want to use several FPM pools, with each pool defining their own socket. You can simply use one of these files per-pool.
NOTE: You must enable one of the
fastcgi_pass parameters, or NGINX
will attempt to initiate a download of the
index.php file instead of
executing it.
Subfolders¶
To install Bolt within a subfolder, a location describing this must be added.
location ^~ /subfolder/(.*)$ {
    try_files $uri $uri/ /subfolder/index.php?$query_string;
}
Two previously added locations must be amended.
location ^~ /subfolder/bolt/(.*)$ {
    try_files $uri $uri/ /subfolder/index.php?$query_string;
}

# Backend async routes
location ^~ /subfolder/async/(.*)$ {
    try_files $uri $uri/ /subfolder/index.php?$query_string;
}
NGINX Location Matching
Location matching in NGINX is usually the part that causes the most headaches for people. NGINX has a strict matching priority which is explained in greater detail in their documentation.
In summary, locations are matched based on the type of modifier used. In order of priority: an exact match (=) wins first; next, a prefix match marked with ^~ (when it is the longest matching prefix) is used and regular expressions are skipped; then regular expressions (~ and ~*) are evaluated in the order they appear in the configuration; finally, the longest plain prefix match is used.
Note:
- If an exact match
=is found, the search terminates
/matches any request as all requests begin with a
/, but regular expressions, and longer prefixed locations will be matched first
^~ /path/matches any request starting with
/path/and halts searching, meaning further location blocks are not checked
- If the longest matching prefix location has the
^~modifier then regular expressions are not checked
~* \.(gif|jpg|png)$matches any request ending in
gif,
jpg, or
png. But if these image files are in the
/path/directory, all requests to that directory are handled by the
^~ /path/location block (if set), as it has ordering priority
/path/matches any request starting with
/path/and continues searching, and will be matched only if regular expressions do not match
The helpful NGINX Location Match tool is very useful for testing location match blocks, and gives visual graphs to explain NGINX's logic.
Couldn't find what you were looking for? We are happy to help you in the forum, on Slack or on Github. | https://docs.bolt.cm/3.6/installation/webserver/nginx | 2021-01-16T05:11:09 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.bolt.cm |
Free Training & Support button (top dashboard).
You can also select this option from the main customer landing page.
This will allow you to schedule a training session with one of our knowledgeable Educational staff members.
Click on Book A Call With Us (using the arrow button).
Select a Date from the available Calendar.
Once the date has been selected, you will be prompted for a timeframe.
Once you have selected the time, you will need to confirm the details.
A form will appear that will require your input.
Enter your Name, Email, and Add any additional Guests by selecting the Add Guests button. Provide your Phone Number and any additional details in the comments section. Click on Schedule Event.
You will receive a confirmation email with the meeting details.
You can add the meeting invite to your Google Calendar and/or ICalOutlook. | https://docs.flowz.com/doc-78.html | 2021-01-16T05:59:54 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.flowz.com |
Contact Support
For contact information, see the main Support contact page.
For detailed information about working with Splunk Support, see Working with Support and the Support Portal...! | https://docs.splunk.com/Documentation/Splunk/6.6.1/Troubleshooting/ContactSplunkSupport | 2021-01-16T06:53:49 | CC-MAIN-2021-04 | 1610703500028.5 | [array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'],
dtype=object) ] | docs.splunk.com |
Important
You are viewing documentation for an older version of Confluent Platform. For the latest, click here.
Using the Confluent REST Proxy Security Plugins
The installation location of the plugins is:
<path-to-confluent>/share/java/kafka-rest/confluent-security-plugins-common-<version>.jar
<path-to-confluent>/share/java/kafka-rest/confluent-kafka-rest
Once the installation is complete, you must add the following config in the Confluent REST Proxy config file
(e.g.
/etc/kafka-rest/kafka-rest.properties) to activate the plugins.
kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension
kafka.rest.resource.extension.class
Fully qualified class name of a valid implementation of the interface RestResourceExtension. This can be used to inject user defined resources like filters. Typically used to add custom capability like logging, security, etc
- Type: string
- Default: “”
- Importance: low
Authentication Mechanisms
The authentication mechanism for the incoming requests is determined by the
confluent.rest.auth.propagate.method config. The only supported mechanism at present is SSL. It is required
to set the
ssl.client.auth to true in the Confluent REST Proxy config to use the SSL mechanism. Failing
which, all requests
would be rejected with a HTTP error code of 403.
The incoming X500 principal from the client is used as the principal while interacting
with all requests to the Kafka Broker. While connecting to the broker, the authentication happens via
SSL/SASL depending on the value of
client.security.protocol in the Confluent REST Proxy config. The
details of how the propagation happens and how the security needs to be configured can be found at
Principal Propagation
On a high level, the following are required for each of the security protocols:
- SSL - keystore loaded with all certificates corresponding to all required principal; configured via
client.ssl.keystore.type
- SASL - JAAS config file with
KafkaClientsection containing all principals along with its login module and options; configured via
-Djava.security.auth.login.config.
Refer to Kafka Security for more details.
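Putting these settings together, a kafka-rest.properties using SSL-based principal propagation might contain entries along these lines (keystore paths and passwords are placeholders, and the exact set of SSL settings depends on your deployment):

# Activate the security resource extension
kafka.rest.resource.extension.class=io.confluent.kafkarest.security.KafkaRestSecurityResourceExtension

# Authenticate incoming requests with SSL client certificates
confluent.rest.auth.propagate.method=SSL
ssl.client.auth=true
ssl.keystore.location=/var/private/ssl/rest-proxy.keystore.jks   # placeholder path
ssl.keystore.password=changeme                                   # placeholder

# Propagate the principal to the brokers over SSL
client.security.protocol=SSL
client.ssl.keystore.location=/var/private/ssl/client.keystore.jks  # placeholder path
client.ssl.keystore.password=changeme                              # placeholder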
Configuration
confluent.rest.auth.propagate.method
The mechanism used to authenticate REST Proxy requests. When broker security is enabled, the principal from this authentication mechanism is propagated to Kafka broker requests.
- Type: string
- Default: “SSL”
- Importance: low | https://docs.confluent.io/5.0.0/confluent-security-plugins/kafka-rest/quickstart.html | 2021-01-16T06:11:13 | CC-MAIN-2021-04 | 1610703500028.5 | [] | docs.confluent.io |