Columns: content (string), url (string), timestamp (timestamp[ms]), dump (string), segment (string), image_urls (string), netloc (string)
Session locations are tracked by partitions. Partitions are evenly distributed across the hosting peers of a Service Space. The management of these partitions is performed by a Partition Manager. Each time a cluster membership change is detected, e.g. when a peer joins a Service Space, a partition rebalancing singleton service is responsible for computing a new distribution plan so that partitions remain evenly distributed. For instance, the rebalancing of 12 partitions works as illustrated in the sketch after this passage. When a peer hosting some partitions fails, the lost partitions are repopulated by the peers becoming the new owners of these partitions. The State Manager is responsible for updating the session locations tracked by partitions when sessions are created, evacuated or destroyed.
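The rebalancing illustration did not survive the Confluence export. As a rough stand-in, here is a hedged Python sketch of how an even distribution plan for 12 partitions across the hosting peers of a Service Space might be computed; the function and peer names are illustrative assumptions, not the actual Partition Manager API.

```python
# Illustrative sketch only: compute an even distribution plan for partitions
# across the current hosting peers of a Service Space (round-robin assignment).
def compute_distribution_plan(partitions, peers):
    """Assign each partition to a peer so that peer counts differ by at most one."""
    if not peers:
        raise ValueError("a Service Space needs at least one hosting peer")
    plan = {peer: [] for peer in peers}
    for index, partition in enumerate(partitions):
        plan[peers[index % len(peers)]].append(partition)
    return plan

# Example: 12 partitions over 3 peers -> 4 partitions per peer.
print(compute_distribution_plan(list(range(12)), ["peer-1", "peer-2", "peer-3"]))
```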
http://docs.codehaus.org/exportword?pageId=9764992
2014-12-18T11:36:24
CC-MAIN-2014-52
1418802766267.61
[]
docs.codehaus.org
By Dr. Leonard Sender, director of the Hyundai Cancer Institute at CHOC Children’s When it comes to treating an adolescent or young adult with cancer, their medical needs are unique. Equally critical are the broader aspects of care for these patients, such as oncofertility, psychosocial needs, patients’ legal rights and cancer survivorship. The list of what must factor into complete and robust care for this vulnerable population continues. Both medical appropriateness and surrounding care will be considered at an upcoming conference co-hosted by the Hyundai Cancer Institute at CHOC Children’s and UC Irvine. The Society for Adolescent and Young Adult Oncology’s inaugural conference will be held Oct. 16 and 17 in Irvine, Calif., and will explore the nuances of cancer treatment for adolescents and young adults. The conference is designed for all healthcare professionals who care for adolescents and young adults with cancer: medical and surgical oncologists; gynecologists; dermatologists; radiation oncologists; and primary care providers, including nurses, nurse practitioners, physician assistants, social workers and psychologists. These professionals will gain a multitude of skills at the conference:
• how to utilize and implement clinical strategies for models of care for adolescent and young adult patients with cancer into current practices;
• how to recognize the importance of prevention and early diagnosis in adolescent and young adult patients with cancer;
• how to recognize the importance of and options for oncofertility planning for adolescent and young adult patients with cancer;
• how to describe the psychosocial needs of adolescents and young adults with cancer and the impact of those needs on treatment and survivorship;
• how to describe the role of adolescent and young adult advocacy and legal issues in oncology care as it relates to the healthcare professional; and
• how to discern the issues, solutions and care of adolescent and young adult patients with leukemia, melanoma, breast cancer and cervical cancer.
Here’s what attendees can expect during the two-day conference: On the morning of Oct. 16, we’ll hear from experts regarding oncofertility. Dr. Laxmi Kondapalli of the University of Colorado, Denver will talk about reducing fertility risks in cancer patients. Dr. Rebecca Block of Oregon Health and Science University will present a fertility preservation decision-making tool for young women. Later that afternoon, we’ll learn more about specific types of cancer in adolescents and young adults, including cervical, melanoma and breast. Much of the next day, Oct. 17, will focus on leukemia: building a leukemia clinic, genomics, tools and a roundtable of nine experts in adolescent and young adult cancer treatment. Over the course of the two days, we’ll hear keynote speeches from Ryan Panchadsaram of the White House’s Office of Science and Technology Policy and Simon Davies, who serves on the board of directors of Teen Cancer America. Throughout the conference’s run, attendees will have plenty of opportunities to network and connect with other professionals who are equally passionate about adolescent and young adult oncology treatment. I hope you’ll consider joining us Oct. 16 and 17. This valuable conference is sure to stimulate ideas and conversation around the unique considerations during the treatment of adolescents and young adults with cancer. Visit for more conference information, including registration details.
Visit to learn more about the Society for Adolescent and Young Adult Oncology.
http://docs.chocchildrens.org/conference-addresses-broad-aspects-of-adolescent-young-adult-cancer-treatment/
2014-12-18T11:24:16
CC-MAIN-2014-52
1418802766267.61
[]
docs.chocchildrens.org
Install Download and install Maven: - 2.2.x or 3.0.x: compatible with any version of SonarQube - 3.1.x: compatible with SonarQube 3.7+.
http://docs.codehaus.org/pages/viewpage.action?pageId=231737124
2014-12-18T11:39:54
CC-MAIN-2014-52
1418802766267.61
[]
docs.codehaus.org
During the first authentication attempt, if the password is correct, the SonarQube database is automatically populated with the new user. The System administrator should assign the user to the desired groups in order to grant them the necessary rights. If a password exists in the SonarQube database, it will be ignored, because the external system password overrides it. Requirements: Works, tested / Should work, not tested / Does not work. Usage & Installation - Install jpam - Download jpam for your system from here - Alternatively: - Copy jpam's native library following these directions - Copy jpam's native library into sonar/bin/<your arch>/lib
http://docs.codehaus.org/pages/viewpage.action?pageId=238911604
2014-12-18T11:30:24
CC-MAIN-2014-52
1418802766267.61
[array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) array(['/s/en_GB/5510/701ab0bfc8a95d65a5559a923f8ed8badd272d36.15/_/images/icons/wait.gif', None], dtype=object) ]
docs.codehaus.org
The Connecthings Flutter plugin allows you to access the GDPR methods and the In-App actions methods from Dart code. Nevertheless, the configuration of the SDK must still be done at the Android and iOS app level. A Flutter application is available on GitHub to show you a concrete implementation. You just have to clone the plugin repository and open it in an Android Studio installation configured for Flutter. You can read the official Herow Flutter plugin documentation at the following address
https://docs.herow.io/sdk/6.3/android/cross-platform-flutter.html
2021-05-06T07:06:40
CC-MAIN-2021-21
1620243988741.20
[]
docs.herow.io
route¶ The route directive adds a single route configuration to the application registry. Attributes¶
pattern - The pattern of the route, e.g. ideas/{idea}. This attribute is required. See Route Pattern Syntax for information about the syntax of route patterns. Note: for backwards compatibility purposes, the path attribute can also be used instead of pattern.
name - The name of the route, e.g. myroute. This attribute is required. It must be unique among all defined routes in a given configuration.
factory - The dotted Python name of a function that will generate a Pyramid context object when this route matches, e.g. mypackage.resources.MyResource. If this argument is not specified, a default root factory will be used.
view - The dotted Python name of a function that will be used as a view callable when this route matches, e.g. mypackage.views.my_view.
xhr - This value should be either true or false. If true, the route will only match requests made via XMLHttpRequest (i.e. requests carrying an X-Requested-With header). If this predicate returns false, route matching continues.
traverse - For example, if the pattern of the route directive is articles/{article}/edit, a traverse argument can be provided to the route directive. A similar combining of routing and traversal is available when a route is matched which contains a *traverse remainder marker in its pattern (see Using *traverse in a Route Pattern). The traverse argument to the route directive allows you to associate route patterns with an arbitrary traversal path without using a *traverse remainder marker; instead you can use other match information. Note that the traverse argument to the route directive is ignored when attached to a route that has a *traverse remainder marker in its pattern.
request_method - A string representing an HTTP method name, e.g. GET, HEAD, DELETE, PUT. If this argument is not specified, this route will match if the request has any request method. If this predicate returns false, route matching continues.
path_info - The value of this attribute represents a regular expression pattern that will be tested against the PATH_INFO WSGI environment variable. If the regex matches, this predicate will be true. If this predicate returns false, route matching continues.
request_param - This value can be any string. A view declaration with this attribute ensures that the associated route will only match when the request has a parameter whose name matches the supplied value. If this predicate returns false, route matching continues.
custom_predicates - Custom predicate callables that must all return true in order for the associated route to "match" the current request. If this predicate returns false, route matching continues. Note this argument is deprecated as of Pyramid 1.5.
view_context - The dotted Python name of a class or an interface that the context of the view should match for the view named by the route to be used. This attribute is only useful if the view attribute is used. If this attribute is not specified, the default (None) will be used. If the view attribute is not provided, this attribute has no effect. This attribute can also be spelled as view_for or for_; these are valid older spellings.
view_permission - The permission name required to invoke the view associated with this route, e.g. edit (see Using Pyramid Security with URL Dispatch for more information about permissions). If the view attribute is not provided, this attribute has no effect. This attribute can also be spelled as permission.
view_renderer - The renderer used for the view attached to this route. If the view attribute is not provided, this attribute has no effect. This attribute can also be spelled as renderer.
use_global_views - When a request matches this route, and view lookup cannot find a view which has a 'route_name' predicate argument that matches the route, try to fall back to using a view that otherwise matches the context, request, and view name (but does not match the route name predicate).
Alternatives¶ You can also add a route configuration via: - Using the pyramid.config.Configurator.add_route() method (a minimal sketch is shown below). See Also¶ See also URL Dispatch.
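For readers following the imperative alternative mentioned above, here is a hedged Python sketch of roughly the same configuration as the ZCML attributes documented on this page. The inline view is only a stand-in for the document's placeholder mypackage.views.my_view, and the pattern/name values are the examples from the attribute descriptions.

```python
# Minimal sketch of the imperative equivalent of the ZCML route directive above,
# using pyramid.config.Configurator.add_route() as mentioned under Alternatives.
from pyramid.config import Configurator
from pyramid.response import Response

def my_view(request):
    # Stand-in for the document's placeholder mypackage.views.my_view.
    return Response(f"idea = {request.matchdict['idea']}")

def main(global_config=None, **settings):
    config = Configurator(settings=settings)
    # name and pattern correspond to the required ZCML attributes;
    # request_method mirrors one of the optional predicate attributes above.
    config.add_route('myroute', 'ideas/{idea}', request_method='GET')
    # Attaching a view to the route takes the place of the ZCML view attribute.
    config.add_view(my_view, route_name='myroute')
    return config.make_wsgi_app()

if __name__ == '__main__':
    from wsgiref.simple_server import make_server
    make_server('127.0.0.1', 8080, main()).serve_forever()
```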
https://docs.pylonsproject.org/projects/pyramid-zcml/en/latest/zcml/route.html
2021-05-06T06:23:45
CC-MAIN-2021-21
1620243988741.20
[]
docs.pylonsproject.org
This tutorial introduces "C++ Concepts", a feature of C++20 (and available to some extent in older GCC versions). You will learn the terminology used in the context of concepts and how to use SeqAn's concepts in your application. This tutorial teaches the very basics of working with concepts. For more background and information on how to implement your own concepts, we recommend: One central design goal of SeqAn is to provide generic algorithms and data structures which can be used for different types without reimplementing the same algorithms over and over again for particular types. This has multiple benefits: improved maintainability due to an additional level of abstraction and, more importantly, the ability to reuse the code with user-provided types. A familiar example of generic code is std::vector and the algorithms in the standard library. They are templates, which means that they can be instantiated with other types. Most often the type cannot be arbitrary, because the template expects a particular interface from the type. A SeqAn example is the local alignment algorithm. It computes the best local match between two sequences over a finite alphabet. The algorithm is generic insofar as it allows any alphabet that offers the minimal interface which is used inside the algorithm (e.g. objects of the alphabet type must be equality comparable). Before C++20, this could not be checked easily, and using the interface with non-conforming types would result in very hard-to-read compiler errors and, consequently, user frustration. In the following part of the tutorial you will learn how to constrain such template arguments of generic functions and data structures and how this can have a huge impact on your code. Here's a shorter example (reproduced in the sketch after this passage): The template parameter t is said to be unconstrained; in theory it can be instantiated with any type. But of course it won't actually compile for all types, because the function template implicitly requires that types provide a + operator. If a type is used that does not have a + operator, this implicitness causes the compiler to fail at the place where that operator is used – and not at the place where the template is instantiated. This leads to very complex error messages for deeply nested code. Constraints are a way of making requirements of template arguments explicit. Constraints can be formulated ad-hoc, but this tutorial only covers concepts. The interested reader can check the documentation to learn about ad-hoc definitions. Concepts are a set of constraints with a given name. Let's assume there is a concept called Addable that requires the existence of a + operator (as previously mentioned, the syntax for defining concepts is not covered here). The following snippet (also in the sketch below) demonstrates how we can constrain our function template, i.e. make the template immediately reject any types that don't satisfy the requirement. The only difference is that we have replaced typename with Addable. If you plug in a type that does not model Addable, you will get a message stating exactly that and not a cryptic template backtrace. The standard library provides a set of predefined concepts. For our example above, the std::integral concept could have been used. Depending on the complexity of your constraint statements, three different syntaxes are available to enforce constraints; all of the following are equivalent.
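The code snippets referenced in this section did not survive extraction. The following is a minimal C++20 sketch of what they plausibly showed, based only on the text above: an add() function template, an assumed Addable concept (its definition is an illustration, since the tutorial deliberately does not cover defining concepts), and the three equivalent constraint syntaxes using the std::integral concept mentioned in the text. It is not SeqAn code.

```cpp
// Minimal C++20 sketch of the snippets referenced above (not SeqAn code).
#include <concepts>

// Unconstrained version: a missing + operator only fails deep inside the template.
template <typename t>
t add_unconstrained(t a, t b) { return a + b; }

// An Addable concept as described in the text: requires a + operator.
// (Illustrative assumption; defining concepts is not covered by the tutorial.)
template <typename t>
concept Addable = requires(t a, t b) { a + b; };

// Constrained version: `typename` is replaced with `Addable`.
template <Addable t>
t add(t a, t b) { return a + b; }

// The three equivalent syntaxes, shown here with std::integral:
// (1) "verbose syntax" - a requires-clause, handy for combining constraints
template <typename t>
    requires std::integral<t>
t add_verbose(t a, t b) { return a + b; }

// (2) "intermediate syntax" - the concept takes the place of `typename`
template <std::integral t>
t add_intermediate(t a, t b) { return a + b; }

// (3) "terse syntax" - constrained placeholders directly in the signature
std::integral auto add_terse(std::integral auto a, std::integral auto b) { return a + b; }

int main()
{
    return (add(1, 2) + add_verbose(3, 4) + add_intermediate(5, 6) + add_terse(7, 8) == 36) ? 0 : 1;
}
```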
(1) The "verbose syntax", especially useful when enforcing multiple constraints: (2) The "intermediate syntax": (3) The "terse syntax": Different constraints can be applied to different template parameters and a single template parameter can be constrained by multiple concepts. Syntaxes can also be combined: Some people confuse concepts with interfaces. Both can be used as an abstraction of concrete types, but interfaces have to be inherited from. → the abstraction is explicit in the definition of the type. Concepts on the other hand "describe properties from the outside". → types don't need to be related and don't need to "know about the concept" to model it. Furthermore, the polymorphism possible with concepts (see below) is faster, because it is resolved at compile-time while interface inheritance is resolved at run-time. In generic programming, "function overloading" and "template specialisation" play an important role. They allow providing generic interfaces and (gradually) more specialised implementations for specific types or groups of types. When a function is overloaded and multiple overloads are valid for a given/deduced template argument, the most-refined overload is chosen: But as soon as we introduce another overload, the compiler will pick the "best" match: The v passed to it the result of to_char(v) and it should be constrained to only accepts types that model seqan3::alphabet. Try calling int to make sure that it does. Similar to function template overloading it is possible to use concepts for partially specialising class and variable templates. This is a typical example of a "type transformation trait". It maps one type to another type; in this case it returns a type that is able to represent the square root of the "input type". This can be used in generic algorithms to hold data in different types depending on the type of the input – in this case we could avoid half of the space consumption for unsigned integral types VS signed integral types. static_assertchecks conditions at compile-time; it can be used to verify whether a type or a combination of types model a concept. In the above case we can use the combination to check the "return type" of the transformation trait. SeqAn uses concepts extensively, for specialisation/overloading, but also to prevent misuse of templates and to clearly specify all public interfaces. We prefer the intermediate syntax and additionally use the verbose expressions if necessary. Unfortunately, doxygen, the system used to generate this documentation, does not handle C++ concepts very well, yet. In some parts of the documentation concepts are called "interfaces", please don't let this confuse you. And the "verbose syntax" introduced above is not visible at all in the automatically generated documentation. That's why it's important to read the detailed documentation section where all requirements are documented. Have a look at the documentation of seqan3::argument_parser::add_positional_option(). It has two template parameters, one seems unconstrained ( typename in the signature) and one is constrained ( validator in the signature). But in fact both are constrained as the detailed documentation reveals. Now, follow the link to seqan3::validator. We will check in the next section whether you understand the documentation for the concept. Remember the tutorial on Parsing command line arguments with SeqAn ? Let's implement our own validator that checks if a numeric argument is an integral square (i.e. 
the user shall only be allowed to enter 0, 1, 4, 9, ...). In the previous section you analysed seqan3::validator. Do you understand the requirements formulated on that page? The concept requires:
- a value_type type member which identifies the type of variable the validator works on. Currently, the SeqAn validators have either value_type double or std::string. Since the validator works on every type that has a common reference type to value_type, a validator with value_type = double works on all arithmetic values;
- an operator();
- a std::string get_help_page_message() const that returns a string that can be displayed on the help page.
As we have noted previously, you can check if your type models seqan3::validator with a static_assert (see the sketch after this passage). To formally satisfy the requirements, your functions don't need the correct behaviour yet; only the signatures need to be fully specified. Complete the struct custom_validator so that it models seqan3::validator and passes the check. You can use an empty main() function for now. The above implementation is of course not yet useful; it should be usable with a main function that parses the -i and -j options mentioned below. Try to think of the correct behaviour of this program. It should print "Yeah!" for the arguments -i 0, -i 4, or -i 144, and/or -j 0 or -j 4. It should fail for the arguments -i 3, and/or -j 144 or -j 3. You have now written your own type that is compatible with our constrained interfaces!
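The skeleton itself is not shown in this extract, so here is a hedged C++ sketch of a struct that formally satisfies the three requirements listed above. The SeqAn3 include path and the exact spelling of the concept check are assumptions and may differ between SeqAn versions; the validation behaviour is intentionally left minimal.

```cpp
// Hedged sketch of a validator skeleton satisfying the three requirements above.
// Behaviour is intentionally minimal: only the signatures matter for the formal check.
#include <string>

struct custom_validator
{
    using value_type = double;                      // 1. the value_type member

    void operator()(value_type const &) const       // 2. a callable operator()
    {
        // Real checks (e.g. "is an integral square") would go here and
        // typically signal failure by throwing an exception.
    }

    std::string get_help_page_message() const       // 3. the help-page message
    {
        return "Value must be an integral square (0, 1, 4, 9, ...).";
    }
};

// With SeqAn3 available, the concept check mentioned in the text would be roughly:
//   #include <seqan3/argument_parser/all.hpp>   // assumed include path
//   static_assert(seqan3::validator<custom_validator>);

int main() {}  // an empty main() is enough for the formal check, as noted above
```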
https://docs.seqan.de/seqan/3-master-user/tutorial_concepts.html
2021-05-06T07:31:02
CC-MAIN-2021-21
1620243988741.20
[]
docs.seqan.de
Remove Transparency Node The Remove Transparency effect negates transparent values in an image. You can use the Remove Transparency node to remove the result of antialiasing around an image. Refer to the following example to connect this effect: Properties
- Name: Allows you to change the name given to the node.
- Threshold: All values above the Threshold represent a transparent value. In this field, you must identify the value above which all alpha values are considered transparent. Alpha is measured from 0 to 255.
- Remove Colour Transparency: Determines which pixels in the Colour-Art (RGB channels) to make fully opaque or fully transparent.
- Remove Alpha Transparency: When selected, the Threshold value is used to determine which pixels in the alpha channel to make fully opaque or fully transparent.
https://docs.toonboom.com/help/harmony-20/premium/reference/node/filter/remove-tranparency-node.html
2021-05-06T06:04:04
CC-MAIN-2021-21
1620243988741.20
[]
docs.toonboom.com
In this document: The distilling process is similar to most other operations in Vinsight except that you have big volume changes: you put lower-alcohol mashes, low wines, heads, tails or hearts into the still, and you get out less volume but higher-alcohol hearts, heads and tails. The ABV features in Vinsight help track how many LALs or PGs you have in and out of the process and how much alcohol you have lost, if any. Vinsight has built-in support for dip charts that let you enter a dip value and have it converted to a volume directly in the bulk operation you are working on. You can find more information on dip charts and how to set them up at Dip Charts. If you have dip charts set up, you will be able to enter a dip value, rather than a volume, when creating or transferring product with operations. You can record and view vessel information such as volume, alcohol by volume (ABV), temperature and specific gravity, and view the progress of ferments over time, in the ‘Lab Analyses‘ part of the App (Analyze > Lab Analyses). You can view the information recorded here on a graph, as well as in table format. This area of the App is highly customisable and you can set it up to include any measurements or information you wish to record. (See Settings > Analyze > Analysis Sets.) For detailed information on how this part of the App works, see Ferments. An example lab analysis for a whiskey ferment is shown below: The ABV measurements recorded here will be carried through to any operations carried out with that vessel. You should record the volume emptied from your mash or wash tanks so that you have an accurate volume that filled the still; in the example below we have filled the still with 400 L. With any operation transferring bulk in and out of vessels, Vinsight will track the alcohol content and alert you if there are significant discrepancies. If you get such a warning, you should check that you have recorded the operation correctly and that your ABV measurements are correct. These warnings are simply there to highlight discrepancies and prompt checks before product moves further through the production cycle. A warning relating to significant changes of alcohol will require you to enter a reason for the discrepancy.
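As a rough illustration of the alcohol bookkeeping described above (this is not Vinsight code), here is a hedged Python sketch of tracking litres of absolute alcohol (LALs) into and out of a still run, assuming LAL = volume in litres × ABV expressed as a fraction; the volumes and strengths below are made-up example figures.

```python
# Illustrative sketch only: LAL (litres of absolute alcohol) bookkeeping for a
# still run, assuming LAL = volume_litres * (abv_percent / 100).
def lal(volume_litres, abv_percent):
    return volume_litres * abv_percent / 100.0

# Hypothetical figures: 400 L of wash at 8% ABV charged into the still,
# with hearts/heads/tails coming out at higher strength but lower volume.
charge_in = lal(400, 8.0)
out = lal(30, 70.0) + lal(4, 80.0) + lal(10, 55.0)   # hearts + heads + tails
loss = charge_in - out
print(f"in: {charge_in:.1f} LAL, out: {out:.1f} LAL, loss: {loss:.1f} LAL")
```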
https://docs.vinsight.net/doing-a-still-run-distilling/
2021-05-06T06:36:52
CC-MAIN-2021-21
1620243988741.20
[]
docs.vinsight.net
Overview The Stock Item Movements Schedule view can be used to facilitate your Material Resources Planning (MRP). It shows you a week by week overview of Stock Movements that have been planned in advance using Vinsight. You are able to easily see where Stock levels will fall short of requirements, and you can generate Purchase Orders directly in the view to make sure new stock arrives on time. In this document:
https://docs.vinsight.net/winemaking/stock-item-movements-schedule/
2021-05-06T06:04:31
CC-MAIN-2021-21
1620243988741.20
[]
docs.vinsight.net
XYDiagram Class Represents a diagram that is used to plot all 2D XY series, except for the Gantt, Radar and Polar series views. Namespace: DevExpress.XtraCharts Assembly: DevExpress.XtraCharts.v19.1.dll Declaration Remarks The XYDiagram class represents the diagram type used to plot series which are displayed using the X and Y axes. For the complete list of supported series views, refer to XY-Diagram. In addition to the settings inherited from the base Diagram and XYDiagram2D classes, the XYDiagram class implements specific properties that allow you to control the following characteristics of a diagram: - settings of the primary X and Y axes (XYDiagram.AxisX, XYDiagram.AxisY); - settings of the secondary X and Y axes (XYDiagram.SecondaryAxesX, XYDiagram.SecondaryAxesY); - the direction in which a diagram and its axes are rotated (XYDiagram.Rotated). An object of the XYDiagram type can be accessed via the ChartControl.Diagram property of a chart control that displays series compatible with this diagram type. Examples The following code shows how to change the XYDiagram.Rotated property value. Note that this code is valid only if the current chart’s diagram (returned by its ChartControl.Diagram or WebChartControl.Diagram property) is an XYDiagram or its descendant.
https://docs.devexpress.com/CoreLibraries/DevExpress.XtraCharts.XYDiagram?v=19.1
2021-05-06T07:25:26
CC-MAIN-2021-21
1620243988741.20
[]
docs.devexpress.com
There is no way to discover the complexity of a query except by exceeding the limit. The complexity limits may be revised in future, and additionally, the complexity of a query may be altered. Request timeout Requests time out at 30 seconds. Reference The GitLab GraphQL reference is available. It is automatically generated from the GitLab GraphQL schema and embedded in a Markdown file. Machine-readable versions are also available.
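To make the complexity and timeout notes concrete, here is a hedged Python sketch of issuing a single query against a GitLab GraphQL endpoint; the instance URL, token placeholder, and query are illustrative assumptions rather than anything taken from this page.

```python
# Illustrative sketch: one query against a GitLab GraphQL endpoint.
# The URL, token, and query below are placeholders for demonstration only.
import requests

GITLAB_GRAPHQL_URL = "https://gitlab.example.com/api/graphql"
TOKEN = "glpat-..."  # a personal access token (placeholder)

query = """
query {
  currentUser {
    username
  }
}
"""

response = requests.post(
    GITLAB_GRAPHQL_URL,
    json={"query": query},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=30,  # requests time out server-side at 30 seconds per the page above
)
response.raise_for_status()
print(response.json())
```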
https://docs.gitlab.com/13.9/ee/api/graphql/index.html
2021-05-06T07:46:28
CC-MAIN-2021-21
1620243988741.20
[]
docs.gitlab.com
AuthenticationHandler has been replaced by the Authenticator interface. New authentication handlers should implement Authenticator. This interface will be removed in a future release. @Deprecated public interface AuthenticationHandler Authentication handlers are configured in precedence order. Authentication will succeed if a handler returns 'allow' and all higher precedence handlers (earlier in the order) return 'abstain'. Authentication will fail if a handler returns 'deny' and all higher precedence handlers return 'abstain'. If all authentication handlers return 'abstain', the request will be denied. Once the outcome is known, the server may choose not to call the remaining handlers. The special variant of AuthenticationHandler.Callback.allow(AuthenticationResult) may be used by the handler to supply the server with additional information that is used to set up the session. void authenticate(String principal, Credentials credentials, SessionDetails sessionDetails, AuthenticationHandler.Callback callback) The server calls this to authenticate new sessions, and when a client requests that the session principal be changed (for example, using Security.changePrincipal(String, Credentials, Security.ChangePrincipalCallback)). For each call to authenticate, the authentication handler should respond by calling one of the methods of the provided callback. The handler may return immediately and process the authentication request asynchronously. The client session will be blocked until a callback method is called. principal - the requested principal name, or Session.ANONYMOUS if none was supplied credentials - authenticating the principal; for example, a password sessionDetails - the information the server has about the client callback - single-use callback
https://docs.pushtechnology.com/docs/6.5.3/java/com/pushtechnology/diffusion/client/security/authentication/AuthenticationHandler.html
2021-05-06T06:34:27
CC-MAIN-2021-21
1620243988741.20
[]
docs.pushtechnology.com
In addition to the several built-in shortcuts and key bindings that you can use to speed up the rate at which you create your content in Sequencer, there are also features related to Sequencer that you can use to generate content for your cinematics such as the Sequence Recorder. This page will highlight different features and tools that are related to Sequencer as well as some of the tips, tricks and workflows you can use during your cinematic creation process. Features & Tools Tips & Workflows Exporting and Importing FBX files Exporting Custom Render Passes Command Line Arguments for Rendering Movies As of the release of engine version 4.20, Sequencer has undergone some refactoring in terms of how time is represented to better support filmic pipelines and contexts where frame-accuracy is of huge importance. Please see the Sequencer Time Refactor Technical Notes page for more information.
https://docs.unrealengine.com/en-US/AnimatingObjects/Sequencer/Workflow/index.html
2021-05-06T07:00:28
CC-MAIN-2021-21
1620243988741.20
[]
docs.unrealengine.com
Adding & editing users¶ You can control user access under People in the Admin menu. The People page allows you to list all users of your site and perform actions (upgrade, delete, edit) on them. Adding a new user¶ Go to People in the Admin menu Click Add user at the top of the overlay For users that are allowed to log in, you need to check the Allow user to login? box at the bottom Complete all the relevant boxes with example data (compulsory fields are marked with a red asterisk) Changing existing user roles¶ - Go to People in the Admin menu - Check the role of the user(s) you want to edit - Click the check box to the left of their username and add or remove the required role, then click Update
https://scratchpads.readthedocs.io/en/latest/users/adding-and-editing.html
2021-05-06T06:38:15
CC-MAIN-2021-21
1620243988741.20
[]
scratchpads.readthedocs.io
This exercise demonstrates how to remove visuals and dashboards from an app. In the Selected Visuals menu across the top, click Remove from App. A brief Saving Changes message appears on the screen. When the operation completes and the Visuals interface refreshes, note that those visuals no longer appear, and that the item count for the Documentation Collateral app decreased by two, from 58 to 56.
http://docs.arcadiadata.com/4.1.0.0/pages/topics/apps-remove-vises.html
2018-11-12T22:54:10
CC-MAIN-2018-47
1542039741151.56
[]
docs.arcadiadata.com
Using Public Data Sets Contents Public Data Set Concepts Previously, large data sets such as the mapping of the Human Genome and the US Census data required hours or days to locate, download, customize, and analyze. Now, anyone can access these data sets from an EC2 instance and start computing on the data within minutes. You can also leverage the entire AWS ecosystem and easily collaborate with other AWS users. To learn more, go to the AWS Public Datasets page. Available Public Data Sets. Finding Public Data Sets Before you can use a public data set, you must locate the data set and determine which format the data set is hosted in. The data sets are available in two possible formats: Amazon EBS snapshots or Amazon S3 buckets. To find a public data set and determine its format Go to the AWS Public Datasets page to see a listing of all available public data sets. You can also enter a search phrase on this page to query the available public data set listings. Click the name of a data set to see its detail page. On the data set detail page, look for a snapshot ID listing to identify an Amazon EBS formatted data set or an Amazon S3 URL. Data sets that are in snapshot format are used to create new EBS volumes that you attach to an EC2 instance. For more information, see Creating a Public Data Set Volume from a Snapshot. For data sets that are in Amazon S3 format, you can use the AWS SDKs or the HTTP query API to access the information, or you can use the AWS CLI to copy or synchronize the data to and from your instance. For more information, see Amazon S3 and Amazon EC2. You can also use Amazon EMR to analyze and work with public data sets. For more information, see What is Amazon EMR?. Creating a Public Data Set Volume from a Snapshot To use a public data set that is in snapshot format, you create a new volume, specifying the snapshot ID of the public data set. You can create your new volume using the AWS Management Console as follows. If you prefer, you can use the create-volume AWS CLI command instead. To create a public data set volume from a snapshot Open the Amazon EC2 console at. From the navigation bar, select the region that your data set snapshot is located in. If you need to create this volume in a different region, you can copy the snapshot to that region and then use it to create a volume in that region. For more information, see Copying an Amazon EBS Snapshot. In the navigation pane, choose ELASTIC BLOCK STORE, Volumes. Choose Create Volume. For Volume Type, choose a volume type. For more information, see Amazon EBS Volume Types. For Snapshot, start typing the ID or description of the snapshot that has the data set, and choose it from the list. If the snapshot that you are expecting to see does not appear, you might not have selected the region it is in. If the data set you identified in Finding Public Data Sets does not specify a region on its detail page, it is likely contained in the us-east-1 (US East (N. Virginia)) region. For Size (GiB), type the size of the volume, or verify that the default size of the snapshot is adequate. Note If you specify both a volume size and a snapshot, the size must be equal to or greater than the snapshot size. When you select a volume type and a snapshot, the minimum and maximum sizes for the volume are shown next to Size. For Availability Zone, choose the Availability Zone in which to create the volume; EBS volumes can only be attached to instances in the same Availability Zone. (Optional) Choose Create additional tags to add tags to the volume. For each tag, provide a tag key and a tag value. Choose Create Volume.
Attaching and Mounting the Public Data Set Volume After you have created your new data set volume, you need to attach it to an EC2 instance to access the data (this instance must also be in the same Availability Zone as the new volume). For more information, see Attaching an Amazon EBS Volume to an Instance. After you have attached the volume to an instance, you need to mount the volume on the instance. For more information, see Making an Amazon EBS Volume Available for Use on Windows. If you restored a snapshot to a larger volume than the default for that snapshot, you must extend the file system on the volume to take advantage of the extra space. For more information, see Modifying the Size, IOPS, or Type of an EBS Volume on Windows.
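For data sets hosted in Amazon S3, the page notes that you can use the AWS SDKs to access the information. As one hedged illustration (the bucket name and key prefix are placeholders, not a real public data set), a minimal boto3 sketch might look like this:

```python
# Illustrative sketch: listing and downloading objects from an S3-hosted
# public data set using boto3. Bucket, prefix, and key are placeholder values.
import boto3

s3 = boto3.client("s3")
bucket = "example-public-dataset-bucket"   # placeholder bucket name
prefix = "some/prefix/"                    # placeholder key prefix

response = s3.list_objects_v2(Bucket=bucket, Prefix=prefix, MaxKeys=10)
for obj in response.get("Contents", []):
    print(obj["Key"], obj["Size"])

# Download a single object to the local instance (key is a placeholder).
s3.download_file(bucket, "some/prefix/data.csv", "data.csv")
```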
https://docs.aws.amazon.com/AWSEC2/latest/WindowsGuide/using-public-data-sets.html
2018-11-12T22:42:48
CC-MAIN-2018-47
1542039741151.56
[]
docs.aws.amazon.com
General introduction to FreeSAS¶ FreeSAS is a Python package with small-angle scattering tools under an MIT-type license. Introduction¶ FreeSAS was written as a re-implementation of some ATSAS parts in Python for better integration into the BM29 ESRF beamline processing pipelines. It provides functions to read SAS data from pdb files and to handle them. Parts of the code are written in Cython and parallelized to speed up the execution. FreeSAS code is available on GitHub. Installation¶ As with most Python packages: git clone pip install -r requirements pip install . Usage¶ Freesas as a library¶ Here are some basic ways to use FreeSAS as a library. Some abbreviations: - DA = Dummy Atom - DAM = Dummy Atoms Model - NSD = Normalized Spatial Discrepancy The SASModel class¶ This class allows you to manipulate a DAM and to perform some operations on it, as presented here. First, the method SASModel.read() can be used to read a pdb file containing the data of a DAM: from freesas.model import SASModel model1 = SASModel() #create SASModel class object model1.read("dammif-01.pdb") #read the pdb file #these 2 lines can be replaced by model1 = SASModel("dammif-01.pdb") print(model1.header) #print pdb file content print(model1.atoms) #print dummy atoms coordinates print(model1.rfactor) #print R-factor of the DAM Some information is extracted from the model's atom coordinates: - fineness : average distance between a DA and its first neighbours - radius of gyration - Dmax : DAM diameter, maximal distance between 2 DA of the DAM - center of mass - inertia tensor - canonical parameters : 3 parameters of translation and 3 Euler angles, defining the transformation to be applied to the DAM to put it in its canonical position (center of mass at the origin, inertia axes aligned with the coordinate axes) print(model1.fineness) #print the DAM fineness print(model1.Rg) #print the DAM radius of gyration print(model1.Dmax) #print the DAM diameter model1.centroid() #calculate the DAM center of mass print(model1.com) model1.inertiatensor() #calculate the DAM inertia tensor print(model1.inertensor) model1.canonical_parameters() #calculate the DAM canonical_parameters print(model1.can_param) Other methods of the class for transformations and NSD calculation: param1 = model1.can_param #parameters for the transformation symmetry = [1,1,1] #symmetry for the transformation model1.transform(param1, symmetry) #return DAM coordinates after the transformation model2 = SASModel("dammif-02.pdb") #create a second SASModel model2.canonical_parameters() atoms1 = model1.atoms atoms2 = model2.atoms model1.dist(model2, atoms1, atoms2) #calculate the NSD between models param2 = model2.can_param symmetry = [1,1,1] model1.dist_after_movement(param2, model2, symmetry) #calculate the NSD, first model on its canonical position, second #model after a transformation with param2 and symmetry The AlignModels class¶ This other class contains a lot of tools to align several DAMs, using the SASModel class presented before. The first thing to do is to select the pdb files you are interested in and to create the corresponding SASModels using the class methods, as follows: from freesas.align import AlignModels inputfiles = ["dammif-01.pdb", "dammif-02.pdb", "dammif-03.pdb", ...] align = AlignModels(inputfiles) #create the class align.assign_models() #create the SASModels print(align.models) #SASModels ready to be aligned Next, the different NSDs between the computed models can be calculated and saved as a 2D array.
But first it is necessary to specify which models are valid and which ones need to be discarded: import numpy align.validmodels = numpy.ones((len(align.inputfiles))) #here we keep all models as valid ones align.makeNSDarray() #create the NSD table align.plotNSDarray() #display the table as png file align.find_reference() #select the reference model align.alignment_reference() #align models with the reference SuPyComb script¶ $ supycomb.py --help Traceback (most recent call last): File "/home/docs/checkouts/readthedocs.org/user_builds/freesas/checkouts/latest/scripts/supycomb.py", line 9, in <module> from freesas.align import InputModels, AlignModels ImportError: No module named freesas.align Project¶ FreeSAS contains a set of non-regression tests: python setup.py build test or, to test the installed version: python run-test.py
https://freesas.readthedocs.io/en/latest/FreeSAS.html
2018-11-12T23:02:39
CC-MAIN-2018-47
1542039741151.56
[]
freesas.readthedocs.io
Have you ever thought about the importance of being constantly up to date on your competitors’ prices? This plugin allows you to stay well-informed with minimal effort. You will be notified of a possibly cheaper competitor’s price by the users themselves. After receiving the message, you can grant them the purchase at the suggested price, avoiding the loss of a sale.
https://docs.yithemes.com/yith-best-price-guaranteed-for-woocommerce/
2018-11-12T22:10:19
CC-MAIN-2018-47
1542039741151.56
[]
docs.yithemes.com
Install and Configure the AWS CloudHSM Client (Linux) To interact with the HSM in your AWS CloudHSM cluster, you need the AWS CloudHSM client software for Linux. You should install it on the Linux EC2 client instance that you created previously. You can also install a client if you are using Windows. For more information, see Install and Configure the AWS CloudHSM Client (Windows). Install the AWS CloudHSM Client and Command Line Tools Complete the steps in the following procedure to install the AWS CloudHSM client and command line tools. To install (or update) the client and command line tools Connect to your client instance. Use the following commands to download and then install the client and command line tools. - Amazon Linux wget sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm - Amazon Linux 2 wget sudo yum install -y ./cloudhsm-client-latest.el7.x86_64.rpm - CentOS 6 sudo yum install wget wget sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm - CentOS 7 sudo yum install wget wget sudo yum install -y ./cloudhsm-client-latest.el7.x86_64.rpm - RHEL 6 wget sudo yum install -y ./cloudhsm-client-latest.el6.x86_64.rpm - RHEL 7 sudo yum install wget wget sudo yum install -y ./cloudhsm-client-latest.el7.x86_64.rpm - Ubuntu 16.04 LTS wget sudo dpkg -i cloudhsm-client_latest_amd64.deb Edit the Client Configuration Before you can use the AWS CloudHSM client to connect to your cluster, you must edit the client configuration. To edit the client configuration Copy your issuing certificate—the one that you used to sign the cluster's certificate—to the following location on the client instance: /opt/cloudhsm/etc/customerCA.crt. You need AWS account root user permissions on the client instance to copy your certificate to this location. Use the following configure command to update the configuration files for the AWS CloudHSM client and command line tools, specifying the IP address of the HSM in your cluster. To get the HSM's IP address, view your cluster in the AWS CloudHSM console, or run the describe-clusters AWS CLI command. In the command's output, the HSM's IP address is the value of the EniIp field. If you have more than one HSM, choose the IP address for any of the HSMs; it doesn't matter which one. sudo /opt/cloudhsm/bin/configure -a <IP address> Updating server config in /opt/cloudhsm/etc/cloudhsm_client.cfg Updating server config in /opt/cloudhsm/etc/cloudhsm_mgmt_util.cfg Go to Activate the Cluster.
https://docs.aws.amazon.com/cloudhsm/latest/userguide/install-and-configure-client-linux.html
2018-11-12T22:44:43
CC-MAIN-2018-47
1542039741151.56
[]
docs.aws.amazon.com
This page lists all the products & services you have configured in your WHMCS installation. You will find all services in their respective tabs. It gives you an overview of the services you have without leaving WordPress. If you find this page empty, your configuration is not complete. - ID: the WHMCS product ID. It may come in handy if you are inserting shortcodes manually for some reason. - Tagline: the second title for your product, used in some of the pricing table templates; it can also be used in custom templates that you make. - Description Override: if you want to override the product description from WHMCS, use this field. - Description Append: use this to append content to the description from WHMCS. Info: field titles marked with the language icon are multilingual.
http://docs.whmpress.com/docs/whmpress/configuration-admin-settings/products-services/
2019-10-14T04:47:31
CC-MAIN-2019-43
1570986649035.4
[]
docs.whmpress.com
- Image Family: Oracle Linux 7.x - Operating System: Oracle Linux - Release Date: Jan. 10, 2018 Release Notes: Includes patches for the following ELSAs that address CVE-2017-5754, CVE-2017-5715, and CVE-2017-5753: ELBA-2018-4008 - microcode_ctl bug fix update ELSA-2018-4006 - Unbreakable Enterprise kernel security update ELSA-2018-4004 - Unbreakable Enterprise kernel security update See Doc 2348448.1 for more information on Oracle Linux patches for these issues. Includes a fix to address the issue described in ELBA-2018-4005 - rhn-client-tools bug fix update. Includes a fix to address a GRUB boot order issue that could cause the image to boot by default into an older version of the kernel after updates. This image supports X7 hosts.
https://docs.cloud.oracle.com/iaas/images/image/f12b5115-fa67-420c-916d-de2ed963ebac/
2019-10-14T04:50:30
CC-MAIN-2019-43
1570986649035.4
[]
docs.cloud.oracle.com
Privacy Guideline If you have trouble accessing this document, please contact the Department to request a copy in a format you can use. This Guideline assists jobactive providers, Transition to Work Project providers, Employability Skills Training providers, Empowering Youth Initiatives providers, Time to Work Employment Service providers, ParentsNext providers, Transitions Services Panel (TSP) Members and Career Transition Assistance providers with notifying and obtaining consent from individuals for collecting, using and disclosing ‘personal information’, including police, Working with Children and Working with Vulnerable People checks. All agencies, including the Department of Jobs and Small Business (Department), providers, and Host Organisations have obligations under the Privacy Act 1988 (Cth) (Privacy Act) to ensure that ‘personal information’ (including sensitive information) is collected, held, used and disclosed in accordance with that Act. Information that a provider holds about an individual will be ‘personal information’, even if it is only a limited amount of information. The information or opinion does not have to be true and does not have to be recorded in material form. This includes information contained in paper files or computer systems and in documents provided by the individual, including résumés and application forms. A provider will also hold other ‘personal information’ about employers or persons associated with a Host Organisation. Providers may handle ‘sensitive information’ including: - information about an individual’s racial or ethnic origin, such as whether they identify as being Aboriginal or Torres Strait Islander - information about an individual’s criminal convictions, such as information on any time served in prison, or - health information about individuals, such as information about medical issues. With limited exceptions under the Privacy Act, an individual’s consent is required for the collection and subsequent use and disclosure of ‘sensitive information’. Last modified on Monday 1 July 2019
https://docs.employment.gov.au/documents/privacy-guideline
2019-10-14T04:12:02
CC-MAIN-2019-43
1570986649035.4
[]
docs.employment.gov.au
Release Notes - Version 0.7.0¶ 🚀 Welcome to hummingbot version 0.7.0! In this release, we focused on improving core stability and fixing bugs. In addition, we're excited to announce a new market making strategy and our first 3rd party exchange connector! Please see below for more details. 🤖 New strategy: pure market making¶ We have added the first version of a pure market making strategy, so all three strategies described in our whitepaper have now been released into production. Please note that this initial release contains a naive implementation that simply sets and maintains a constant spread around a trading pair's mid price. Note that this is intended to be a basic template that users can test and customize. Running the strategy with substantial capital without additional modifications will likely lose money. Over the next few releases, we will add additional functionality that allows users of this strategy to incorporate important factors such as inventory level and market volatility. 🔗 New connector: Bamboo Relay¶ Thanks to Hummingbot user Joshua | Bamboo Relay, we have our first community-contributed 3rd party exchange connector! Bamboo Relay is a 0x open order book relayer that offers active trading pairs in many ERC-20 Ethereum tokens, including DAI, MKR, and BAT. This connector is now available as part of the core Hummingbot codebase, and all strategies should work with it. Since this is a new connector, users may encounter bugs or unexpected behavior. Please report any issues on Github or the #support channel in our Discord. 📜 Improved logging¶ In order to make log messages more actionable and relevant to users, we have made significant improvements in Hummingbot's logging infrastructure. Stack traces and detailed error messages are now confined to the log file only. The log pane in the Hummingbot client will still mention errors, but the majority of the messages are related to Hummingbot's trading activity. 🔍 Improved discovery strategy¶ We made a number of improvements to the discovery strategy to make it easier to use. For example, users can now automatically scan for arbitrage opportunities across all possible trading pairs, though it still takes a long time to process. We will continue to improve this function in order to help users identify the best trading pairs and markets in which to run Hummingbot. ⚙️ Started configurability initiative¶ We have started an initiative to make the Hummingbot codebase more configurable and accessible to developers, because we want to make it easy for users to create new strategies, add connectors, and contribute to our community. In this release, we have re-organized the codebase file structure and added more comments to the strategy files. In the upcoming releases, users can expect a lot more documentation on code layout, simpler strategies, tutorials, and other resources for developers interested in hacking on Hummingbot. 🐞 Bug fixes and miscellaneous updates¶ Our top priority currently is to improve Hummingbot's core stability. To that end, we made the following fixes in the last release and will continue to make more stability fixes in the coming release.
- Hummingbot now only cancels its own orders on Coinbase Pro and not any other orders placed by the same user - Fixed a bug that incorrectly displayed the profitability in the status output for the cross-exchange market making strategy - Fixed a bug that resulted in division-by-zero errors in Binance - Fixed a bug that caused unnecessary "API call error" log messages - Fixed a bug that caused inadvertent "Order book empty" log messages - Fixed a bug that prevented users from exiting Hummingbot when the Coinbase Pro API key is invalid - Added the Hummingbot version to log files
https://docs.hummingbot.io/release-notes/0.7.0/
2019-10-14T05:07:52
CC-MAIN-2019-43
1570986649035.4
[array(['/assets/img/pure_mm.png', 'pure market making'], dtype=object) array(['/assets/img/bamboo_relay.png', 'bamboo relay'], dtype=object)]
docs.hummingbot.io
SQL Server Import and Export Wizard The SQL Server Import and Export Wizard offers the simplest method to create an Integration Services package that copies data from a source to a destination. Note SQL Server Data Tools (SSDT) during setup. You can start the SQL Server Import and Export Wizard from the Start menu, from SQL Server Management Studio, from SQL Server Data Tools (SSDT), or at the command prompt. For more information, see Access Control for Sensitive Data in Packages. After the SQL Server Import and Export Wizard has created the package and copied the data, you can use the SSIS Designer to open and change the saved package by adding tasks, transformations, and event-driven logic. Note In SQL Server Express, the option to save the package created by the wizard is not available. If you start the SQL Server Import and Export Wizard from an Integration Services project in SQL Server Data Tools (SSDT), see Run the SQL Server Import and Export Wizard. Permissions Required by the Import and Export Wizard. Mapping Data Types in the Import and Export Wizard. Note If you edit an existing mapping file, or add a new mapping file to the folder, you must close and reopen the SQL Server Import and Export Wizard or SQL Server Data Tools (SSDT) for the new or changed files to be recognized. External Resources Video, Exporting SQL Server Data to Excel (SQL Server Video), on technet.microsoft.com CodePlex sample, Exporting from ODBC to a Flat File Using a Wizard Tutorial: Lesson Packages, on msftisprodsamples.codeplex.com
https://docs.microsoft.com/en-us/sql/integration-services/import-export-data/import-and-export-data-with-the-sql-server-import-and-export-wizard?view=sql-server-2014
2019-10-14T03:47:31
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Define WMI method invocation As part of creating or modifying a discovery pattern, you can use the WMI method invocation operation to execute a method selected from a table returned by a WMI query. Before you begin Navigate to the relevant pattern step: On the pattern form, select the relevant identification section for Discovery. Alternatively, select the relevant identification or connection section for Service Mapping. Select the relevant pattern step or click to add a step. Make sure that the step containing the WMI method invocation operation follows the step with the WMI query operation. The WMI query results in a table which you must use as the source table for the WMI method invocation operation. Basic knowledge of programming is desirable. Role required: pd_admin About this task This operation is relevant only for Windows. Procedure Select WMI method invocation from the Operation list. Fill in the fields, as appropriate:
- Enter Source Table: Specify the source table name. The source table must be the result of the WMI Query operation you perform before this step. You can enter a value from a specific field in a table as described in Enter values and variables in patterns.
- Enter WMI Method: Select the desired method. In debug mode, click Get methods and select the method from the list. If not in debug mode, enter the name of the method as a string, for example, "RunDetails".
- Enter Target Table: Specify the target table name.
- Enter Target Column: Specify the name of the column to contain the results of the method invocation.
If in Debug mode, test the step by clicking Test and checking that the operation brings the result you expected. Example This operation is used in:
- Hierarchy: Applications > Directory Services
- CI Type: IIFP
- Pattern: IIFP On Windows
- Pattern Section: AD Home Forest connectivity stage-wmi
- Step number and Name: 2. invoke_wmi_method Run Details
During discovery of Identity Integration Feature Pack (IIFP), use the RunDetails WMI method to extract information on details from the ManagementAgents table. You discover this table earlier using the WMI query operation. In this case, the result is saved in the same table in the column named "details". What to do next Continue editing the pattern by adding a new step and defining its operation, or finalize the pattern.
https://docs.servicenow.com/bundle/madrid-it-operations-management/page/product/service-mapping/task/t_WMIMethodInvocPatDef.html
2019-10-14T03:46:21
CC-MAIN-2019-43
1570986649035.4
[]
docs.servicenow.com
You can play VRChat using a keyboard, mouse, and monitor! No headset required. Movement is handled via the standard FPS "WASD" setup, with your mouse adjusting your view position. Your head will point in the direction you're looking. It is not possible to move your arms while in Desktop mode currently.
- W: Moves player forward
- A: Moves player left
- S: Moves player back
- D: Moves player right
- Space: Makes player jump (if it is enabled in the room)
- Z: Crawl/Go Prone
- C: Crouch
- Escape: Opens quick menu
- Shift: Makes player run
- Left Click: Interact / Pickup
- Right Click (Hold): Shows interaction mouse
- Right Click: Drop pickup
- Right Click + F (Hold): Allows you to throw held pickups
- V (Hold): (If using push to talk) Enables microphone as long as it's held
- V: (If using toggle talk) Enables / Disables microphone
- Control + N: Toggles visibility of player nametags
- Control + H: Toggles visibility of the HUD (microphone and notification icons)
- Control + \: Local switch to default robot avatar (good for when unable to access menu or see out of avatar)
- RShift + Backtick + 1 †: Toggles visibility of trigger debug menu
- RShift + Backtick + 2 †: Toggles visibility of information debug menu
- RShift + Backtick + 3 †: Toggles visibility of console debug menu
- RShift + Backtick + 4 †: Toggles visibility of networking debug menu
- RShift + Backtick + 5 †: Toggles visibility of networking graph debug menu
- F12: Takes screenshot, saved to the folder VRChat is installed in
- Shift + F1*: Hand gesture - Idle
- Shift + F2*: Hand gesture - Fist
- Shift + F3*: Hand gesture - Open Hand
- Shift + F4*: Hand gesture - Point
- Shift + F5*: Hand gesture - Victory (Peace)
- Shift + F6*: Hand gesture - Rock 'n Roll
- Shift + F7*: Hand gesture - Finger-gun
- Shift + F8*: Hand gesture - Thumbs up
* Use left shift to control your left hand, use right shift to control your right hand.
† RShift + Backtick can be remapped in the launch settings, available by holding Shift while launching VRChat
https://docs.vrchat.com/docs/keyboard-and-mouse
2019-10-14T04:18:44
CC-MAIN-2019-43
1570986649035.4
[]
docs.vrchat.com
User Guide Welcome to the User Guide for WANdisco's WD Fusion. 1. What is WD Fusion? WD Fusion is a software application that allows Hadoop deployments to replicate HDFS data between Hadoop clusters that are running different, even incompatible versions of Hadoop. It is built on WANdisco's patented active-active replication technology, delivering single-copy consistent HDFS data replicated between far-flung data centers. For more information read the Glossary and WD Fusion datasheet (PDF). 2. Using this guide This document describes how to install and administer WD Fusion as part of a multi data center Hadoop deployment. - Installation Guide - This section describes the evaluation and review process, through to the actual software installation. Use the deployment guide for getting set up. If you need to make changes on your platform, recheck the Deployment Checklist to ensure that you're not going to impact Hadoop data replication. - Admin Guide - This section describes all the common actions and procedures that are required as part of managing WD Fusion in a deployment. It covers how to work with the UI's monitoring and management tools. Use the Admin Guide if you need to know how to do something. - Reference Guide - This section describes the UI, systematically covering all screens and providing an explanation for what everything does. Use the Reference Guide if you need to check what something does on the UI, or gain a better understanding of Non-Stop NameNode's underlying architecture. 3. Get support See our online Knowledgebase, which contains updates and more information. We use terms like node and membership, and define them in the Glossary. This contains some industry terms, as well as WANdisco product terms. If you need more help, raise a case on our support website. 4. Give feedback If you find an error or if you think some information needs improving, raise a case on our support website or email [email protected]. 5. Note the following In this document we highlight types of information using the following boxes: Alert The alert symbol highlights important information. Caution The STOP symbol cautions you against doing something. Tips Tips are principles or practices that you'll benefit from knowing or using. Knowledgebase The KB symbol shows where you can find more information in our online Knowledgebase. 6. Release Notes View the Release Notes. These provide the latest information about the current release, including lists of new functionality, fixes and known issues.
https://docs.wandisco.com/bigdata/wdfusion/archive/2.6/
2019-10-14T03:00:10
CC-MAIN-2019-43
1570986649035.4
[]
docs.wandisco.com
Friday, October 26, 2018
Tips for Your First Kubecon Presentation - Part 2
Author: Michael Gasch (VMware)
Hello and welcome back to the second and final part about tips for KubeCon first-time speakers. If you missed the last post, please give it a read here.
The Day before the Show
Tip #13 - Get enough sleep. I don’t know about you, but when I don’t get enough sleep (especially when beer is in the game), the next day my brain power is around 80% at best. It’s very easy to get distracted at KubeCon (in a positive sense). “Let’s have dinner tonight and chat about XYZ”. Get some food, beer or wine because you’re so excited, and all the good resolutions you had set for the day before your presentation are forgotten :) OK, I’m slightly exaggerating here. But don’t underestimate the dynamics of this conference, the amazing people you meet, the inspiring talks and of course the conference party. Be disciplined, at least that one day. There’s enough time to party after your great presentation!
Tip #14 - A final dry-run. Usually, I do a final dry-run of my presentation the day before the talk. This helps me to recall the first few sentences I want to say so I keep the flow no matter what happens when the red recording light goes on. Especially when your talk is later during the conference, there’s so much new stuff your brain has to digest which could “overwrite” the very important parts of your presentation. I think you know what I mean. So, if you’re like me, a final dry-run is never a bad idea (also to check equipment, demos, etc.).
Tip #15 - Promote your session, again. Send out a final reminder on your social media channels so your followers (and KubeCon attendees) will remember to attend your session (again, KubeCon is busy and it’s hard to keep up with all the talks you wanted to attend). I was surprised to see my attendee list jumping from ~80 at the beginning of the week to >300 the day before the talk. The number kept rising even an hour before going on stage. So don’t worry about the stats too early.
Tip #16 - Ask your idols to attend. Steve Wong, a colleague of mine whom I really admire for his knowledge and passion, gave me a great piece of advice: reach out to the people you always wanted to attend your talk and kindly ask them to come along. So I texted the one and only Tim Hockin. Even though these well-respected community leaders are super busy and thus usually cannot attend many talks during the conference, the worst thing that can happen is that they cannot show up and will let you know. (See the end of this post to find out whether or not I was lucky :))
The show is on!
Your day has come and it doesn’t make any sense to make big changes to your presentation now! Actually, that’s a very bad idea unless you’re an expert and your heartbeat at rest is around 40 BPM. (But even then many things can go horribly wrong). So, without further ado, here are my final tips for you.
Tip #17 - Arrive ahead of time. Set an alert (or two) so you don’t miss your presentation, e.g. because somebody caught you on the way to the room or you got a call/were pulled into a meeting. It’s a good idea to find out where your room is at least a few hours before your talk. These conference buildings can be very large. Also look for last minute schedule (time/room) changes, just because you never know…
Tip #18 - Ask a friend to take photos. My dear colleague Bjoern, without me asking for it, took a lot of pictures and watched the audience during the talk. This was really helpful, not just because I now have some nice shots that will always remind me of this great day. He also gave me honest feedback, e.g. what people said, whether they liked it or what I could have done better.
Tip #19 - Restroom. If you’re like me, when I’m nervous I could run every 15 minutes. The last thing you want is that you are fully cabled (microphone), everything is set up, and two minutes before your presentation you feel like “oh oh”…nothing more to say here ;)
Tip #20 - The audience. I had many examples and references from other Kubernetes users (and their postmortem stories) in my talk. So I tried to give them credit, and actually some of them were in the room and really liked that I did so. It gave them (and hopefully the rest of the audience as well) the feeling that I did not invent the wheel and we are all in the same boat. Also feel free to ask some questions in the beginning, e.g. to get a better feeling about who is attending your talk, or who would consider themselves an expert in the area of what you are talking about, etc.
Tip #21 - Repeat questions. Always. Because of the time constraints, questions should be asked at the end of your presentation (unless you are giving a community meeting or panel, of course). Always (always!) repeat the questions at the end. Sometimes people will not use the microphone. Not only is this hard for the people in the back, but it also won’t be captured on the recording. I am sure you have also had that moment watching a recording and not getting what is being asked/discussed because the question was not captured.
Tip #22 - Feedback. Don’t forget to ask the audience to fill out the survey. They’re not always enforced/mandatory during conferences (especially not at KubeCon), so it’s easy to forget to give the speaker feedback. Feedback is super critical (also for the committee), as sometimes people won’t tell you directly but rather write down their thoughts. Also, you might want to block your calendar to leave some time after the presentation for follow-up questions, so you are not in a hurry to catch your next meeting/session.
Tip #23 - Invite your audience. No, I don’t mean buying a round of beer for everyone attending your talk (I mean, you could). But you might let them know, at the end of your presentation, that you would like to hang out, have dinner, etc. A great opportunity to reflect and geek out with like-minded friends.
Final Tip - Your Voice matters. Don’t underestimate the power of giving a talk at a conference. In my case I was lucky that the Zalando crew was in the room and took this talk as an opportunity for an ad hoc meeting after the conference. This drove an important performance fix forward, which eventually was merged (kudos to the Zalando team again!). Embrace the opportunity to give a talk at a conference, take it seriously, be professional and make the best use of your time. But I’m sure I don’t have to tell you that ;)
Now it’s on you :) I hope some of these tips are useful for you as well. And I wish you all the best for your upcoming talk!!! Believing in and being yourself is key to success. And perhaps your Kubernetes idol is in the room and has some nice words for you after your presentation!
Besides my fantastic reviewers and the speaker support team already mentioned above, I also would like to thank the people who supported me along this KubeCon journey: Bjoern, Timo, Emad and Steve!
Turns out a celebrity guest was in the audience pic.twitter.com/GhoAavnZ8Z — Steve Wong (@cantbewong) May 4, 2018
https://v1-12.docs.kubernetes.io/blog/page/3/
2019-10-14T04:28:11
CC-MAIN-2019-43
1570986649035.4
[]
v1-12.docs.kubernetes.io
Clamp in ONAP Architecture
CLAMP is a platform for designing and managing control loops. It is used to visualize a control loop, configure it with specific parameters for a particular network service, and then deploy and undeploy it. Once deployed, the user can also update the loop with new parameters during runtime, as well as suspend and restart it.
It interacts with other systems to deploy and execute the control loop. For example, it extracts the control loop blueprint and Policy Model (Model Driven Control Loop) from the CSAR distributed by SDC/DCAE-DS. It requests from DCAE the instantiation of microservices to manage the control loop flow. Furthermore, it creates and updates multiple policies (for DCAE mS configuration and actual Control Operations) in the Policy Engine that define the closed loop flow.
The ONAP CLAMP platform abstracts the details of these systems under the concept of a control loop model. The design of a control loop and its management is represented by a workflow in which all relevant system interactions take place. This is essential for a self-service model of creating and managing control loops, where no low-level user interaction with other components is required.
CLAMP also lets you visualize control loop metrics through a dashboard, in order to help operations understand how and when a control loop is triggered and takes action.
At a higher level, CLAMP is about supporting and managing the broad operational life cycle of VNFs/VMs and ultimately of ONAP components themselves. It will offer the ability to design, test, deploy and update control loop automation - both closed and open. Automating these functions would represent a significant saving on operational costs compared to traditional methods.
https://docs.onap.org/en/dublin/submodules/clamp.git/docs/architecture.html
2019-10-14T04:04:47
CC-MAIN-2019-43
1570986649035.4
[array(['../../../_images/distdepl.png', 'clamp-flow'], dtype=object) array(['../../../_images/monitoring.png', 'dashboard-flow'], dtype=object) array(['../../../_images/ONAP-closedloop1.png', 'closed-loop'], dtype=object) ]
docs.onap.org
To fill dhtmlxForm with data, you can use datasource field binding. There are two ways of doing this: bind the form to a server-side datasource through dhtmlxConnector, or load the data from a custom feed with the load() method. To save data back to the server, you can use dataProcessor with the save() method, the send() method, or a standard HTML form submit.
dhtmlxConnector allows binding dhtmlxForm fields to a server-side datasource with minimal effort. For details, see the dhtmlxConnector documentation; here we'll show you the basics. To use dhtmlxConnector you need to follow a few steps on the client side and create a server-side file. Here is the code sample (PHP) you could use to connect the form with the "Users" table on the server side.
require_once('form_connector.php');
$conn = new PDO("mysql:dbname=db_name;host=localhost","root","");
//create connector for dhtmlxForm using connection to mySQL server
$form = new FormConnector($conn);
//table name, id field name, fields to use to fill the form
$form->render_table("Users","user_id","f_name,l_name,email");
var myForm = new dhtmlXForm(containerID, structureAr); // create the form (structure below)
var dp = new dataProcessor("php/user_details.php"); // instantiate the dataprocessor
dp.init(myForm); // link the form to the dataprocessor
...
// fill the form with data, where userID is the ID of the record you want to use to fill the form
myForm.load("php/user_details.php?id="+userID);
...
// create an event handler to save data on button click
myForm.attachEvent("onButtonClick", function(buttonID){
    if(buttonID=="my_button"){
        myForm.save(); // no params needed; it uses the url that you passed to the dataprocessor
    }
});
Note: Client-side dhtmlxForm validation will run automatically when the save() method is called. If it fails, data will not be sent to the server.
Form elements on the client side should have 'name' attributes with the same names as the corresponding database table field names (or aliases) returned by the connector (see above):
var structureAr = [
    {type: "input", name: "f_name", label: "First name"},
    {type: "input", name: "l_name", label: "Last Name"},
    {type: "input", name: "email", label: "Email"},
    {type: "button", name: "my_button", value: "Save"}
];
Instead of dhtmlxConnector, you can load data from any custom feed using the same load() method. The requirement is that feed elements have the same names as the corresponding form fields.
var myForm = new dhtmlXForm(containerID, structureAr);
myForm.load("user.xml");
user.xml file:
<data>
    <f_name>John</f_name>
    <l_name>Doe</l_name>
    <email>[email protected]</email>
</data>
The name of the top tag in the XML - <data> - can be anything.
If you would like to have your own server-side processor for form data and don't use dhtmlxConnector, you can use the send() method to send the form data to the server via an AJAX POST/GET request. By default, a POST request is used.
myForm.attachEvent("onButtonClick", function(id){
    if(id=="send_button"){
        myForm.send("php/save_form.php", "get", function(loader, response){
            alert(response);
        });
    }
});
Note: Client-side dhtmlxForm validation will run automatically when the send() method is called. If it fails, data will not be sent to the server.
Also, you can wrap the dhtmlxForm container with HTML form tags and use a standard HTML form submit to send the data to the server. Just do not forget to run dhtmlxForm validation first.
<form action="php/save_form.php" method="post" target="[some target]">
    <div id="form_container"></div>
</form>
<script>
    myForm = new dhtmlXForm("form_container", structureAr);
    myForm.attachEvent("onButtonClick", function(id){
        if(id=="my_button"){
            if(myForm.validate()) document.forms[0].submit();
        }
    });
</script>
https://docs.dhtmlx.com/form__server_side.html
2019-10-14T04:16:01
CC-MAIN-2019-43
1570986649035.4
[]
docs.dhtmlx.com
Overview
Please use the sidebar on the left to navigate through the documentation for the Men & Mice Suite Version 8.
- Release Notes
- Getting Started
- Men and Mice Suite
- User Guides
- How-To Articles
- Reference Material
https://docs.menandmice.com/display/MM820/Documentation
2019-10-14T03:37:00
CC-MAIN-2019-43
1570986649035.4
[]
docs.menandmice.com
.NET Framework Support for Windows Store Apps and Windows Runtime
The .NET Framework 4.5 supports a number of software development scenarios with the Windows Runtime. These scenarios fall into three categories:
- Developing Windows 8.x Store apps with XAML controls, as described in Roadmap for Windows Store apps using C# or Visual Basic, How tos (XAML), and .NET for Windows Store apps overview.
- Developing class libraries to use in the Windows 8.x Store apps that you create with the .NET Framework.
- Developing Windows Runtime Components, packaged in .WinMD files, which can be used by any programming language that supports the Windows Runtime. For example, see Creating Windows Runtime Components in C# and Visual Basic.
This topic outlines the support that the .NET Framework provides for all three categories, and describes the scenarios for Windows Runtime Components. The first section includes basic information about the relationship between the .NET Framework and the Windows Runtime, and explains some oddities you might encounter in the Help system and the IDE. The second section discusses scenarios for developing Windows Runtime Components.
The Basics
The .NET Framework supports the three development scenarios listed earlier by providing .NET for Windows 8.x Store apps, and by supporting the Windows Runtime itself.
The .NET Framework and Windows Runtime namespaces provide a streamlined view of the .NET Framework class libraries and include only the types and members you can use to create Windows 8.x Store apps and Windows Runtime Components. When you use Visual Studio (Visual Studio 2012 or later) to develop a Windows 8.x Store app or a Windows Runtime component, a set of reference assemblies ensures that you see only the relevant types and members. This streamlined API set is simplified further by the removal of features that are duplicated within the .NET Framework or that duplicate Windows Runtime features. For example, it contains only the generic versions of collection types, and the XML document object model is eliminated in favor of the Windows Runtime XML API set. Features that simply wrap the operating system API are also removed, because the Windows Runtime is easy to call from managed code. To read more about .NET for Windows 8.x Store apps, see the .NET for Windows Store apps overview. To read about the API selection process, see the .NET for Metro style apps entry in the .NET blog.
The Windows Runtime provides the user interface elements for building Windows 8.x Store apps, and provides access to operating system features. Like the .NET Framework, the Windows Runtime has metadata that enables the C# and Visual Basic compilers to use the Windows Runtime the way they use the .NET Framework class libraries. The .NET Framework makes it easier to use the Windows Runtime by hiding some differences:
- Some differences in programming patterns between the .NET Framework and the Windows Runtime, such as the pattern for adding and removing event handlers, are hidden. You simply use the .NET Framework pattern.
- Some differences in commonly used types (for example, primitive types and collections) are hidden. You simply use the .NET Framework type, as discussed in Differences That Are Visible in the IDE, later in this article.
Most of the time, .NET Framework support for the Windows Runtime is transparent. The next section discusses some of the apparent differences between managed code and the Windows Runtime.
The .NET Framework and the Windows Runtime Reference Documentation
The Windows Runtime and the .NET Framework documentation sets are separate. If you press F1 to display Help on a type or member, reference documentation from the appropriate set is displayed. However, if you browse through the Windows Runtime reference you might encounter examples that seem puzzling:
- Topics such as the IIterable<T> interface don't have declaration syntax for Visual Basic or C#. Instead, a note appears above the syntax section (in this case, ".NET: This interface appears as System.Collections.Generic.IEnumerable<T>"). This is because the .NET Framework and the Windows Runtime provide similar functionality with different interfaces. In addition, there are behavioral differences: IIterable has a First method instead of a GetEnumerator method to return the enumerator. Instead of forcing you to learn a different way of performing a common task, the .NET Framework supports the Windows Runtime by making your managed code appear to use the type you're familiar with. You won't see the IIterable interface in the IDE, and therefore the only way you'll encounter it in the Windows Runtime reference documentation is by browsing through that documentation directly.
- The SyndicationFeed(String, String, Uri) documentation illustrates a closely related issue: Its parameter types appear to be different for different languages. For C# and Visual Basic, the parameter types are System.String and System.Uri. Again, this is because the .NET Framework has its own String and Uri types, and for such commonly used types it doesn't make sense to force .NET Framework users to learn a different way of doing things. In the IDE, the .NET Framework hides the corresponding Windows Runtime types.
- In a few cases, such as the GridLength structure, the .NET Framework provides a type with the same name but more functionality. For example, a set of constructor and property topics are associated with GridLength, but they have syntax blocks only for Visual Basic and C# because the members are available only in managed code. In the Windows Runtime, structures have only fields. The Windows Runtime structure requires a helper class, GridLengthHelper, to provide equivalent functionality. You won't see that helper class in the IDE when you're writing managed code.
- In the IDE, Windows Runtime types appear to derive from System.Object. They appear to have members inherited from Object, such as Object.ToString. These members operate as they would if the types actually inherited from Object, and Windows Runtime types can be cast to Object. This functionality is part of the support that the .NET Framework provides for the Windows Runtime. However, if you view the types in the Windows Runtime reference documentation, no such members appear. The documentation for these apparent inherited members is provided by the System.Object reference documentation.
Differences That Are Visible in the IDE
In more advanced programming scenarios, such as using a Windows Runtime component written in C# to provide the application logic for a Windows 8.x Store app built for Windows using JavaScript, such differences are apparent in the IDE as well as in the documentation. When your component returns an IDictionary<int, string> to JavaScript, and you look at it in the JavaScript debugger, you'll see the methods of IMap<int, string> because JavaScript uses the Windows Runtime type.
Some commonly used collection types that appear differently in the two languages are shown in the following table (Windows Runtime type / corresponding .NET Framework type):
- IIterable<T> / IEnumerable<T>
- IVector<T> / IList<T>
- IVectorView<T> / IReadOnlyList<T>
- IMap<K, V> / IDictionary<TKey, TValue>
- IMapView<K, V> / IReadOnlyDictionary<TKey, TValue>
- IKeyValuePair<K, V> / KeyValuePair<TKey, TValue>
In the Windows Runtime, IMap<K, V> and IMapView<K, V> are iterated using IKeyValuePair. When you pass them to managed code, they appear as IDictionary<TKey, TValue> and IReadOnlyDictionary<TKey, TValue>, so naturally you use System.Collections.Generic.KeyValuePair<TKey, TValue> to enumerate them.
The way interfaces appear in managed code affects the way types that implement these interfaces appear. For example, the PropertySet class implements IMap<K, V>, which appears in managed code as IDictionary<TKey, TValue>. PropertySet appears as if it implemented IDictionary<TKey, TValue> instead of IMap<K, V>, so in managed code it appears to have an Add method, which behaves like the Add method on .NET Framework dictionaries. It doesn't appear to have an Insert method. For more information about using the .NET Framework to create a Windows Runtime component, and a walkthrough that shows how to use such a component with JavaScript, see Creating Windows Runtime Components in C# and Visual Basic.
Primitive Types
To enable the natural use of the Windows Runtime in managed code, .NET Framework primitive types appear instead of Windows Runtime primitive types in your code. In the .NET Framework, primitive types like the Int32 structure have many useful properties and methods, such as the Int32.TryParse method. By contrast, primitive types and structures in the Windows Runtime have only fields. When you use primitives in managed code, they appear to be .NET Framework types, and you can use the properties and methods of the .NET Framework types as you normally would. The following list provides a summary:
- For the Windows Runtime primitives Int32, Int64, Single, Double, Boolean, String (an immutable collection of Unicode characters), Enum, UInt32, UInt64, and Guid, use the type of the same name in the System namespace.
- For UInt8, use System.Byte.
- For Char16, use System.Char.
- For the IInspectable interface, use System.Object.
- For HRESULT, use a structure with one System.Int32 member.
As with interface types, the only time you might see evidence of this representation is when your .NET Framework project is a Windows Runtime component that is used by a Windows 8.x Store app built using JavaScript.
Other basic, commonly used Windows Runtime types that appear in managed code as their .NET Framework equivalents include the Windows.Foundation.DateTime structure, which appears in managed code as the System.DateTimeOffset structure, and the Windows.Foundation.TimeSpan structure, which appears as the System.TimeSpan structure.
Other Differences
In a few cases, the fact that .NET Framework types appear in your code instead of Windows Runtime types requires action on your part. For example, the Windows.Foundation.Uri class appears as System.Uri in .NET Framework code. System.Uri allows a relative URI, but Windows.Foundation.Uri requires an absolute URI. Therefore, when you pass a URI to a Windows Runtime method, you must ensure that it's absolute. See Passing a URI to the Windows Runtime.
Scenarios for Developing Windows Runtime Components
The scenarios that are supported for managed Windows Runtime Components depend on the following general principles:
Windows Runtime Components that are built using the .NET Framework have no apparent differences from other Windows Runtime libraries.
For example, if you re-implement a native Windows Runtime component by using managed code, the two components are outwardly indistinguishable. The fact that your component is written in managed code is invisible to the code that uses it, even if that code is itself managed code. However, internally, your component is true managed code and runs on the common language runtime (CLR).
Components can contain types that implement application logic, Windows 8.x Store UI controls, or both.
Note: It's good practice to separate UI elements from application logic. Also, you can't use Windows 8.x Store UI controls in a Windows 8.x Store app built for Windows using JavaScript and HTML.
A component can be a project within a Visual Studio solution for a Windows 8.x Store app, or a reusable component that you can add to multiple solutions.
Note: If your component will be used only with C# or Visual Basic, there's no reason to make it a Windows Runtime component. If you make it an ordinary .NET Framework class library instead, you don't have to restrict its public API surface to Windows Runtime types.
You can release versions of reusable components by using the Windows Runtime VersionAttribute attribute to identify which types (and which members within a type) were added in different versions.
The types in your component can derive from Windows Runtime types. Controls can derive from the primitive control types in the Windows.UI.Xaml.Controls.Primitives namespace or from more finished controls such as Button.
Important: Starting with Windows 8 and the .NET Framework 4.5, all public types in a managed Windows Runtime component must be sealed. A type in another Windows Runtime component can't derive from them. If you want to provide polymorphic behavior in your component, you can create an interface and implement it in the polymorphic types.
All parameter and return types on the public types in your component must be Windows Runtime types (including the Windows Runtime types that your component defines).
The following sections provide examples of common scenarios.
Application Logic for a Windows 8.x Store App with JavaScript
When you develop a Windows 8.x Store app for Windows using JavaScript, you might find that some parts of the application logic perform better in managed code, or are easier to develop. JavaScript can't use .NET Framework class libraries directly, but you can make the class library a .WinMD file. In this scenario, the Windows Runtime component is an integral part of the app, so it doesn't make sense to provide version attributes.
Reusable Windows 8.x Store UI Controls
You can package a set of related UI controls in a reusable Windows Runtime component. The component can be marketed on its own or used as an element in the apps you create. In this scenario, it makes sense to use the Windows Runtime VersionAttribute attribute to improve compatibility.
Reusable Application Logic from Existing .NET Framework Apps
You can package managed code from your existing desktop apps as a standalone Windows Runtime component. This enables you to use the component in Windows 8.x Store apps built using C++ or JavaScript, as well as in Windows 8.x Store apps built using C# or Visual Basic. Versioning is an option if there are multiple reuse scenarios for the code.
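To make the sealed-type and Windows Runtime-compatible-signature rules above concrete, here is a minimal C# sketch of a public type in a managed Windows Runtime component. The namespace and class names are hypothetical and not taken from this article; the point is simply that the class is sealed and exposes only types on its public surface that map to Windows Runtime types.
// Hypothetical example: a public type in a managed Windows Runtime component (.WinMD).
// The class is sealed, and its public signature uses only string and int, both of which
// map to Windows Runtime types, so it can be consumed from C#, Visual Basic, C++, or JavaScript.
namespace ContosoComponents
{
    public sealed class GreetingFormatter
    {
        // Repeats the given text the requested number of times and returns the result.
        public string Repeat(string text, int count)
        {
            var builder = new System.Text.StringBuilder();
            for (int i = 0; i < count; i++)
            {
                builder.Append(text);
            }
            return builder.ToString();
        }
    }
}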
https://docs.microsoft.com/en-us/dotnet/standard/cross-platform/support-for-windows-store-apps-and-windows-runtime?view=netframework-4.8
2019-10-14T03:23:05
CC-MAIN-2019-43
1570986649035.4
[]
docs.microsoft.com
Title Opposites Attract: The Fusion of Confucianism and the Qin Dynasty’s Legalism in the People’s Republic of China Today Document Type Thesis Abstract The aim of this research. The Qin dynasty employed the legalist governmental philosophy, which allowed one ruler to effectively control all of China. This set up the principle of a concentrated government over the vast Chinese land that has remained throughout the centuries. Aspects of the Qin Dynasty have become ingrained in the culture of China, as evident in their government structure and harsh punishment system, which will be further examined in this paper. Confucianism has impacted the societal makeup of the Chinese culture since the fifth century BCE. Many Chinese today identify themselves as “Confucian in public and Daoist in private.” This paper examines the effects of Confucianism more in depth on both the society and government in China today. The aim of the research is to examine how much Legalism and Confucianism have blended together to create China today. Recommended Citation Tompkins, Elyse, "Opposites Attract: The Fusion of Confucianism and the Qin Dynasty’s Legalism in the People’s Republic of China Today" (2011). Honors Theses. 8. Included in Ethics and Political Philosophy Commons, History Commons
https://docs.rwu.edu/honors_theses/8/
2019-10-14T03:57:29
CC-MAIN-2019-43
1570986649035.4
[]
docs.rwu.edu
Office2013 Theme
The official Q3 2013 release of UI for WPF brought a brand new external theme with a flat modern UI and three color variations – White, Light Gray, Dark Gray. The following topic explains the specifics of the theme's color variations and the available font options.
Theme Variations
The following are the supported color variations of the Office2013 theme:
White: White color theme palette. This is also the default variation of the theme.
LightGray: Light gray theme palette.
DarkGray: Dark gray theme palette.
This is how the ColorVariation looks:
/// <summary>
/// Represents theme color variations.
/// </summary>
public enum ColorVariation
{
    /// <summary>
    /// Represents Dark Grey Office2013 theme palette.
    /// </summary>
    DarkGrey,
    /// <summary>
    /// Represents Light Grey Office2013 theme palette.
    /// </summary>
    LightGrey,
    /// <summary>
    /// Represents the default White Office2013 theme palette.
    /// </summary>
    White
}
Theme Variation Changing
When using NoXAML assemblies in an application, you should merge the necessary resource dictionaries from the corresponding theme assembly (in this case, Telerik.Windows.Themes.Office2013.dll). Alternatively, you can merge the resource dictionaries as *.xaml files in your application (in case there is no reference to the theme assembly) in an appropriate place in your project (i.e. App.xaml). For more information about implicit styles, refer to this article.
The Office2013 theme offers a very intuitive and easy way to change its color variation. You can change the variation by using the LoadPreset() method of Office2013Palette in the entry point of your application. You just have to pass the desired color variation to the method as a parameter. For example, if you want to set the DarkGray color variation, you should have the following code block in your application:
public MainWindow()
{
    Office2013Palette.LoadPreset(Office2013Palette.ColorVariation.DarkGray);
    InitializeComponent();
}
The DarkGrey variation of the theme is designed with a dark background in mind, and it is recommended to use such a background in your application when choosing it.
Office2013 Palette brushes and colors
Changing Fonts
The official Q1 2015 release of Telerik UI for WPF introduced features that allow you to dynamically change the FontSize and FontFamily properties of all components in the application for the Office2013 theme. All Telerik controls use resources that are linked to one major singleton object that contains the FontSize and FontFamily properties used for the Office2013 theme. These properties are public, so you can easily modify the theme resources at one single point. The most commonly used FontSize in the theme is named FontSizeL and its default value is 15. The default FontFamily of the theme is Calibri. Please note that for complex scenarios we strongly recommend setting the font size only initially, before the application is initialized. We recommend font sizes between 11px and 19px for the FontSize property.
All the available FontSize properties and the FontFamily, as well as their default values:
Office2013Palette.Palette.FontSizeXXS = 10;
Office2013Palette.Palette.FontSizeXS = 12;
Office2013Palette.Palette.FontSizeS = 13;
Office2013Palette.Palette.FontSize = 14;
Office2013Palette.Palette.FontSizeL = 15;
Office2013Palette.Palette.FontSizeXL = 16;
Office2013Palette.Palette.FontFamily = new FontFamily("Calibri");
Office2013Palette.Palette.FontSizeXXS is used by:
- GridViewNewRow in Telerik.Windows.Controls.GridView
Office2013Palette.Palette.FontSizeXL is used by:
- ExpressionEditor in Telerik.Windows.Controls.Expressions
- WizzardPage in Telerik.Windows.Controls.Navigation
- ScheduleView's MonthView items in Telerik.Windows.Controls.ScheduleView
As the following example shows, you can change the default FontFamily from "Calibri" to "MonoType Corsiva" and the font sizes on a click of a button:
<StackPanel>
    <telerik:RadCalendar />
    <telerik:RadButton Content="Change font" Click="OnButtonChangeFontSizeClick" />
</StackPanel>
private void OnButtonChangeFontSizeClick(object sender, RoutedEventArgs e)
{
    Office2013Palette.Palette.FontSizeL = 24;
    Office2013Palette.Palette.FontSizeS = 16;
    Office2013Palette.Palette.FontFamily = new FontFamily("MonoType Corsiva");
}
Changing Opacity
If you need to change the opacity of the disabled elements, you can now easily do so by using the DisabledOpacity property of the Office2013Palette. Its default value is 0.3.
Example 8: Changing the opacity
Office2013Palette.Palette.DisabledOpacity = 0.5;
https://docs.telerik.com/devtools/wpf/styling-and-appearance/themes-suite/common-styling-appearance-office2013-theme
2019-10-14T04:20:34
CC-MAIN-2019-43
1570986649035.4
[array(['../images/Common_Styling_Appearance_Office2013_Theme_04.png', 'Common Styling Appearance Office 2013 Theme 03'], dtype=object)]
docs.telerik.com
Syntax
float ShadowSlopeBias
Remarks
Controls how accurate the self-shadowing of whole scene shadows from this light is. This works in addition to shadow bias by increasing the amount of bias depending on the slope of a surface. At 0, shadows will start at their caster surface, but there will be many self-shadowing artifacts. With larger values, shadows will start further from their caster and there won't be self-shadowing artifacts, but objects might appear to fly. A value of around 0.5 seems to be a good tradeoff. This also affects the soft transition of shadows.
https://docs.unrealengine.com/en-US/API/Runtime/Engine/Components/ULightComponent/ShadowSlopeBias/index.html
2019-10-14T03:39:42
CC-MAIN-2019-43
1570986649035.4
[]
docs.unrealengine.com
What's the Story?
Instagram stories don't hang about for long, so you need to make an impression fast. Stories are supposed to have a narrative - a beginning, a middle and an end - so you need at least three segments, and probably no more than five to keep it fresh. Then you can upload your segments at intervals during a given time period. Our story is a Saturday morning coffee promotion - so the idea is that the slides get uploaded to an Instagram feed at intervals during a busy morning.
Grab a template
Take a look at our templates for inspiration. You will notice that the story templates are larger than the standard Instagram templates because they will fill the available screen. So here I'm starting with one of the templates... It's got 4 slides - but I only want 3, so I'm deleting one by tapping on the thumbnail and choosing Delete.
Add your Photos and create your story
Now for the first slide - tap the thumbnail to select... Now I'm going to replace the main photo - tap to select and then choose Replace. Looking at the Properties panel on the right, I'm going to choose one of the free stock photos - but I can of course choose from any of the other options. Find your photo and just tap on it to replace... Learn more about photo editing here. Now to change the text - just tap anywhere in the text and start typing. You can change the font, font size, color, line spacing, alignment etc. in the Text properties panel. So now to change the image and text on the two remaining slides - here's slide 2... And here is slide 3...
Publish your story
Now you have all three ready to go - tap Documents > Download... to download the slides. From the Download options choose JPEG or PNG - here is the JPEG option; the standard default settings are fine, we've taken care of those for you. Select All pages (zip file) and tap Download to export your slides as a zip to your download folder. When you have downloaded your slides, the next thing to do is to move them somewhere accessible so that you can grab them from your phone to upload as an Instagram story - a cloud drive is a good option if you have it synced to your phone. You should then be able to export from your cloud drive to your Instagram feed.
Good housekeeping
Don't forget to save your Xara Cloud file - Documents > Save. This is your master file, if you like; you can then come back, re-edit, and export it any time.
https://docs.xara.com/en/articles/2016780-instagram-stories
2019-10-14T04:51:50
CC-MAIN-2019-43
1570986649035.4
[array(['https://downloads.intercomcdn.com/i/o/64428012/5d07ede2621dfaac7db400ae/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64432168/18b6b85f7beb7c916083f1bb/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64433442/d1df581fcd8af16036290554/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64668855/a0e67cb9478cb383c682f08e/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64674438/fa32f645d2918ee1578b7338/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64679944/366b9060db8e8a1094bda534/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64727018/47de8fe22cf616b189b284f4/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64727518/5f6c4b02165387e8f9a33122/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64862275/b80e66c780cbba11668f9d85/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/67601934/54bc5468bf2a8a751a4170f4/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/64738377/ea736922bfec695cb768d0ab/image.png', None], dtype=object) ]
docs.xara.com
You can run the cross-project synchronization tool, kwxsync, to copy issue status changes and comments to other projects sharing the same source code. This means that developers need to cite detected issues only once.
Prerequisites
When you attempt to perform the synchronization, kwxsync proceeds only if the user has the proper permissions to do so on each of the specified machines. If the user does not have the proper permissions, all machines on which the permission check did not pass, and the user name with which it did not pass, will be logged, and the kwxsync process will exit. If connecting to a machine that does not have an access control method in place and an entry does not exist for the target machine in the ltoken file, the name of the user that is currently logged in is used.
Here's an example of how kwxsync can be useful in the software development process.
How kwxsync solves this problem
You can run kwxsync to copy developers' changes from the branch to the main trunk. For example:
kwxsync ProjectBranch ProjectMain
kwxsync applies the most recent status changes and comments from ProjectBranch to ProjectMain, and from ProjectMain to ProjectBranch. All of the updates are merged, so that identical issues in the two projects have an identical history. Since you can specify multiple projects for kwxsync, and you can run kwxsync continuously, you can ensure that all projects sharing source code are always up to date. To run kwxsync continuously, call it in a loop (for example, a looping batch file). You can improve performance when running kwxsync continuously by specifying the --last-sync option. See kwxsync for usage, options and examples.
To understand exactly how issue statuses and comments are synchronized, consider the following example. We have two projects, ProjectMain and ProjectBranch. They each contain an identical code issue, which we'll refer to as IssueMain and IssueBranch. Developers cited these issues as follows:
Running the command kwxsync ProjectMain ProjectBranch merges the entire history of IssueMain and IssueBranch, preserving the citing time. As a result, the new citing history of IssueMain and IssueBranch will be as follows (we've truncated the information for brevity):
If you'd like to preview the changes before they're made, you can perform a "dry run". With the --dry option, kwxsync won't actually synchronize the issue status updates and comments; instead, it will just output a list of these changes to the console. If you'd like to output the potential changes to an HTML report, use the --output option as well as the --dry option. kwxsync will create an HTML report for each project, with the project name as part of the output file name. For example, if you run this command:
kwxsync --dry --output report.html Project1 Project2 Project3
kwxsync will create three files: report_Project1.html, report_Project2.html and report_Project3.html.
To generate a synchronization report, just add the --report <file> option to your kwxsync command line. You can create a synchronization report during synchronization, or as part of a "dry run".
Example: Creating a report during synchronization
kwxsync --report report.txt Project1 Project2 Project3
Example: Creating a report during a "dry run"
kwxsync --dry --report report.txt Project1 Project2 Project3
The synchronization report has the following format: kwxsync assigns a unique ID to each detected issue. This detected issue may exist in multiple projects. Each detected issue also has a project-specific ID; this is the ID that you see in Klocwork Static Code Analysis.
14;zlib;1
4;zlib;2
8;zlib;3
5;zlib;5
6;zlib;6
7;zlib;4
1;zlib_trend;7
10;zlib_trend;1
11;zlib_trend;9
4;zlib_trend;2
8;zlib_trend;3
5;zlib_trend;5
6;zlib_trend;6
7;zlib_trend;4
2;zlib_trend;8
1;zlib_br;7
11;zlib_br;9
4;zlib_br;2
8;zlib_br;3
5;zlib_br;5
6;zlib_br;6
7;zlib_br;4
2;zlib_br;8
You can group this data by unique issue ID to get a list of unique issues detected across all projects:
https://docs.roguewave.com/en/klocwork/current/synchronizingstatuschangesandcommentsacrossprojects
2019-10-14T03:37:42
CC-MAIN-2019-43
1570986649035.4
[]
docs.roguewave.com
Configure Windows Domain Name Server
Enable DNS debug logging
If you want detailed DNS server statistics, enable debug logging on your DNS servers by following the instructions for your operating system:
- For Windows Server 2008 R2, see Select and enable debug logging options on the DNS server on MS TechNet. Note: This procedure works for Windows Server 2008 R2 even though the article shows that it is for Windows Server 2003 family operating systems, as the procedure is the same.
- For Windows Server 2012 R2, see DNS Logging and Diagnostics on MS TechNet.
https://docs.splunk.com/Documentation/MSApp/1.2.0/MSInfra/ConfigureWindowsDomainNameServer
2019-10-14T03:45:39
CC-MAIN-2019-43
1570986649035.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', None], dtype=object)]
docs.splunk.com
Render Order By default, objects in ZapWorks Studio are rendered in the order they are positioned in the Hierarchy i.e. nodes placed below others in the Hierarchy will have render priority and be displayed in front of nodes with lower priority. While this default behaviour works well when layering 2D content to set up a user interface, the use of 3D models in a scene requires greater control over render priority. To this end ZapWorks Studio provides the Layer Mode property, which allows the default behaviour to be changed on a per-object basis. As well as the object's position in the Hierarchy, the additional layer mode options also consider the object's position in 3D space relative to the camera when calculating the render order. Overlay The overlay layer mode is the default setting with objects being rendered in hierarchy order, and is ideal for 2D UI elements that are positioned in screen space. In the first set of images below, the grey plane has render priority as its node is positioned below the white plane's node in the Hierarchy. However, in the second set of images the white plane's node has been moved below the other node in the Hierarchy, and is rendered in front of the grey plane in the 3D view. Full 3D The full_3d layer mode differs from overlay in that it mimics real-world behaviour i.e. objects positioned closer to the camera (the 'eye') will be displayed over others. By default, 3D objects imported to ZapWorks Studio will have their layerMode property set to full_3d. In Studio this means that the render order of objects using the full_3d layer mode is determined by their Z-axis position in camera space. This setting is applied to imported 3D models by default. Both 3D models shown below are using the full_3d layer mode, with the green car's Z-axis position set to 0. In the first set of images, the green car has render priority because it's positioned in front of the blue car relative to the camera i.e. 1 in the Z-axis. However, in the second set of images the green car has lost render priority because it's position has been moved to behind the blue car relative to the camera i.e. -1 in the Z-axis. When the surfaces of two full_3d objects occupy the same space in 3D they will fight for render priority, an issue known as Z-fighting. To avoid this the models should be repositioned. Test 3D Objects with the test_3d layer mode use the same render priority calculation as those with a full_3d layer mode i.e. objects positioned closer to the camera will be rendered over others. However, unlike full_3d, objects with this layer mode do not affect the render priority of objects positioned later in the hierarchy, even if the test_3d object is positioned closer to the camera. This property makes this layer mode especially useful when working with 2D objects with transparency, that are positioned in 3D space. The following illustrations contain a 3D model using the full_3d layer mode (green car), and a 2D image with transparency that uses test_3d (blue car). In all cases the 2D object is positioned at 0 in the Z-axis. In the first scenario, the 3D model is positioned at 5 in the Z-axis. As it's positioned nearer to the camera than the 2D object, it is given render priority. In the second scenario, the 3D model has been moved to -5 in the Z-axis and is now positioned further away from the camera than the 2D object which results in the 2D image being given render priority. 
In the previous examples, the 2D object's position relative to the camera affected the 3D model's render priority, as the 2D object's node was positioned later in the Hierarchy. However, objects using the test_3d layer mode do not affect the render priority of objects which appear later in the Hierarchy. If the 2D object's node is placed earlier than the 3D model's node in the Hierarchy, it will not affect the 3D model's render priority, regardless of whether it is positioned closer to the camera. Reorder Meshes Studio provides the Reorder Meshes property, which defines the order that meshes inside of an object should be rendered. The opacity setting will render meshes positioned behind another object, if the surface nearer to the camera has transparency. The opacityAndDistance setting works similarly, though the distance of the object is also taken into account when calculating render priority. These settings are particularly helpful when working with models containing other meshes that should be visible through transparent areas of the object e.g. a car's interior being visible through the car window. The opacity of a model's surface can be set through an opacity map, when using lighting materials. Other objects can also be rendered through these transparent areas. Both objects must have their layer mode set to full_3d. If an object with transparency is positioned earlier in the Hierarchy, the object positioned behind will not be visible through the transparent areas. The object with transparency must be positioned later in the Hierarchy for the object positioned behind to be visible through the transparent area.
https://docs.zap.works/studio/3d-models/render-order/
2019-10-14T04:24:43
CC-MAIN-2019-43
1570986649035.4
[]
docs.zap.works
If you edit the configuration for one project, you can import the configuration into other existing projects. You can upload files up to 2 GB in size. The 'analysis_profile.pconf' defines a delta from the default analysis profile. For example, exporting an 'analysis_profile.pconf' when one checker was disabled from the default configuration would result in the exported profile containing a 'disable' entry for one checker. Similarly, importing this same 'analysis_profile.pconf' in another environment would result in only one checker being disabled. If you want to guarantee the imported configuration matches the exported configuration file, reset the configuration to default first.
https://docs.roguewave.com/en/klocwork/current/copyingthecheckerconfigurationtoanexistingproject
2019-10-14T04:29:30
CC-MAIN-2019-43
1570986649035.4
[]
docs.roguewave.com
This section guides you through user substitution and how to work with it. User substitution involves defining a substitute for a particular user for a given time window. This functionality is particularly useful in BPMN user tasks when the original task assignee becomes unavailable for a period of time, for example, when the task assignee goes on vacation. In this case, all the tasks assigned to this user will get delayed until the particular assignee is available again, and nobody else will be able to work on these tasks either. To avoid this kind of unnecessary processing delay, you can use the user substitution feature. This means that when a user is not available for a certain period of time, all tasks assigned to that user will be assigned to the defined substitute instead.
How it works
- Users are allowed to define a substitute for themselves within the time period that they are going to be unavailable.
- Once this specified period (substitution period) starts, all the existing tasks of the user are transferred to the substitute (i.e., bulk reassignment).
- From this point onwards, any new tasks that are going to be assigned to the user will be assigned to the substitute user instead.
- The substitute user will also be added as a candidate user to all the existing tasks for which the original assignee is a candidate user.
If the given substitute is also not available (i.e., the substitute also goes on vacation), you can enable transitivity. Enabling transitivity allows the engine to try and find a transitive substitute by going through the available substitution records. In this way, the engine uses the best available user as the substitute. However, if the system cannot find a proper substitute (i.e., there is a cyclic dependency between substitutions), it will return an error, rejecting the substitution request. This rejection is possible only for the substitutions that start just after the substitution request. For a scheduled substitution, if the engine ends up without a proper substitute, the tasks of the user will be assigned to the ‘task owner’. If the task owner is not defined, the assignee will be removed from the tasks and they will become claimable.
The WSO2 Business Process Server provides REST APIs to add, update, and look up substitution information. Alternatively, you can also use the BPMN-explorer to access all the facilities provided by the substitution APIs.
Enabling user substitution
- Run the database script <BPS_HOME>/dbscripts/bps/bpmn/create/activiti.<DB_Name>.create.substitute.sql against your BPMN engine's database to add the substitution-related table.
- Open the activiti.xml file found in the <BPS_HOME>/repository/conf/ directory and uncomment the following configuration.
<bean id="userSubstitutionConfiguration">
    <property name="enabled" value="true"></property>
    <property name="enableTransitivity" value="false"></property>
    <property name="activationInterval" value="60"></property>
</bean>
Permission Scheme
Each valid user has permission to add, update, and view his/her own substitution records, and to view the substitution records where he/she acts as a substitute. Viewing and changing substitution records of other users requires the 'substitution' permission. The substitution permission path is as follows. See Managing Role Permissions for more information on adding permissions.
Adding and managing user substitution
This section guides you through adding, updating and viewing substitutions. You can do this using either the BPMN-explorer user interface or the substitution APIs.
https://docs.wso2.com/display/BPS360/Working+with+BPMN+User+Substitution
2019-10-14T03:42:41
CC-MAIN-2019-43
1570986649035.4
[]
docs.wso2.com
SDK Development Documentation
The purpose of this document is to explain how to use LumiSDK to bind a gateway device. It applies only to Android mobile phone APP applications.
Preparation
- Log in to the AIOT Open Cloud Platform. After creating an application, you can get the "AppId" and "AppKey" from the "Application Management" - "Application Overview" page.
- Get the openID and positionId. For details, see the “OAuth 2.0” chapter in the Cloud Development Manual.
- Download and decompress the SDK.
- Download and install the Android and iOS integrated development environments.
Steps
1. Configure the Android project (Take Android Studio as an example)
1) Create an Android project. Choose File->New->New Module->Import .JAR/.AAR Package from the main menu, select the path of LumiSDK.aar, and click "Finish".
2) Rebuild the project. The icon of the LumiSDK module will become a folder with a cup, and an .iml folder will be created automatically.
3) Choose File->Project Structure->app->Dependencies from the main menu, click "+" on the upper right corner, select "Module Dependency", and add the LumiSDK module.
4) Configure src/main/AndroidManifest.xml to add networking permissions for the project. Otherwise, the authorization operation cannot be completed.
<uses-permission android:name="android.permission.INTERNET" />
2. Bind the gateway
1) Connect the phone to Wi-Fi, then call the authorization interface (aiotAuth) to authorize and get the authorization result.
Authorization interface: LumiSDK.aiotAuth(appID, appKey, openID, new CallBack())
The callback function code is as follows:
package lumisdk;
public interface CallBack {
    void onFaied(long var1, String var3);
    void onSuccess(String var1);
}
2) After the authorization is successful, press and hold the gateway button until the yellow light keeps flashing and the gateway voice prompts "waiting for connection"; the gateway will then create a hotspot named "lumi-gateway-xxxx".
3) Connect the phone to the gateway hotspot, call the fastlink interface (gatewayFastLink), and wait for the gateway voice prompt "connect successfully". Note: It takes time to bind the gateway; please wait about 30s.
Fastlink interface: Lumisdk.gatewayFastLink(params.toString(), new CallBack())
- "param" uses the JSON format, including cid, ssid, passwd, positionId, country_domain.
{
    cid: xxxxxxxxxxxxxxxx,
    ssid: test_123,
    passwd: 12345678,
    positionId: xxxxxxxxxxxxxxxxxx,
    country_domain: xxxx
}
Detailed sub-parameters are described in the following table.
Note:
- The system can't identify a Wi-Fi name that contains special characters. If a Wi-Fi name contains special characters, please change the name and then reconnect. If the connection fails again, please reset the device or change the Wi-Fi and reconnect. If the Wi-Fi is unstable, it is suggested to use a 4G phone as the Wi-Fi hotspot.
- The system doesn't support 5G Wi-Fi.
- Because the network requests in the SDK are synchronous, the call must be invoked in a non-UI thread. For details, refer to the DEMO.
- When you call the interface, you may get a correct or incorrect return result. For details about the error codes, see the “Description of Return Codes” chapter in the Cloud Development Manual.
http://docs.opencloud.aqara.com/en/sdk/android-sdk/
2019-10-14T04:41:42
CC-MAIN-2019-43
1570986649035.4
[array(['http://cdn.cnbj2.fds.api.mi-img.com/cdn/aiot/doc-images/zh/sdk/lumisdk.png', '模块'], dtype=object) array(['http://cdn.cnbj2.fds.api.mi-img.com/cdn/aiot/doc-images/zh/sdk/dependencies.png', '依赖'], dtype=object) ]
docs.opencloud.aqara.com
kuzzle.memoryStorage.srandmember("key", new ResponseListener<String[]>() { @Override public void onSuccess(String[] members) { // callback called once the action has completed } @Override public void onError(JSONObject error) { } }); Callback response: ["member1", "member2", "..."]
https://docs-v2.kuzzle.io/sdk/java/2/core-classes/memory-storage/srandmember/
2019-10-14T03:53:15
CC-MAIN-2019-43
1570986649035.4
[]
docs-v2.kuzzle.io
TruffleTruffle Truffle is a world class development environment, testing framework and asset pipeline for Ethereum, aiming to make life as an Ethereum developer easier...
https://docs.ujomusic.com/truffle/
2019-10-14T03:14:40
CC-MAIN-2019-43
1570986649035.4
[]
docs.ujomusic.com
Cursor locking (using Cursor.lockState) and full-screen mode (using Screen.fullScreen) are both supported in Unity WebGL.
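As a minimal, hypothetical sketch (the MonoBehaviour name and key bindings are invented for illustration and are not taken from this page), a script that locks the cursor and toggles full-screen mode might look like this:
using UnityEngine;

public class CursorAndFullscreenExample : MonoBehaviour
{
    void Update()
    {
        // Lock and hide the cursor when the user clicks in the game view.
        if (Input.GetMouseButtonDown(0))
        {
            Cursor.lockState = CursorLockMode.Locked;
            Cursor.visible = false;
        }

        // Toggle full-screen mode with the F key.
        if (Input.GetKeyDown(KeyCode.F))
        {
            Screen.fullScreen = !Screen.fullScreen;
        }

        // Release the cursor again with Escape.
        if (Input.GetKeyDown(KeyCode.Escape))
        {
            Cursor.lockState = CursorLockMode.None;
            Cursor.visible = true;
        }
    }
}
Note that browsers generally honour cursor-lock and full-screen requests only when they are triggered by a user interaction such as a click or key press, which is why the calls in this sketch are made inside input checks.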
https://docs.unity3d.com/2019.1/Documentation/Manual/webgl-cursorfullscreen.html
2019-10-14T04:03:56
CC-MAIN-2019-43
1570986649035.4
[]
docs.unity3d.com
- ! Inter-routing domain service Citrix SD-WAN allows you to segment the network using Routing Domains, ensuring high security and easy management. With the use of the Routing Domain the traffic is isolated from each other in the overlay network. Each routing domain maintains its own routing table. For more information on Routing Domain, see Routing Domain. However, sometimes we need to route the traffic between the Routing domains. For example if shared services such as printer, scanner, and mail server are provisioned as a separate Routing Domain. Inter-routing domain is required to enable users from different routing domains to access the shared services. Citrix SD-WAN provides Static Inter-Routing Domain Service, enabling route leaking between Routing Domains within a site or between different sites. This eliminates the need for an edge router to handle route leaking. The Inter-routing domain service can further be used to set up routes, firewall policies, and NAT rules. A new Firewall Zone, Inter_Routing_Domain_Zone is created by default and serves as the firewall zone for the Inter-Routing Domain Services for routing and filtering. Note Citrix SD-WAN PE appliances do not perform WAN optimization functionality on Inter-Routing Domain packets. To configure Inter-routing Domain Service between two routing domains. Consider an SD-WAN network with an MCN and 2 or more branches with at least two Routing Domains configured globally. By default, all the routing domains are enabled on the MCN. Selectively enable the required routing domains on the other sites. For information on configuring Routing Domain see, Configure Routing Domain. In the SD-WAN Configuration Editor, navigate to Connections > Select Site > Inter-Routing Domain Service. Click +: No zone is selected and the original zone of the packet is retained. - All Zones configured in the network might be selected. - Click Apply to create the Inter-routing domain service. The created service can be used to create routes, firewall policies, and NAT policies. Note You cannot configure an Inter-routing domain service, using routing domains that are not enabled on a site. To create routes using the Inter-routing domain service, create a route with the Service type as Inter-Routing Domain Service and select the inter-routing domain service. For more information on configuring Routes, see How to Configure Routes. Policies. You can also choose Intranet service type to configure Static and Dynamic NAT policies. For more information on configuring NAT policies, see Network Address Translation. Monitoring You can view monitoring statistics for connections that use inter-routing-domain services under Monitoring > Firewall Statistics > Connections. Use Case: Sharing resources across Routing Domains Let us consider a scenario, in which users in different routing domains need to access common assets, such a printer or network storage. There are 3 routing domains at a branch RD1, RD2, and Shared RD as shown in the figure. To enable users in RD1 and RD2 to access resources in Shared RD: - Create an Inter-Routing Domain service between RD1 and Shared RD, for example Inter RD1. Create an Inter-Routing Domain service between RD2 and Shared RD, for example Inter RD2. Configure a static route to Shared RD from RD1 and RD2. In RD1, add a route 172.168.2.0/24 to InterRD1. In RD2, add a route 172.168.2.0/24 to InterRD2. Add a Dynamic NAT rule to InterRD1 using a VIP in shared RD. 
Enable Bind Responder Route to ensure that the reverse route uses the same service type. Add a Dynamic NAT rule to InterRD2 using a VIP in shared RD, for example 10.0.0.11. Enable Bind Responder Route to ensure that the reverse route uses the same service type. - Use filters to limit which resources in Shared RD users in RD1 and RD2 are allowed to access.
https://docs.citrix.com/en-us/citrix-sd-wan/11-2/routing/inter-routing-domain-service.html
2021-06-13T00:20:41
CC-MAIN-2021-25
1623487586465.3
[array(['/en-us/citrix-sd-wan/11-2/media/inter-routing-domain-route.png', 'Configure route using inter-routing domain service'], dtype=object) array(['/en-us/citrix-sd-wan/11-2/media/inter-routing-domain-policies.png', 'Configure policies using inter-routing domain service'], dtype=object) array(['/en-us/citrix-sd-wan/11-2/media/inter-routing-domain-nat.png', 'Configure NAT using inter-routing domain service'], dtype=object) array(['/en-us/citrix-sd-wan/11-2/media/inter-routing-domain-monitoring.png', 'Monitoring inter-routing domain'], dtype=object) array(['/en-us/citrix-sd-wan/11-2/media/inter-routing-domain-use-case.png', 'Shared resources across routing domains'], dtype=object) ]
docs.citrix.com
EXADS Public API. Check this section for updates on new features and improvements.
https://docs.exads.com/
2021-06-13T00:05:38
CC-MAIN-2021-25
1623487586465.3
[]
docs.exads.com
Class “CanvasContext” The CanvasContext is used for drawing onto the canvas. It is a subset of the HTML5 CanvasRenderingContext2D. Example import {Canvas, contentView} from 'tabris'; new Canvas({layoutData: 'stretch'}) .onResize(({target: canvas, width, height}) => { let context = canvas.getContext("2d", width, height); context.moveTo(0, 0); // ... }).appendTo(contentView); Methods arc(x, y, radius, startAngle, endAngle, anticlockwise?) Adds an arc to the path which is centered at (x, y) position with radius r starting at startAngle and ending at endAngle going in the given direction by anticlockwise (defaulting to clockwise). beginPath() Starts a new path by emptying the list of sub-paths. bezierCurveTo(cp1x, cp1y, cp2x, cp2y, x, y) Adds a cubic Bézier curve to the path. The starting point is the last point in the current path. clearRect(x, y, width, height) Sets all pixels in the rectangle defined by starting point (x, y) and size (width, height) to transparent, erasing any previously drawn content. closePath() Adds a straight line from the current point to the start of the current sub-path. createImageData(width, height) creates a new, blank ImageData object with the specified dimensions. All of the pixels in the new object are transparent black. createImageData(imageData) creates a new, blank ImageData object with the same dimensions as the specified existing ImageData object. All of the pixels in the new object are transparent black. drawImage(image, dx, dy) Draws the entire given ImageBitmap at the given coordinates (dx, dy) in its natural size. drawImage(image, dx, dy, dWidth, dHeight) Draws the entire given ImageBitmap at the given coordinates (dx, dy) in the given dimension (dWidth, dHeight). drawImage(image, sx, sy, sWidth, sHeight, dx, dy, dWidth, dHeight) Draws a section (sx, sy, sWidth, sHeight) of the given ImageBitmap at the given coordinates (dx, dy) in the given dimension (dWidth, dHeight). fill() Fills the current or path with the current fill style. fillRect(x, y, width, height) draws a filled rectangle at (x, y) position whose size is determined by width and height. and whose color is determined by the fillStyle attribute. fillText(text, x, y) Fills a given text at the given (x, y) position using the current textAlign and textBaseline values. getImageData(x, y, width, height) Returns an ImageData object representing the underlying pixel data for the area of the canvas denoted by the given rectangle. lineTo(x, y) Connects the last point in the sub-path to the (x, y) coordinates with a straight line. moveTo(x, y) Moves the starting point of a new sub-path to the (x, y) coordinates. putImageData(imageData, x, y) Paints data from the given ImageData object onto the bitmap at coordinates (x, y). quadraticCurveTo(cpx, cpy, x, y) Adds a quadratic Bézier curve to the path. The starting point is the last point in the current path. rect(x, y, width, height) Creates a path for a rectangle with the top-left corner at (x, y) restore() Restores the most recently saved canvas state by popping the top entry in the drawing state stack. rotate(angle) Adds a rotation to the transformation matrix. save() Saves the entire state of the canvas by pushing the current state onto a stack. scale(x, y) Adds a scaling transformation to the canvas units by x horizontally and by y vertically. setTransform(a, b, c, d, e, f) resets (overrides) the current transformation to the identity matrix and then invokes a transformation described by the arguments of this method. 
The matrix has the following format: [[a, c, e], [b, d, f], [0, 0, 1]] stroke() Strokes the current path with the current stroke style. strokeRect(x, y, width, height) draws the outline of a rectangle at (x, y) position whose size is determined by width and height using the current stroke style. strokeText(text, x, y) Strokes a given text at the given (x, y) position using the current textAlign and textBaseline values. transform(a, b, c, d, e, f) Multiplies the current transformation with the matrix described by the arguments of this method. The matrix has the following format: [[a, c, e], [b, d, f], [0, 0, 1]] translate(x, y) Adds a translation transformation by moving the canvas and its origin x horizontally and y vertically on the grid. Properties fillStyle Specifies the color to use inside shapes. font Specifies the current text style being used when drawing text. lineCap Determines how the end points of every line are drawn. lineJoin Determines how two connecting segments in a shape are joined together. lineWidth The thickness of lines in space units. strokeStyle Specifies the color to use for the lines around shapes. textAlign Specifies the current text alignment being used when drawing text. textBaseline Specifies the current text baseline being used when drawing text. Change Events lineWidthChanged Fired when the lineWidth property has changed. lineCapChanged Fired when the lineCap property has changed. lineJoinChanged Fired when the lineJoin property has changed. fillStyleChanged Fired when the fillStyle property has changed. fontChanged Fired when the font property has changed. strokeStyleChanged Fired when the strokeStyle property has changed. textAlignChanged Fired when the textAlign property has changed. textBaselineChanged Fired when the textBaseline property has changed.
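As a small, hedged sketch that combines several of the methods and properties listed above (the widget setup mirrors the example at the top of this page; colours and sizes are arbitrary):

import {Canvas, contentView} from 'tabris';

// Sketch only: fill a rectangle, stroke an arc and draw some text.
new Canvas({layoutData: 'stretch'})
  .onResize(({target: canvas, width, height}) => {
    const ctx = canvas.getContext('2d', width, height);

    // Filled rectangle via fillStyle/fillRect
    ctx.fillStyle = 'red';
    ctx.fillRect(10, 10, 80, 40);

    // Stroked circle via beginPath/arc/stroke
    ctx.strokeStyle = 'blue';
    ctx.lineWidth = 2;
    ctx.beginPath();
    ctx.arc(60, 110, 30, 0, Math.PI * 2);
    ctx.stroke();

    // Text via font/fillText
    ctx.font = '18px sans-serif';
    ctx.fillText('Hello Canvas', 10, 170);
  })
  .appendTo(contentView);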
https://docs.tabris.com/3.1/api/CanvasContext.html
2021-06-12T23:35:21
CC-MAIN-2021-25
1623487586465.3
[]
docs.tabris.com
Anyone and everyone is encouraged to fork Cortex and submit pull requests, propose new features and create issues. Fork on Github, then clone your repo: $ git clone [email protected]:your-username/cortex.git Follow the setup instructions Follow the test suite instructions, and ensure all tests pass Make your changes. Make frequent commits, but keep commits and commit messages focused on individual, atomic feature changes or fixes. If you end up making many small commits during debug or development that belong to the same chunk of functionality, squash those commits before creating a pull request. Add tests for your change. Once again, ensure all tests pass. Push to a branch on your fork and submit a pull request. Your PR must adhere to the following conventions: For CareerBuilder team members, if the PR relates to a JIRA card, use the following naming convention: JIRA card #: PR Title Example: COR-365: Unhandled Error on Media Upload For open source contributors, or if the PR does not relate to a JIRA card, use: PR Title Example: Unhandled Error on Media Upload Names should use titleized capitalization. i.e.: Login Form Redesign and Refactor Names should be dense, yet informative. For example, Testing is not an appropriate PR name, nor is For update_url task, must use the body method to actually retrieve the stream from the S3 GetObjectOutput. PR names are more high-level than commit messages. PRs should be tagged appropriately (i.e. enhancement, bug, etc). Tags should be preferred over including things like 'bug' in the PR name. PR Descriptions should be a clearly-separated, bulleted list summarizing what's contained in the commits, as well as any relevant notes or considerations for developers or ops. It should also detail any potential follow-up issues. If working with a versioned library, open source users should not include version bumps or changelog updates in their PRs. From here, it's up to the Cortex maintenance team ([email protected]) to review your pull request. We operate in 2-week sprint lifecycles, but we'll try to get to your request or contribution sooner. We may suggest further improvements or alternatives, or the community at large may have input. Some things that will increase the chances that your pull request will be accepted: Write good tests Be consistent If applicable, suggest additional options or alternatives, follow-up issues or potential future improvements
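As a rough command-line sketch of the branch, squash and push flow described above (the branch name, commit message and base branch are placeholders):

$ git checkout -b unhandled-error-on-media-upload
# ...edit, run the test suite, and commit in small, focused steps...
$ git add .
$ git commit -m "Handle error on media upload"
# Squash noisy work-in-progress commits into atomic ones before the PR
$ git rebase -i master
# Push the branch to your fork, then open the pull request from there
$ git push origin unhandled-error-on-media-upload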
https://docs.cortexcms.org/advanced/contributing
2021-06-12T22:48:23
CC-MAIN-2021-25
1623487586465.3
[]
docs.cortexcms.org
Getting started Step 1: Install Network Quarantine Install Network Quarantine. For more information, see Installing Network Quarantine. Step 2: Configure a NAC Configure a network access control (NAC) solution. For more information, see Configuring NACs. Step 3: (Optional) Configure notifications You can optionally configure notifications about NAC start and stop events or quarantine events. For more information, see Configuring notifications. Step 4: Quarantine endpoints Quarantine endpoints. For more information, see Quarantining endpoints. If you set up integration with Tanium™ Discover, you can also quarantine endpoints from the Discover workbench. Last updated: 4/21/2021 10:41 AM | Feedback
https://docs.tanium.com/network_quarantine/network_quarantine/gettingstarted.html
2021-06-13T00:19:33
CC-MAIN-2021-25
1623487586465.3
[]
docs.tanium.com
Streaming¶ Stream allows filtering and sampling of realtime Tweets using Twitter’s API. Streams utilize the streaming HTTP protocol to deliver data through an open, streaming API connection. For further information, see Twitter's streaming API documentation. Using Stream¶ To use Stream, an instance of it needs to be initialized with Twitter API credentials (Consumer Key, Consumer Secret, Access Token, Access Token Secret): import tweepy stream = tweepy.Stream( "Consumer Key here", "Consumer Secret here", "Access Token here", "Access Token Secret here" ) Then, Stream.filter() or Stream.sample() can be used to connect to and run a stream: stream.filter(track=["Tweepy"]) Data received from the stream is passed to Stream.on_data(). This method handles sending the data to other methods based on the message type. For example, if a Tweet is received from the stream, the raw data is sent to Stream.on_data(), which constructs a Status object and passes it to Stream.on_status(). By default, the other methods, besides Stream.on_data(), that receive the data from the stream, simply log the data received, with the logging level dependent on the type of the data. To customize the processing of the stream data, Stream needs to be subclassed. For example, to print the IDs of every Tweet received: class IDPrinter(tweepy.Stream): def on_status(self, status): print(status.id) printer = IDPrinter( "Consumer Key here", "Consumer Secret here", "Access Token here", "Access Token Secret here" ) printer.sample() Threading¶ Both Stream.filter() and Stream.sample() have a threaded parameter. When set to True, the stream will run in a separate thread, which is returned by the call to either method. For example: thread = stream.filter(follow=[1072250532645998596], threaded=True) Handling Errors¶ Stream has multiple methods to handle errors during streaming. Stream.on_closed() is called when the stream is closed by Twitter. Stream.on_connection_error() is called when the stream encounters a connection error. Stream.on_request_error() is called when an error is encountered while trying to connect to the stream. When these errors are encountered and max_retries, which defaults to infinite, hasn’t been exceeded yet, the Stream instance will attempt to reconnect the stream after an appropriate amount of time. By default, all three of these methods log an error. To customize that handling, they can be overridden in a subclass: class ConnectionTester(tweepy.Stream): def on_connection_error(self): self.disconnect() Stream.on_request_error() is also passed the HTTP status code that was encountered. The HTTP status codes reference for the Twitter API can be found in the Twitter API documentation. Stream.on_exception() is called when an unhandled exception occurs. This is fatal to the stream, and by default, an exception is logged.
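For example, a minimal sketch (credentials are placeholders) that uses the HTTP status code passed to Stream.on_request_error():

import tweepy

# Sketch only: inspect the HTTP status code passed to on_request_error().
class RequestErrorPrinter(tweepy.Stream):
    def on_request_error(self, status_code):
        print(f"Stream request failed with HTTP status {status_code}")
        # Stop the stream for this sketch; by default it would keep
        # retrying until max_retries is exceeded.
        self.disconnect()

printer = RequestErrorPrinter(
    "Consumer Key here",
    "Consumer Secret here",
    "Access Token here",
    "Access Token Secret here"
)
printer.sample()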
https://docs.tweepy.org/en/latest/streaming.html
2021-06-12T22:28:43
CC-MAIN-2021-25
1623487586465.3
[]
docs.tweepy.org
User management in Stream Processor
https://docs.wso2.com/display/SP410/User+Management
2021-06-13T00:34:50
CC-MAIN-2021-25
1623487586465.3
[]
docs.wso2.com
Data binding provides a simple way to get data into your application's UI without having to set properties on each control each time a value changes. Binding is often used with the MVVM Pattern and for the rest of this guide we'll be assuming that you're using that pattern in your code.
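As a minimal, hypothetical sketch (the MainWindowViewModel class and its Greeting property are assumptions, not part of this page), a control can bind to a property of the view model that is set as its DataContext:

<!-- Sketch only: Text is bound to the Greeting property of the DataContext -->
<TextBlock Text="{Binding Greeting}" />

// Hypothetical view model exposing the bound property
public class MainWindowViewModel
{
    public string Greeting => "Welcome to Avalonia!";
}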
https://docs.avaloniaui.net/docs/data-binding
2021-06-13T00:08:10
CC-MAIN-2021-25
1623487586465.3
[]
docs.avaloniaui.net
With Groups 2.x a new model of access control is introduced where various post types are restricted by groups instead of capabilities. The following guide will help you migrate to the new model. First, please make sure to take a full backup of your site and its database! Once you have updated to Groups 2.x, check that Legacy Access Control is enabled, in your Dashboard under Groups > Options. The use case scenario is this: We’ll assume you want to use group-based restrictions for all pages protected until now with a premium capability. You should create the Premium group if it doesn’t already exist now. If you have many posts, you might want to increase the number displayed per page for what comes next. In order to be able to use the Premium group, your user account must either have permission to Administer Groups or it must be assigned to that group. To adjust this for another user, you can log in as an administrator and after visiting the User Profile, select all the relevant groups in the Groups select box. If you’re going to make the adjustment as an Administrator with permission to Administer Groups, you’ll be allowed to use the groups directly. - Go to Pages and use the access restrictions filter on top of the list, to select one access restriction. Let’s assume that you have one restriction called premium there, select it from the dropdown. - Hit the Filter button. Now you have your list of pages that are restricted by that capability. - Use the checkbox on top of the list to select all. - In the Bulk Actions dropdown select Edit. - Click Apply. - In the Groups section of the bulk actions select to Add restriction. - Choose the Premium group. - Click the Update button to save the modifications. The necessary steps are marked in red on the following screenshot. Now all the pages in the list should carry the group-based access restriction and the previous capability-based restriction. Depending on the number of posts displayed per page, you might need to repeat this for the remaining pages. Do the same thing for all access restriction capabilities, creating new groups when necessary.
https://docs.itthinx.com/document/groups/migration-guide/
2021-06-12T22:52:07
CC-MAIN-2021-25
1623487586465.3
[]
docs.itthinx.com
Choose a Microsoft 365 Subscription Choosing the right Microsoft 365 subscription is key to getting the most out of the service. Here's how to compare the options and choose a plan that's right for your business. Try it! - In a browser, search for Microsoft 365 Business Premium. - Open the Microsoft 365 Business Premium page, and then choose See plans and pricing. Here you can see which subscriptions are tailored to smaller businesses. - Scroll down to view the features that are available with each option. - If you have a larger business or have complex IT needs, scroll down and select Microsoft 365 Enterprise. - Select See products and plans, and review the Enterprise subscriptions and their features. - Once you've decided on a subscription, choose Buy now, and go through the sign-up process. Compare plans * Indicates Microsoft 365 Business Standard has Plan 1 of the functionality and Office 365 Enterprise E3 has Plan 2. ** Available in US, UK, Canada. *** Unlimited archiving when auto-expansion is turned on. To compare Microsoft 365 Business Premium with other products, including other Microsoft 365 plans, see Licensing Microsoft 365 for small and medium-sized businesses.
https://docs.microsoft.com/en-us/microsoft-365/business-video/choose-subscription?redirectSourcePath=%252flt-lt%252foffice%252fpasirinkite-microsoft-prenumerat%2525C4%252585-b9f7c78e-430f-4117-89ec-2eeb1dced2ca&view=o365-worldwide
2021-06-13T00:22:28
CC-MAIN-2021-25
1623487586465.3
[]
docs.microsoft.com
If you are using Safari as your browser and are trying to customize your mail template with an earlier version of WooCommerce Email Customizer, and you are not able to save your changes, please consider upgrading the plugin to the latest version. The bug fixes are included in the recent releases, so once you update to the latest version of the plugin the issue will be resolved. Alternatively, if you wish to stay on the older version, you can try working in another browser.
https://docs.flycart.org/en/articles/1458978-are-you-using-safari-browser
2019-12-05T16:50:03
CC-MAIN-2019-51
1575540481281.1
[]
docs.flycart.org
Released on: Monday, April 22, 2013 - 07:00 Notes Fixes - Improve performance of certain concurrent networking operations in iOS 6 - Correct errors affecting certain uses of UIWebView AJAX activity - Remove dependency on NSJSONSerialization when the app is run on iOS 4 - Prevent a situation where some HTTP redirects were not reported to the application
https://docs.newrelic.com/docs/release-notes/mobile-release-notes/ios-release-notes/ios-1328
2019-12-05T18:44:26
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
Tutorial: Automate tasks to process emails by using Azure Logic Apps, Azure Functions, and Azure Storage Azure Logic Apps helps you automate workflows and integrate data across Azure services, Microsoft services, other software-as-a-service (SaaS) apps, and on-premises systems. This tutorial shows how you can build a logic app that handles incoming emails and any attachments. This logic app analyzes the email content, saves the content to Azure storage, and sends notifications for reviewing that content. In this tutorial, you learn how to: - Set up Azure storage and Storage Explorer for checking saved emails and attachments. - Create an Azure function that removes HTML from emails. This tutorial includes the code that you can use for this function. - Create a blank logic app. - Add a trigger that monitors emails for attachments. - Add a condition that checks whether emails have attachments. - Add an action that calls the Azure function when an email has attachments. - Add an action that creates storage blobs for emails and attachments. - Add an action that sends email notifications. When you're done, your logic app looks like this workflow at a high level: Prerequisites An Azure subscription. If you don't have an Azure subscription, sign up for a free Azure account. An email account from an email provider supported by Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, review the connectors list here. This logic app uses an Office 365 Outlook account. If you use a different email account, the general steps stay the same, but your UI might appear slightly different. Download and install the free Microsoft Azure Storage Explorer. This tool helps you check that your storage container is correctly set up. Sign in to the Azure portal with your Azure account credentials. Set up storage to save attachments You can save incoming emails and attachments as blobs in an Azure storage container. Before you can create a storage container, create a storage account with these settings on the Basics tab in the Azure portal: On the Advanced tab, select this setting: To create your storage account, you can also use Azure PowerShell or Azure CLI. When you're done, select Review + create. After Azure deploys your storage account, find your storage account, and get the storage account's access key: On your storage account menu, under Settings, select Access keys. Copy your storage account name and key1, and save those values somewhere safe. To get your storage account's access key, you can also use Azure PowerShell or Azure CLI. Create a blob storage container for your email attachments. On your storage account menu, select Overview. Under Services, select Containers. After the Containers page opens, on the toolbar, select Container. Under New container, enter attachmentsas your container name. Under Public access level, select Container (anonymous read access for containers and blobs) > OK. When you're done, you can find your storage container in your storage account here in the Azure portal: To create a storage container, you can also use Azure PowerShell or Azure CLI. Next, connect Storage Explorer to your storage account. Set up Storage Explorer Now, connect Storage Explorer to your storage account so you can confirm that your logic app can correctly save attachments as blobs in your storage container. Launch Microsoft Azure Storage Explorer. Storage Explorer prompts you for a connection to your storage account. 
In the Connect to Azure Storage pane, select Use a storage account name and key > Next. Tip If no prompt appears, on the Storage Explorer toolbar, select Add an account. Under Display name, provide a friendly name for your connection. Under Account name, provide your storage account name. Under Account key, provide the access key that you previously saved, and select Next. Confirm your connection information, and then select Connect. Storage Explorer creates the connection, and shows your storage account in the Explorer window under Local & Attached > Storage Accounts. To find your blob storage container, under Storage Accounts, expand your storage account, which is attachmentstorageacct here, and expand Blob Containers where you find the attachments container, for example: Next, create an Azure function that removes HTML from incoming email. Create function to clean HTML Now, use the code snippet provided by these steps to create an Azure function that removes HTML from each incoming email. That way, the email content is cleaner and easier to process. You can then call this function from your logic app. Before you can create a function, create a function app with these settings: If your function app doesn't automatically open after deployment, in the Azure portal search box, find and select Function App. Under Function App, select your function app. Otherwise, Azure automatically opens your function app as shown here: To create a function app, you can also use Azure CLI, or PowerShell and Resource Manager templates. In the Function Apps list, expand your function app, if not already expanded. Under your function app, select Functions. On the functions toolbar, select New function. Under Choose a template below or go to the quickstart, select the HTTP trigger template. Azure creates a function using a language-specific template for an HTTP triggered function. In the New Function pane, under Name, enter RemoveHTMLFunction. Keep Authorization level set to Function, and select Create. After the editor opens, replace the template code with this sample code, which removes the HTML and returns results to the caller: #r "Newtonsoft.Json" using System.Net; using Microsoft.AspNetCore.Mvc; using Microsoft.Extensions.Primitives; using Newtonsoft.Json; using System.Text.RegularExpressions; public static async Task<IActionResult> Run(HttpRequest req, ILogger log) { log.LogInformation("HttpWebhook triggered"); // Parse query parameter string emailBodyContent = await new StreamReader(req.Body).ReadToEndAsync(); // Replace HTML with other characters string updatedBody = Regex.Replace(emailBodyContent, "<.*?>", string.Empty); updatedBody = updatedBody.Replace("\\r\\n", " "); updatedBody = updatedBody.Replace(@" ", " "); // Return cleaned text return (ActionResult)new OkObjectResult(new { updatedBody }); } When you're done, select Save. To test your function, at the editor's right edge, under the arrow (<) icon, select Test. In the Test pane, under Request body, enter this line, and select Run. {"name": "<p><p>Testing my function</br></p></p>"} The Output window shows the function's result: {"updatedBody":"{\"name\": \"Testing my function\"}"} After checking that your function works, create your logic app. Although this tutorial shows how to create a function that removes HTML from emails, Logic Apps also provides an HTML to Text connector. Create your logic app From the Azure home page, in the search box, find and select Logic Apps. On the Logic Apps page, select Add. 
Under Create logic app, provide details about your logic app as shown here. After you're done, select Create. After Azure deploys your app, on the Azure toolbar, select the notifications icon, and select Go to resource. The Logic Apps Designer opens and shows a page with an introduction video and templates for common logic app patterns. Under Templates, select Blank Logic App. Next, add a trigger that listens for incoming emails that have attachments. Every logic app must start with a trigger, which fires when a specific event happens or when new data meets a specific condition. For more information, see Create your first logic app. Monitor incoming email On the designer, in the search box, enter when new email arrives as your filter. Select this trigger for your email provider: When a new email arrives - <your-email-provider> For example: For Azure work or school accounts, select Office 365 Outlook. For personal Microsoft accounts, select Outlook.com. If you're asked for credentials, sign in to your email account so Logic Apps can connect to your email account. Now provide the criteria the trigger uses to filter new email. Specify the settings described below for checking emails. From the Add new parameter list, select Subject Filter. After the Subject Filter box appears in the action, specify the subject as listed here: To hide the trigger's details for now, click inside the trigger's title bar. Save your logic app. On the designer toolbar, select Save. Your logic app is now live but doesn't do anything other than check your emails. Next, add a condition that specifies criteria to continue the workflow. Check for attachments Now add a condition that selects only emails that have attachments. Under the trigger, select New step. Under Choose an action, in the search box, enter condition. Select this action: Condition Rename the condition with a better description. On the condition's title bar, select the ellipses (...) button > Rename. Rename your condition with this description: If email has attachments and key subject phrase Create a condition that checks for emails that have attachments. On the first row under And, click inside the left box. From the dynamic content list that appears, select the Has Attachment property. In the middle box, keep the operator is equal to. In the right box, enter true as the value to compare with the Has Attachment property value from the trigger. If both values are equal, the email has at least one attachment, the condition passes, and the workflow continues. In your underlying logic app definition, which you can view in the code editor window, this condition looks like this example: "Condition": { "actions": { <actions-to-run-when-condition-passes> }, "expression": { "and": [ { "equals": [ "@triggerBody()?['HasAttachment']", "true" ] } ] }, "runAfter": {}, "type": "If" } Save your logic app. On the designer toolbar, select Save. Test your condition Now, test whether the condition works correctly: If your logic app isn't running already, select Run on the designer toolbar. This step manually starts your logic app without having to wait until your specified interval passes. However, nothing happens until the test email arrives in your inbox. Send yourself an email that meets these criteria: Your email's subject has the text that you specified in the trigger's Subject filter: Business Analyst 2 #423501 Your email has one attachment. For now, just create one empty text file and attach that file to your email.
When the email arrives, your logic app checks for attachments and the specified subject text. If the condition passes, the trigger fires and causes the Logic Apps engine to create a logic app instance and start the workflow. To check that the trigger fired and the logic app ran successfully, on the logic app menu, select Overview. If your logic app didn't trigger or run despite a successful trigger, see Troubleshoot your logic app. Next, define the actions to take for the If true branch. To save the email along with any attachments, remove any HTML from the email body, then create blobs in the storage container for the email and attachments. Note Your logic app doesn't have to do anything for the If false branch when an email doesn't have attachments. As a bonus exercise after you finish this tutorial, you can add any appropriate action that you want to take for the If false branch. Call RemoveHTMLFunction This step adds your previously created Azure function to your logic app and passes the email body content from email trigger to your function. On the logic app menu, select Logic App Designer. In the If true branch, select Add an action. In the search box, find "azure functions", and select this action: Choose an Azure function - Azure Functions Select your previously created function app, which is CleanTextFunctionAppin this example: Now select your function: RemoveHTMLFunction Rename your function shape with this description: Call RemoveHTMLFunction to clean email body Now specify the input for your function to process. Under Request Body, enter this text with a trailing space: While you work on this input in the next steps, an error about invalid JSON appears until your input is correctly formatted as JSON. When you previously tested this function, the input specified for this function used JavaScript Object Notation (JSON). So, the request body must also use the same format. Also, when your cursor is inside the Request body box, the dynamic content list appears so you can select property values available from previous actions. From the dynamic content list, under When a new email arrives, select the Body property. After this property, remember to add the closing curly brace: } When you're done, the input to your function looks like this example: Save your logic app. Next, add an action that creates a blob in your storage container so you can save the email body. Create blob for email body In the If true block and under your Azure function, select Add an action. In the search box, enter create blobas your filter, and select this action: Create blob Create a connection to your storage account with these settings as shown and described here. When you're done, select Create. Rename the Create blob action with this description: Create blob for email body In the Create blob action, provide this information, and select these fields to create the blob as shown and described: When you're done, the action looks like this example: Save your logic app. Check attachment handling Now test whether your logic app handles emails the way that you specified: If your logic app isn't running already, select Run on the designer toolbar. Send yourself an email that meets this criteria: Your email's subject has the text that you specified in the trigger's Subject filter: Business Analyst 2 #423501 Your email has at least one attachment. For now, just create one empty text file, and attach that file to your email. 
Your email has some test content in the body, for example: Testing my logic app If your logic app didn't trigger or run despite a successful trigger, see Troubleshoot your logic app. Check that your logic app saved the email to the correct storage container. In Storage Explorer, expand Local & Attached > Storage Accounts > attachmentstorageacct (Key) > Blob Containers > attachments. Check the attachments container for the email. At this point, only the email appears in the container because the logic app doesn't process the attachments yet. When you're done, delete the email in Storage Explorer. Optionally, to test the If false branch, which does nothing at this time, you can send an email that doesn't meet the criteria. Next, add a loop to process all the email attachments. Process attachments To process each attachment in the email, add a For each loop to your logic app's workflow. Under the Create blob for email body shape, select Add an action. Under Choose an action, in the search box, enter for eachas your filter, and select this action: For each Rename your loop with this description: For each email attachment Now specify the data for the loop to process. Click inside the Select an output from previous steps box so that the dynamic content list opens, and then select Attachments. The Attachments field passes in an array that contains all the attachments included with an email. The For each loop repeats actions on each item that's passed in with the array. Save your logic app. Next, add the action that saves each attachment as a blob in your attachments storage container. Create blob for each attachment In the For each email attachment loop, select Add an action so you can specify the task to perform on each found attachment. In the search box, enter create blobas your filter, and then select this action: Create blob Rename the Create blob 2 action with this description: Create blob for each email attachment In the Create blob for each email attachment action, provide this information, and select the properties for each blob you want to create as shown and described: When you're done, the action looks like this example: Save your logic app. Check attachment handling Next, test whether your logic app handles the attachments the way that you specified: If your logic app isn't running already, select Run on the designer toolbar. Send yourself an email that meets this criteria: Your email's subject has the text that you specified in the trigger's Subject filter property: Business Analyst 2 #423501 Your. In Storage Explorer, expand Local & Attached > Storage Accounts > attachmentstorageacct (Key) > Blob Containers > attachments. Check the attachments container for both the email and the attachments. When you're done, delete the email and attachments in Storage Explorer. Next, add an action so that your logic app sends email to review the attachments. Send email notifications In the If true branch, under the For each email attachment loop, select Add an action. In the search box, enter send emailas your filter, and then select the "send email" action for your email provider. To filter the actions list to a specific service, you can select the connector first. For Azure work or school accounts, select Office 365 Outlook. For personal Microsoft accounts, select Outlook.com. If you're asked for credentials, sign in to your email account so that Logic Apps creates a connection to your email account. 
Rename the Send an email action with this description: Send email for review Provide the information for this action and select the fields you want to include in the email as shown and described. To add blank lines in an edit box, press Shift + Enter. If you can't find an expected field in the dynamic content list, select See more next to When a new email arrives. Note If you select a field that contains an array, such as the Content field, which is an array that contains attachments, the designer automatically adds a "For each" loop around the action that references that field. That way, your logic app can perform that action on each array item. To remove the loop, remove the field for the array, move the referencing action to outside the loop, select the ellipses (...) on the loop's title bar, and select Delete. Save your logic app. Now, test your logic app, which now looks like this example: Run your logic app Send yourself an email that meets this criteria: Your email's subject has the text that you specified in the trigger's Subject filter property: Business Analyst 2 #423501 Your email has one or more attachments. You can reuse an empty text file from your previous test. For a more realistic scenario, attach a resume file. The email body has this text, which you can copy and paste: Name: Jamal Hartnett Street address: 12345 Anywhere Road City: Any Town State or Country: Any State Postal code: 00000 Email address: [email protected] Phone number: 000-000-0000 Position: Business Analyst 2 #423501 Technical skills: Dynamics CRM, MySQL, Microsoft SQL Server, JavaScript, Perl, Power BI, Tableau, Microsoft Office: Excel, Visio, Word, PowerPoint, SharePoint, and Outlook Professional skills: Data, process, workflow, statistics, risk analysis, modeling; technical writing, expert communicator and presenter, logical and analytical thinker, team builder, mediator, negotiator, self-starter, self-managing Certifications: Six Sigma Green Belt, Lean Project Management Language skills: English, Mandarin, Spanish Education: Master of Business Administration Run your logic app. If successful, your logic app sends you an email that looks like this example: If you don't get any emails, check your email's junk folder. Your email junk filter might redirect these kinds of mails. Otherwise, if you're unsure that your logic app ran correctly, see Troubleshoot your logic app. Congratulations, you've now created and run a logic app that automates tasks across different Azure services and calls some custom code. Clean up resources When you no longer need this sample, delete the resource group that contains your logic app and related resources. On the main Azure menu, select Resource groups. From the resource groups list, select the resource group for this tutorial. On the Overview pane, select Delete resource group. When the confirmation pane appears, enter the resource group name, and select Delete. Next steps In this tutorial, you created a logic app that processes and stores email attachments by integrating Azure services, such as Azure Storage and Azure Functions. Now, learn more about other connectors that you can use to build logic apps. Feedback
https://docs.microsoft.com/en-us/azure/logic-apps/tutorial-process-email-attachments-workflow
2019-12-05T18:37:41
CC-MAIN-2019-51
1575540481281.1
[array(['media/tutorial-process-email-attachments-workflow/overview.png', 'High-level finished logic app'], dtype=object) array(['media/tutorial-process-email-attachments-workflow/complete.png', 'Finished logic app'], dtype=object) ]
docs.microsoft.com
Note: See individual procedures for exceptions to these requirements.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc755119%28v%3Dws.10%29
2019-12-05T18:37:25
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
The Varnish Cache integration reports data from your Varnish environment to New Relic Infrastructure. This document explains how to install and configure the Varnish Cache integration and describes the data collected. This integration is released as Open Source under the MIT license on GitHub. A change log is also available there for the latest updates. Access to this feature depends on your subscription level. Requires Infrastructure Pro. Features The Varnish Cache on-host integration collects and sends inventory and metrics from your Varnish Cache environment to New Relic Infrastructure so you can monitor its health. Metric data is collected at the instance, lock, memory pool, storage, and backend levels. Compatibility and requirements To use the Varnish Cache integration, ensure your system meets these requirements: - New Relic Infrastructure installed on host. - Linux distribution or Windows OS version compatible with New Relic Infrastructure. - Varnish Cache 1.0 or newer. Install On-host integrations do not automatically update. For best results, you should occasionally update the integration and update the Infrastructure agent. To install the Varnish Cache integration: - Linux installation - Follow the instructions for installing an integration, using the file name nri-varnish. - Via the command line, change directory to the integrations folder: cd /etc/newrelic-infra/integrations.d - Create a copy of the sample configuration file by running: sudo cp varnish-config.yml.sample varnish-config.yml - Edit the varnish-config.ymlconfiguration file using the configuration settings. - Restart the infrastructure agent. - Windows installation Download the nri-varnish.MSI installer image from: - Run the install script. To install from the Windows command prompt, run: msiexec.exe /qn /i PATH\TO\nri-varnish-amd64.msi In the Integrations directory, C:\Program Files\New Relic\newrelic-infra\integrations.d\, create a copy of the sample configuration file by running: cp varnish-config.yml.sample varnish-config.yml - Edit the varnish-config.ymlconfiguration file using the configuration settings. - Restart the infrastructure agent. It is also possible to manually install integrations from a tarball file. For more information, see Install manually from a tarball archive. Configure Use the Varnish Cache integration's varnish-config.yml configuration file to put required login credentials and configure how data is collected. For an example configuration, see the example config file. Commands The varnish-config.yml file provides one command: all_data: collects both inventory and metrics for the Varnish Cache environment. Arguments The varnish-config.yml commands accept the following arguments: params_config_file: The location of the varnish.paramsconfig file. If this argument is omitted, the following locations will be checked: /etc/default/varnish/varnish.params /etc/sysconfig/varnish/varnish.params Note: The location and name of the varnish configuration file may vary. For details, see Different locations of the Varnish configuration file. instance_name: User defined name to identify data from this instance in New Relic. Required. 
Example configuration Example varnish-config.yml file configuration: - Example configuration integration_name: com.newrelic.varnish instances: - name: varnish_all command: all_data arguments: params_config_file: /etc/varnish/varnish.params instance_name: varnish-0.localnet labels: env: production role: varnish Find and use data To find your integration data in Infrastructure, go to infrastructure.newrelic.com > Third-party services and select one of the Varnish Cache integration links. In New Relic Insights, Varnish Cache data is attached to the following Insights event type: For more on how to find and use your data, see Understand integration data. Metric data The Varnish Cache integration collects the following metric data attributes. Each metric name is prefixed with a category indicator and a period, such as bans. or main.. A number of the following metrics are calculated as rates (per second) instead of totals as the metric names might suggest. For more details on which metrics are calculated as rates, refer to the spec.csv file. Varnish sample metrics These attributes can be found by querying the VarnishSample event types in Insights. Varnish lock sample metrics These attributes can be found by querying the VarnishLockSample event types in Insights. Varnish storage sample metrics These attributes can be found by querying the VarnishStorageSample event types in Insights. Varnish mempool sample metrics These attributes can be found by querying the VarnishMempoolSample event types in Insights. Varnish backend sample metrics These attributes can be found by querying the VarnishBackendSample event types in Insights. Inventory data The Varnish Cache integration captures the configuration parameters. It parses the varnish.params configuration file for all parameters that are active. The data is available on the Infrastructure Inventory page, under the config/varnish source. For more about inventory data, see Understand integration data.
https://docs.newrelic.com/docs/integrations/host-integrations/host-integrations-list/varnish-cache-monitoring-integration
2019-12-05T18:18:03
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
About targeting activities
https://docs.adobe.com/content/help/en/campaign-standard/using/managing-processes-and-data/targeting-activities/about-targeting-activities.html
2019-12-05T18:34:06
CC-MAIN-2019-51
1575540481281.1
[array(['/content/dam/help/campaign-standard.en/help/automating/using/assets/wkf_targeting_activities.png', None], dtype=object) ]
docs.adobe.com
ISF JSON ReferenceISF JSON Reference The ISF specification requires that each shader include a JSON blob that includes attributes describing the rendering setup, type of shader (generator, FX or transition), input parameters and other meta-data that host applications may want to make use of. In addition to this reference you may find it useful to download the "Test____.fs" sample filters located here: ISF Test/Tutorial filters These demonstrate the basic set of attributes available and provides examples of each input parameter type. You will probably learn more, faster, from the examples than you'll get by reading this document: each example describes a single aspect of the ISF file format, and they're extremely handy for testing, reference, or as a tutorial. Including JSON in an ISFIncluding JSON in an ISF ISF Editor linked to elsewhere on this page). This JSON dict is referred to as your "top-level dict" throughout the rest of this document. A basic ISF may have a JSON blob that looks something like this: /*{ "DESCRIPTION": "demonstrates the use of float-type inputs", "CREDIT": "by zoidberg", "ISFVSN": "2.0", "VSN": "2.0", ; } ISF AttributesISF Attributes ISFVSNISFVSN If there's a string in the top-level dict stored at the ISFVSN key, this string will describe the version of the ISF specification this shader was written for. This key should be considered mandatory- if it's missing, the assumption is that the shader was written for version 1.0 of the ISF spec (which didn't specify this key). The string is expected to contain one or more integers separated by dots (eg: '2', or '2.1', or '2.1.1'). VSNVSN - If there's a string in the top-level dict stored at the VSNkey, this string will describe the version of this ISF file. This key is completely optional, and its use is up to the host or editor- the goal is to provide a simple path for tracking changes in ISF files. Like the ISFVSNkey, this string is expected to contain one or more integers separated by dots. DESCRIPTIONDESCRIPTION - If there's a string in the top-level dict stored at the DESCRIPTIONkey, this string will be displayed as a description associated with this filter in the host app. the use of this key is optional. CATEGORIESCATEGORIES - The CATEGORIESkey in your top-level dict should store an array of strings. The strings are the category names you want the filter to appear in (assuming the host app displays categories). INPUTSINPUTS - The INPUTSkey of your top-level dict should store an array of dictionaries (each dictionary describes a different input- the inputs should appear in the host app in the order they're listed in this array). For each input dictionary: - The value stored with the key NAMEmust be a string, and it must not contain any whitespaces. This is the name of the input, and will also be the variable name of the input in your shader. - The value stored with the key TYPEmust be a string. This string describes the type of the input, and must be one of the following values: "event", "bool", "long", "float", "point2D", "color", "image", "audio", or "audioFFT". - The input types "audio" and "audioFFT" specify that the input will be sent audio data of some sort from an audio source- "audio" expects to receive a raw audio wave, and "audioFFT" expects the results of an FFT performed on the raw audio wave. This audio data is passed to the shader as an image, so "audio"- and "audioFFT"-type inputs should be treated as if they were images within the actual shader. 
By default, hosts should try to provide this data at a reasonably high precision (32- or 16-bit float GL textures, for example), but if this isn't possible then lower precision is fine. - The images sent to "audio"-type inputs contains one row of image data for each channel of audio data (multiple channels of audio data can be passed in a single image), while each column of the image represents a single sample of the wave, the value of which is centered around 0.5. - The images sent to "audioFFT"-type inputs contains one row of image data for each channel of audio data (multiple channels of audio data can be passed in a single image), while each column of the image represents a single value in the FFT results. - Both "audio"- and "audioFFT"-type inputs allow you to specify the number of samples (the "width" of the images in which the audio data is sent) via the MAXkey (more on this later in the discussion of MAX). - Where appropriate, DEFAULT, MIN, MAX, and IDENTITYmay). - "audio"- and "audioFFT"-type inputs support the use of the MAXkey- but in this context, MAXspecifies the number of samples that the shader wants to receive. This key is optional- if MAXis not defined then the shader will receive audio data with the number of samples that were provided natively. For example, if the MAXof an "audio"-type input is defined as 1, the resulting 1-pixel-wide image is going to accurately convey the "total volume" of the audio wave; if you want a 4-column FFT graph, specify a MAXof 4 on an "audioFFT"-type input, etc. - The value stored with the key LABELmust be a string. This key is optional- the NAMEof an input is the variable name, and as such it can't contain any spaces/etc. The LABELkey provides host sofware with the opportunity to display a more human-readable name. This string is purely for display purposes and isn't used for processing at all. - Other notes: - key stores an array of integer values. This array may have repeats, and the values correspond to the labels. When you choose an item from the pop-up menu, the corresponding value from this array is sent to your shader. - The LABELSkey stores an array of strings. This array may have repeats, and the strings/labels correspond to the array of values. PASSES and TARGETPASSES and TARGET - The PASSESkey should store an array of dictionaries. Each dictionary describes a different rendering pass. This key is optional: you don't need to include it, and if it's not present your effect will be assumed to be single-pass. - The TARGETstring in the pass dict describes the name of the buffer this pass renders to. The ISF host will automatically create a temporary buffer using this name, and you can read the pixels from this temporary buffer back in your shader in a subsequent rendering pass using this name. By default, these temporary buffers are deleted (or returned to a pool) after the ISF file has finished rendering a frame of output- they do not persist from one frame to another. No particular requirements are made for the default texture format- it's assumed that the host will use a common texture format for images of reasonable visual quality. - If the pass dict has a positive value stored at the PERSISTENTkey, it indicates that the target buffer will be persistent- that it will be saved across frames, and stay with your effect until its deletion. If you ask the filter to render a frame at a different resolution, persistent buffers are resized to accommodate. 
Persistent buffers are useful for passing data from one frame to the next- for an image accumulator, or motion blur, for example. This key is optional- if it isn't present (or contains a 0 or false value), the target buffer isn't persistent. - If the pass dict has a positive value stored at the FLOATkey, it indicates that the target buffer created by the host will have 32bit float per channel precision. Float buffers are proportionally slower to work with, but if you need precision- for image accumulators or visual persistence projects, for example- then you should use this key. Float-precision buffers can also be used to store variables or values between passes or between frames- each pixel can store four 32-bit floats, so you can render a low-res pass to a float buffer to store values, and then read them back in subsequent rendering passes. This key is optional- if it isn't present (or contains a 0 or false value), the target buffer will be of normal precision. - If the pass dictionary has a value for the keys WIDTHor HEIGHT(these keys are optional), that value is expected to be a string with an equation describing the width/height of the buffer. This equation may reference variables: the width and height of the image requested from this filter are passed to the equation as $WIDTHand $HEIGHT, and the value of any other inputs declared in INPUTScan also be passed to this equation (for example, the value from the float input "blurAmount" would be represented. IMPORTED imagesIMPORTED images - The IMPORTEDkey describes buffers that will be created for image files that you want ISF to automatically import. This key is optional: you don't need to include it, and if it's not present your ISF file just won't import any external images. The item stored at this key should be a dictionary. - Each key-value pair in the IMPORTEDdictionary describes a single image file to import. The key for each item in the IMPORTEDdictionary is the name of the buffer as it will be used in your ISF file, and the value for each item in the IMPORTEDdictionary is another dictionary describing the file to be imported. - The dictionary describing the image to import must have a PATHkey, and the object stored at that key must be a string. This string should describewould be "../asdf.jpg", etc. ISF ConventionsISF Conventions Within ISF there are three main usages for compositions: generators, filters and transitions. Though not explicitly an attribute of the JSON blob itself, the usage can be specified by including for specific elements in the INPUTS array. When the ISF is loaded by the host application, instead of the usual matching interface controls, these elements may be connected to special parts of the software rendering pipeline. ISF FX: inputImageISF FX: inputImage ISF shaders that are to be used as image filters are expected to pass the image to be filtered using the "inputImage" variable name. This input needs to be declared like any other image input, and host developers can assume that any ISF shader specifying an "image"-type input named "inputImage" can be operated as an image filter. ISF Transitions: startImage, endImage and progressISF Transitions: startImage, endImage and progress ISF shaders that are to be used as transitions require three inputs: two image inputs ("startImage" and "endImage"), and a normalized float input ("progress") used to indicate the progress of the transition. 
Like image filters, all of these inputs need to be declared as you would declare any other input, and any ISF that implements "startImage", "endImage", and "progress" can be assumed to operate as a transition. ISF Generators ISF files that are neither filters nor transitions should be considered to be generators.
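To tie the keys above together, here is a minimal sketch of an ISF filter that declares an "inputImage" input and a single persistent, float-precision target buffer. The effect itself (a simple frame echo) and every name other than the reserved keys are illustrative assumptions, not part of the specification.

/*{
    "DESCRIPTION": "Minimal frame-echo sketch (illustrative only)",
    "ISFVSN": "2",
    "INPUTS": [
        { "NAME": "inputImage", "TYPE": "image" },
        { "NAME": "decay", "TYPE": "float", "DEFAULT": 0.9, "MIN": 0.0, "MAX": 1.0 }
    ],
    "PASSES": [
        { "TARGET": "feedbackBuffer", "PERSISTENT": true, "FLOAT": true }
    ]
}*/

void main() {
    // Blend the current frame with the persistent buffer kept from the previous frame.
    vec4 src  = IMG_THIS_NORM_PIXEL(inputImage);
    vec4 prev = IMG_THIS_NORM_PIXEL(feedbackBuffer);
    gl_FragColor = max(src, prev * decay);
}

Because the only pass targets a persistent buffer, the result of each frame is kept around and read back on the next frame, which is the pattern the PERSISTENT and FLOAT keys are intended for.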
https://docs.isf.video/ref_json.html
2019-12-05T16:58:12
CC-MAIN-2019-51
1575540481281.1
[]
docs.isf.video
Setup a Multilingual Site/Installing New Language Step 1 - Installing a new language Option 1: Setup a Multilingual site on an existing site Installing a new language involves two steps: - installing the French language package - telling Joomla! we want to use it as a content language. Installing a new language package Let's install the French language package. - Go to Extensions → Language(s) - Click the button at the top left Install Language. A list of available translations is displayed. You can easily find the desired language by using the Search function. In this field, enter French. Click the button Install on the left side of the French language. Then, a message is displayed: Installation of the language was successful. Mission accomplished! In the Languages screen (accessed through Extensions → Language(s)), you can now see that the French (fr-FR) language is now available. Option 2: Setup a multilingual site during a new installation Install Languages Before you complete your installation by deleting the Installation Folder, click on → Extra steps: Install languages package. This will continue the installation of Joomla! by taking you to a new installation page. A list of language packs is now displayed. Check the language or language packs you wish to install, in our case the French package. A progress bar is displayed while the language pack/packs are downloaded. Choose Default Language When the download is complete you can choose the default language for the Site and the Administrator interface. - Make your choices for default languages and activate the multilingual features of Joomla. Finalise You will now be presented with a very similar Congratulations! Joomla! is now installed. screen. The difference will be a notation of the default Administrator and Site language settings. Now you can click on the button Remove installation folder.
https://docs.joomla.org/J3.x:Setup_a_Multilingual_Site/Installing_New_Language
2019-12-05T17:44:53
CC-MAIN-2019-51
1575540481281.1
[]
docs.joomla.org
Download URLs Download the appropriate release for your New Relic .NET agent: Fixes - [.NET Core] Fixes a problem that could cause some .NET Core 3.0 applications, configured with an invalid license key, to hang. Upgrading - Follow standard procedures to update the .NET agent. - If you are upgrading from a particularly old agent, review the list of major changes and procedures to upgrade legacy .NET agents.
https://docs.newrelic.com/docs/release-notes/agent-release-notes/net-release-notes
2019-12-05T18:39:19
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
The Undo class lets you register undo operations on specific objects you are about to perform changes on. The Undo system stores delta changes in the undo stack. Undo operations are automatically combined together based on events, e.g. mouse down events will split undo groups. Grouped undo operations will appear and work as a single undo. To control grouping manually use Undo.IncrementCurrentGroup. By default, the name shown in the UI will be selected from the actions belonging to the group using a hardcoded ordering of the different kinds of actions. To manually set the name, use Undo.SetCurrentGroupName. Undo operations store either per property or per object state. This way they scale well with any Scene size. The most important operations are outlined below: Modifying a single property: Undo.RecordObject (myGameObject.transform, "Zero Transform Position"); myGameObject.transform.position = Vector3.zero; Adding a component: Undo.AddComponent<Rigidbody>(myGameObject); Creating a new game object: var go = new GameObject(); Undo.RegisterCreatedObjectUndo (go, "Created go"); Destroying a game object or component: Undo.DestroyObjectImmediate (myGameObject); Changing transform parenting: Undo.SetTransformParent (myGameObject.transform, newTransformParent, "Set new parent");
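Since Undo.IncrementCurrentGroup and Undo.SetCurrentGroupName are mentioned above without an example, here is a small illustrative editor-script fragment; the grouping APIs and Selection.transforms are real Unity editor calls, but the surrounding scenario is an assumption:

// Group several recorded changes so they revert as a single undo step.
Undo.IncrementCurrentGroup();
Undo.SetCurrentGroupName("Reset Selected Transforms");
foreach (var t in Selection.transforms)
{
    Undo.RecordObject(t, "Reset Transform");
    t.localPosition = Vector3.zero;
}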
https://docs.unity3d.com/ScriptReference/Undo.html
2019-12-05T18:38:37
CC-MAIN-2019-51
1575540481281.1
[]
docs.unity3d.com
The port that a service monitor requires to verify an agent's service status (e.g. using port 25 to access an agent system's email service) must be open for outgoing connections from the Monitoring Station and incoming connections on the agent system(s) that will be receiving these connections.
http://docs.uptimesoftware.com/pages/viewpage.action?pageId=7802113&navigatingVersions=true
2019-12-05T18:31:45
CC-MAIN-2019-51
1575540481281.1
[]
docs.uptimesoftware.com
Are you looking to provide a discount like the scenarios below ? - Buy 3 for $10 - Buy 6 for $20 In short, this actually means you wanted to offer - 3 quantities of the same or different products for $10 - 6 quantities of the same or different products for $20 The interesting part is that this set discount can be applied to - All Products in Store - Specific Products in Store - Specific Categories Note : This feature is available from version 1.8.0. If you are using any previous version, kindly update the plugin to the latest version. Click here to know how to update the plugin to the latest version. And you will need the PRO version. Now, let's see how to provide a set discount for all products in store like - Buy 3 for $10 - Buy 6 for $20 Navigate to WooCommerce --> Woo Discount Rules. Click on Price Discount Rules and hit add new rule. General : You can provide the Rule name and choose the method as "Quantity based by product/category and BOGO Deals" Conditions: You can set the Apply to --> "All Products". Discount Tab : Choose the Adjustment type as "Set discount" and set the value. As per my example, 3 quantities should be for $10. Here is a screenshot of the cart page : What happens when the cart contains 4 quantities ? Simple, 3 products will have the set discount and the 4th quantity will be for full price. What if you want to provide this discount in different sets, like 3 for $10, 6 for $20 and so on? You can just add the ranges in the Discount tab : Frequently Asked Questions : 1) I want this set discount to apply for specific products/specific categories. What should I do ? Choose apply to --> Specific Products from the Condition Tab.
https://docs.flycart.org/en/articles/3197678-buy-3-for-10-set-discount
2019-12-05T16:50:10
CC-MAIN-2019-51
1575540481281.1
[array(['https://downloads.intercomcdn.com/i/o/75121284/c502d4b12850e76bf8dea55d/screenshot-demo.flycart.org-2018.09.06-13-11-48.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/154226507/63330b1b494394906fb84172/screenshot-demo.flycart.org-2019.10.08-19_09_33.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/152719323/66b38a00a3c56271e106896d/screenshot-localhost-2019.10.02-12_06_19+%281%29.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/154225010/7451e02a57c7d8638a6ee873/screenshot-demo.flycart.org-2019.10.08-19_05_37.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/154225668/97898360b99b8ec10fd5da3e/screenshot-demo.flycart.org-2019.10.08-19_07_10.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/154225925/5b37da8a84f1918d840ec82e/screenshot-demo.flycart.org-2019.10.08-19_08_06.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/154226242/adaf2729db0fa6184abe2a93/screenshot-demo.flycart.org-2019.10.08-19_09_00.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/152720501/96ac596746c6093c2a8e269b/screenshot-localhost-2019.10.02-12_15_03.png', None], dtype=object) ]
docs.flycart.org
In addition to standard Clipboard support, Hex Editor Neo provides advanced copy/export feature. This feature allows you to convert selected data (multiple selection is fully supported) into one of supported format and either place it into the Clipboard or write to a file. All currently supported formats generate textual data that can directly be pasted into or opened by nearly every application. Three formats, Raw Text, Formatted Data and Encoded Data are provided, each of which offer a rich set of configurable options. This tool window is used to configure the feature and start data conversion. A window is divided into two parts: at the top, two panes, “Copy to Clipboard” and "Export" are displayed. Only one pane may be opened at a given time. The active pane determines the action to be held by the Hex Editor Neo: if the “Copy to Clipboard” pane is opened, data will be placed into the Clipboard, if "Export" pane is opened, data will be exported to a given file. “Copy to Clipboard” pane contains a single configurable option: a Merge switch. It becomes active when there is a multiple selection and allows you to merge all selected blocks during data conversion. “Export” pane allows you to specify the full path to the file you want the data to be written to. The Append switch allows you to append new data to an existing file and Merge switch, as already described, allows you to merge all selected blocks during data conversion. Second part of this tool window contains one pane for each supported data format. Only one pane may be opened at a given time. Opened pane determines the format to be used for data conversion. Subsequent sections describe each format in greater detail.
https://docs.hhdsoftware.com/hex/definitive-guide/advanced-copy-&-export/overview.html
2019-12-05T17:02:46
CC-MAIN-2019-51
1575540481281.1
[]
docs.hhdsoftware.com
There are five categories of trouble that can occur when building a custom kernel. If the config(8) command fails when you give it your kernel description, you have probably made a simple error somewhere. Fortunately, config(8) prints the number of the line it had trouble with, so you can jump straight to it in your editor (in vi, type the line number followed by G in command mode). Make sure the keyword is typed correctly, by comparing it to the GENERIC kernel or another reference. If the make command fails, it usually signals an error in your kernel description, but not severe enough for config(8) to catch it. Again, look over your configuration, and if you still cannot resolve the problem, send mail to the FreeBSD general questions mailing list with your kernel configuration, and it should be diagnosed very quickly. If the kernel compiled fine, but failed to install (the make install or make installkernel command failed), check whether your system is running at securelevel 1 or higher; the installation needs to clear the immutable flag on the existing /kernel and set it on the new one, which is not permitted at securelevel 1 or higher. In FreeBSD 5.X, kernels are not installed with the system immutable flag, so this is unlikely to be the source of the problem you are experiencing. If you have installed a different version of the kernel from the one that the system utilities have been built with, for example, a 4.X kernel on a 3.X system, many system-status commands like ps(1) and vmstat(8) will not work any more. You must recompile the libkvm library as well as these utilities. This is one reason it is not normally a good idea to use a different version of the kernel from the rest of the operating system. This, and other documents, can be downloaded from ftp://ftp.FreeBSD.org/pub/FreeBSD/doc/. For questions about FreeBSD, read the documentation before contacting <questions@FreeBSD.org>. For questions about this documentation, e-mail <doc@FreeBSD.org>.
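For reference, a typical FreeBSD 4.X-era rebuild and install sequence looks roughly like the following; the configuration name MYKERNEL is an assumption, and paths differ on non-i386 platforms:

# cd /usr/src/sys/i386/conf
# config MYKERNEL
# cd ../../compile/MYKERNEL
# make depend
# make
# make install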
https://docs.huihoo.com/freebsd/handbook/kernelconfig-trouble.html
2019-12-05T17:53:05
CC-MAIN-2019-51
1575540481281.1
[]
docs.huihoo.com
This integration is available from the Pro plan and above.. Setting up To set up the integration for your project: - Navigate to the Project settings > Integrations - Click Connect next to the required integration - Authorize via OAuth2 Once you authorize, the list of sections, categories and articles will appear. Select and link required articles for further localization. In the case that the imported languages are not yet added into the Lokalise project, select the "Create missing languages" checkbox. Missing languages will be automatically added into your Lokalise project. By clicking the Link items button, the selected items will be linked and a confirmation message will appear with: - The number of linked items - How many keys were created in the Lokalise project - Which languages have been added and how many keys were updated in a particular language Linked items can be manually imported to/exported from Zendesk to Lokalise and vice versa. You can sort linked items using the Content type dropdown menu. If required, you can link more items or unlink the existing ones. Let's check how the inserted keys look in Lokalise. Two keys were created: Title and Body. I have added the "Welcome to your Help Center!" article from Zendesk, so the appropriate tags have been linked to the inserted keys. Learn more about tags... Once the necessary data is imported, you can add the required languages and perform translations. When the translations are done, select the items you want to export to Zendesk and click the Export to Zendesk Guide button. The selected languages will be updated in Zendesk. If any changes were made in Zendesk, you can update the existing Lokalise keys by importing items from Zendesk. Setting up Zendesk Help Center 1. Enable localizations First, you must enable the languages to which you want your help center translated, under Settings > Account > Localization in your Zendesk account. 2. Add languages to help center. As you've enabled the required localizations, you need to add them to your help center. Navigate to help center settings > Guide settings and select the languages once again.
https://docs.lokalise.com/en/articles/1525939-zendesk-help-center
2019-12-05T17:34:50
CC-MAIN-2019-51
1575540481281.1
[array(['https://downloads.intercomcdn.com/i/o/164228506/ef411b442bfa91125d6240ab/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/164232058/62a9c88a0600e16bc747dcb2/Zendesk_01.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/133881838/9ce4ac131a750dc016bccc57/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/164234560/ce1ff7eacadd847c06c5200a/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/164234911/e8b7b7216fdbc51666aef6ad/image.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/164243269/210b9ff9c69dedc8c1beb460/Zendesk_02.gif', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/46540343/5ef8b01ec1b0a032ad5893c5/Screen+Shot+2018-01-25+at+11.42.08.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/46542251/0c97a3a35ed7fefc0557d7c4/Screen+Shot+2018-01-25+at+11.52.01.png', None], dtype=object) ]
docs.lokalise.com
Problem You're a user with access to multiple New Relic accounts and, in New Relic One, you don’t see data or features that you've previously had access to. If the data you're looking for is from a recent install/enable process and you've never seen it before in New Relic, see Not seeing data. Solution There are several reasons why you may not be able to view data in New Relic One that you've previously seen: - New Relic One cross-account security. This is the most probable reason. For more about these features, see Cross-account security. - Roles and permissions. It's possible someone on your account has changed your role or permissions. The roles assigned to you will affect what you can see. For example, some features are only available for Owners and Admins. Potential solutions to gain access to other accounts or features: - Ask your organization’s New Relic administrator to help you get access to other accounts, or to change your roles/permissions. - If you cannot figure out a solution, contact your New Relic account representative for help.
https://docs.newrelic.com/docs/new-relic-one/use-new-relic-one/troubleshooting/troubleshooting-missing-or-obfuscated-data-new-relic-one
2019-12-05T18:45:32
CC-MAIN-2019-51
1575540481281.1
[]
docs.newrelic.com
vmod_utils is a VMOD for useful functions that don't require their own VMOD. OBJECT dyn_probe([STRING url], [STRING request], [BOOL tcponly], [INT expected_response], [DURATION timeout], [DURATION interval], [INT initial], [INT window], [INT threshold]) Description Build an object that is able to output a probe. The argument list is the same as for the static VCL declaration of a probe, with the same restrictions. This allows you to generate probes in vcl_init, notably using dynamic values. Dynamic probes are used by dynamic backends to provide the flexibility that these backends require. Return Value An object with the .probe() method. PROBE .probe() Return Value The probe specified during the creation of the object. VCL example sub vcl_init { # get the probe URL from an env variable new dyn_probe = utils.dyn_probe(std.getenv("PROBE_URL")); # create a dynamic backend with above probe new goto_dir = goto.dns_director("foo.example.com", probe= dyn_probe.probe()); } STRING newline() Description Return a string consisting of a newline escape sequence, \n. Return Value The newline escape sequence, \n. STRING time_format(STRING format, BOOL local_time, [TIME time]) Description Format the time according to format. This is an interface for strftime. Return Value The formatted time as a string. format The format that the time should be in. For more information about possible options see the man page for strftime. local_time Should the time be in the local time zone or GMT. Defaults to GMT. time An optional parameter of a time to format. Defaults to the current time. VOID fast_304() Description Perform a fast 304 cache insert. New or changed headers will not be updated into cache. The object currently in cache will simply have its TTL extended. This reduces the cache read/write overhead of a 304 response to zero. Can only be used in vcl_backend_response. If used on a non 304 response, it is ignored. Return Value None STRING vcl_name() Return Value The name of the current VCL.
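As a further illustration, here is a small VCL sketch using time_format and fast_304; the import line, the header name, and the format string are assumptions for the example rather than part of the VMOD documentation:

import utils;

sub vcl_deliver {
    # Stamp responses with the delivery time, formatted in local time.
    set resp.http.X-Delivered-At = utils.time_format("%Y-%m-%dT%H:%M:%S", true);
}

sub vcl_backend_response {
    # On a backend 304, just extend the TTL of the cached object without rewriting it.
    if (beresp.status == 304) {
        utils.fast_304();
    }
}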
https://docs.varnish-software.com/varnish-cache-plus/vmods/utils/
2019-12-05T18:39:00
CC-MAIN-2019-51
1575540481281.1
[]
docs.varnish-software.com
Flipping and Easy Flipping Toolbars Toon Boom Harmony allows you to rapidly flip through drawings in the Drawing view just as you do with paper drawings. You can flip through the key, breakdown or in-between drawings individually, or view a combination.
https://docs.toonboom.com/help/harmony-11/sketch-standalone/Content/_CORE/_Workflow/017_Paperless_Animation/017_H3_Flipping.html
2017-10-17T02:06:28
CC-MAIN-2017-43
1508187820556.7
[array(['../../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stage.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/draw.png', 'Toon Boom Harmony 11 Draw Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/sketch.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/controlcenter.png', 'Installation and Control Center Online Documentation Installation and Control Center Online Documentation'], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/scan.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePaint.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stagePlay.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../../Resources/Images/HAR/_Skins/stageXsheet.png', None], dtype=object) array(['../../../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Google Cloud Integration Available under the Integration Family: generic Google Cloud Integration is used to connect the Shippable DevOps Assembly Lines platform to Google Cloud and manage entities and services provided by Google Cloud. You can create this from the integrations page by following the instructions here: Adding an account integration. This is the information you would require to create this integration: - Name -- friendly name for the integration - Project Name -- Project Name from Google Developers Console - Service Account Email -- Email for the service account - Credential File -- JSON Security Key for Google Cloud Resources that use this Integration Resources are the building blocks of assembly lines and some types of resources refer to integrations by their names. The following resource types can be created with a Google Cloud integration.
http://docs.shippable.com/platform/integration/gce/
2017-10-17T01:58:08
CC-MAIN-2017-43
1508187820556.7
[]
docs.shippable.com
Mule 1.4.3 Release Notes The Mule Team is pleased to announce the release of version 1.4.3. Please note, this is a maintenance release addressing an important JMS problem as outlined in MULE-2324. We felt strongly about providing an immediate release because many of our users rely on the JMS transport. Thanks a lot to the community for providing us with valuable feedback and allowing us to respond quickly.
https://docs.mulesoft.com/release-notes/mule-1.4.3-release-notes
2017-10-17T01:46:05
CC-MAIN-2017-43
1508187820556.7
[]
docs.mulesoft.com
A modifier is an option for further describing an item and can be associated with a cost. Modifiers need to be created within a Modifier group. Example A restaurant that makes burritos might create a modifier group called “Toppings” with modifiers: “Extra Cheese”, “No Cheese”, “Sour Cream” A cafe that offers drinks might create a modifier group called “Milk” with modifiers: “0% Milk”, “2% Milk”, “Whole Milk”, “Half and Half” Smart Online Order imports your existing modifiers and modifier groups from your Clover inventory and places it on your website. It is important to not just have modifiers that customers can understand but to also set limits so customers can’t order more than a certain number of toppings or options. To learn more about modifiers click here
http://docs.smartonlineorder.com/docs/online-orders/modifier-groups/
2017-10-17T01:46:33
CC-MAIN-2017-43
1508187820556.7
[]
docs.smartonlineorder.com
Microsoft Intune is now in the Azure portal, meaning that the workflows and functionality you are used to are now different. The new portal offers you new and updated functionality in the Azure portal where you can manage your organization's mobile devices, PCs, and apps. - Where did my features go in Azure? is a reference to show you the specific workflows and UIs that have changed with the move to Azure. - Intune classic groups in the Azure portal explains the implications of the shift to Azure Active Directory security groups for group management. You can find information about the new portal in this library, and it is continually updated. If you have suggestions you'd like to see, leave feedback in the topic comments. We'd love to hear from you. Highlights of the new experience Important Don’t see the new portal yet? Existing tenants are being migrated to the new experience. A notification is shown in the Office Message Center before your tenant migrates. Intune accounts created before January 2017 require a one-time migration before Apple Enrollment workflows are available in Azure. The schedule for migration has not been announced yet. If your existing account cannot access the Azure portal, we recommend creating a trial account. Review the list of potential blockers Before you start To use Intune in the Azure portal, you must have an Intune admin and tenant account. Sign up for an account if you don't already have one. Supported web browsers for the Azure portal The Azure portal runs on most modern PCs, Macs, and tablets. Mobile phones are not supported. Currently, the following browsers are supported: - Microsoft Edge (latest version) - Microsoft Internet Explorer 11 - Safari (latest version, Mac only) - Chrome (latest version) - Firefox (latest version) Check the Azure portal for the latest information about supported browsers. What's in this library? The documentation reflects the layout of the Azure portal to make it easier to find the information you need. Introduction and get started This section contains introductory information that helps you get started using Intune. Plan and design Information to help you plan and design your Intune environment. Device enrollment How to get your devices managed by Intune. Device compliance Define a compliance level for your devices, then report any devices that are not compliant. Device configuration Understand the profiles you can use to configure settings and features on devices you manage. Devices Get to know the devices you manage with inventory and reports. Mobile apps How to publish, manage, configure, and protect apps. Conditional access Restrict access to Exchange services depending on conditions you specify. On-premises access Configure access to Exchange ActiveSync, and Exchange on-premises Users Learn about the users of devices you manage and sort resources into groups. Groups Learn about how you can use Azure Active Directory groups with Intune Intune roles Control who can perform various Intune actions, and who those actions apply to. You can either use the built-in roles that cover some common Intune scenarios, or you can create your own roles. Software updates Learn about how to configure software updates for Windows 10 devices. What's new? Find out what's new in Intune.
https://docs.microsoft.com/en-au/intune/what-is-intune
2017-10-17T02:02:53
CC-MAIN-2017-43
1508187820556.7
[array(['media/azure-portal-workloads.png', 'Azure portal workloads'], dtype=object) ]
docs.microsoft.com
The Excel destination loads data into worksheets or ranges in Microsoft Excel workbooks. Access Modes The Excel destination provides three different data access modes for loading data: A table or view. A table or view specified in a variable. The results of an SQL statement. The query can be a parameterized query. Important In Excel, a worksheet or range is the equivalent of a table or view. The lists of available tables in the Excel Source and Destination editors display only existing worksheets (identified by the $ sign appended to the worksheet name, such as Sheet1$) and named ranges (identified by the absence of the $ sign, such as MyRange). Usage Considerations. For information on how to avoid including the single quote, see this blog post, Single quote is appended to all strings when data is transformed to excel when using Excel destination data flow component in SSIS package, on msdn.com.. Configuration of the Excel Destination The Excel destination uses an Excel connection manager to connect to a data source, and the connection manager specifies the workbook file to use. For more information, see Excel Connection Manager. The Excel destination has one regular input and one error output. You can set properties through SSIS Designer or programmatically. The Advanced Editor dialog box reflects all. Related Tasks Connect to an Excel Workbook Loop through Excel Files and Tables by Using a Foreach Loop Container Set the Properties of a Data Flow Component Related Content. Excel Destination Editor (Connection Manager Page) Use the Connection Manager page of the Excel Destination Editor dialog box to specify data source information, and to preview the results. The Excel destination loads data into a worksheet or a named range in a Microsoft Excel workbook. Note The CommandTimeout property of the Excel destination is not available in the Excel Destination Editor, but can be set by using the Advanced Editor. In addition, certain Fast Load options are available only in the Advanced Editor. For more information on these properties, see the Excel Destination section of Excel Custom Properties. Static Options Excel connection manager Select an existing Excel connection manager from the list, or create a new connection by clicking New. New Create a new connection manager by using the Excel Connection Manager dialog box. Data access mode Specify the method for selecting data from the source. Name of the Excel sheet Select the excel destination from the drop-down list. If the list is empty, click New. New Click New to launch the Create Table dialog box. When you click OK, the dialog box creates the excel file that the Excel Connection Manager points to. View Existing Data Preview results by using the Preview Query Results dialog box. Preview can display up to 200 rows. Warning If the Excel connection manager you selected points to an excel file that does not exist, you will see an error message when you click this button. Data Access Mode Dynamic Options Data access mode = Table or view Name of the Excel sheet Select the name of the worksheet or named range from a list of those available in the data source. Data access mode = Table name or view name variable Variable name Select the variable that contains the name of the worksheet or named range. Data access mode = SQL command SQL command text Enter the text of. Parse Query Verify the syntax of the query text. 
Excel Destination Editor (Mappings Page) Use the Mappings page of the Excel Destination Editor dialog box to map input columns to destination columns. The page lists each available destination column, whether it is mapped or not. Excel Destination Editor (Error Output Page) Use the Advanced page of the Excel Destination Editor dialog box to specify options for error handling. Options Input or Output View the name of the data source. Column View the external (source) columns that you selected in the Connection Manager node of the Excel Source Editor dialog box. See Also Excel Source Integration Services (SSIS) Variables Data Flow Working with Excel Files with the Script Task
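For the SQL-command access mode described above, worksheets and named ranges are addressed the same way the editors list them; a hypothetical command using the names mentioned earlier in this article would look like:

-- Worksheet names carry a trailing $ and are bracketed:
SELECT * FROM [Sheet1$]
-- Named ranges are referenced without the $:
SELECT * FROM MyRange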
https://docs.microsoft.com/en-us/sql/integration-services/data-flow/excel-destination
2017-10-17T03:28:42
CC-MAIN-2017-43
1508187820556.7
[]
docs.microsoft.com
Standard messages and nonstandard messages The Push Service implements both the messages that are required by the Push Access Protocol standard and some additional messages and elements that provide subscription information. You can use the nonstandard messages to query the Push Service for subscription status. The nonstandard APIs and messages are available to users of both the Push Essentials and Push Plus levels of service.
http://docs.blackberry.com/es-es/developers/deliverables/51382/standard_and_nonstandard_messages_1303077_11.jsp
2015-07-28T05:51:33
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
Virtual Environments A Virtual Environment is a tool to keep the dependencies required by different projects in separate places, by creating isolated Python environments for them. For example, you can work on a project which requires Django 1.3 while also maintaining a project which requires Django 1.0. $ virtualenv venv virtualenv venv will create a folder in the current directory which will contain the Python executable files, and a copy of the pip library which you can use to install other packages. The name of the virtual environment (in this case, it was venv) can be anything; omitting the name will place the files in the current directory instead. To begin using the virtual environment, it needs to be activated: $ source venv/bin/activate Install packages as usual, for example: $ pip install requests - If you are done working in the virtual environment for the moment, you can deactivate it: $ deactivate This puts you back to the system's default Python interpreter with all its installed libraries. To delete a virtual environment, just delete its folder. (In this case, it would be rm -rf venv.) After a while, though, you might end up with a lot of virtual environments littered across your system, and it's possible you'll forget their names or where they were placed. Other Notes Running virtualenv with the option --no-site-packages will not include the packages that are installed globally. This can be useful for keeping the package list clean in case it needs to be accessed later. [This is the default behavior for virtualenv 1.7 and later.] In order to keep your environment consistent, it's a good idea to "freeze" the current state of the environment packages. To do this, run $ pip freeze > requirements.txt This will create a requirements.txt file, which contains a simple list of all the packages in the current environment, and their respective versions. Later it will be easier for a different developer (or you, if you need to re-create the environment) to install the same packages using the same versions: $ pip install -r requirements.txt This can help ensure consistency across installations, across deployments, and across developers. Lastly, remember to exclude the virtual environment folder from source control by adding it to the ignore list. virtualenvwrapper
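virtualenvwrapper provides a more convenient workflow around the same ideas. As a rough sketch (the WORKON_HOME location and the path to virtualenvwrapper.sh are assumptions that vary by system), its basic commands look like:

$ pip install virtualenvwrapper
$ export WORKON_HOME=~/Envs
$ source /usr/local/bin/virtualenvwrapper.sh
$ mkvirtualenv venv      # create and activate a new environment
$ workon venv            # activate an existing environment
$ deactivate             # leave the environment
$ rmvirtualenv venv      # delete the environment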
https://python-guide.readthedocs.org/en/latest/dev/virtualenvs/
2015-07-28T05:47:00
CC-MAIN-2015-32
1438042981576.7
[]
python-guide.readthedocs.org
User Guide Manage data synchronization conflicts You can change how conflicts that occur during organizer data synchronization are resolved by turning off wireless data synchronization, changing conflict resolution options, and synchronizing organizer data using the BlackBerry® Desktop Software. For more information about managing conflicts that occur during organizer data synchronization, see the Help in the BlackBerry Desktop Software.
http://docs.blackberry.com/en/smartphone_users/deliverables/36022/Manage_data_sync_conflicts_70_1679310_11.jsp
2015-07-28T06:10:24
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
Installation and Configuration Guide Install a standby BlackBerry Enterprise Server During the installation process, you must restart the computer. Before you begin: - Log in to the computer using the Windows account that you used to install the primary BlackBerry Enterprise Server. This account runs the services for the standby BlackBerry Enterprise Server. - In the Application extensibility settings dialog box, consider the following information: - You can type an FQDN to create a new BlackBerry MDS Integration Service pool or add the BlackBerry MDS Integration Service instance to a pool that you created during a previous installation process. - To configure a hardware load balancer for the BlackBerry MDS Integration Service pool, you can type an FQDN that corresponds to a DNS record in the DNS server that maps the FQDN to the IP address of the virtual server that you configured on the hardware load balancer. - The setup application creates the BlackBerry MDS Integration Service database on the database server that hosts the BlackBerry Configuration Database. - If you add the BlackBerry MDS Integration Service instance to an existing pool, the setup application selects the existing BlackBerry MDS Integration Service database and existing administrator account and publisher account.
http://docs.blackberry.com/en/admin/deliverables/20913/Install_a_standby_BES_858872_11.jsp
2015-07-28T06:11:06
CC-MAIN-2015-32
1438042981576.7
[]
docs.blackberry.com
AWS Chalice Integration (EXPERIMENTAL)¶ How it works¶ Using aioboto3.experimental.async_chalice.AsyncChalice as the main app entrypoint for a chalice app adds some shims in so that you can use async def functions with HTTP routes normally. Additionally a app.aioboto3 contains an aioboto3 Session object which can be used to get s3 clients etc… Passing in a session to AsyncChalice overrides the default empty session. Chalice has some interesting quirks to how it works, most notably the eventloop can disappear between invocations so storing references to anything which could store the current event loop is not recommended. Because of this, caching aioboto3 clients and resources is not a good idea and realistically because this code is designed to be ran in a lambda, said caching buys you little. The Chalice integration is very experimental, until someone runs it for a while and has faith in it, I would not recommend using this for anything critical. Example¶ from aioboto3.experimental.async_chalice import AsyncChalice app = AsyncChalice(app_name='testclient') @app.route('/hello/{name}') async def hello(name): return {'hello': name} @app.route('/list_buckets') async def get_list_buckets(): async with app.aioboto3.client("s3") as s3: resp = await s3.list_buckets() return {"buckets": [bucket['Name'] for bucket in resp['Buckets']]}
https://aioboto3.readthedocs.io/en/latest/chalice.html
2022-01-29T03:26:21
CC-MAIN-2022-05
1642320299927.25
[]
aioboto3.readthedocs.io
v1 BrowserCompat API The v1 API was designed in March 2014, and implemented over the next year. It was allowed to be partially implemented, and labeled as "draft", so that the design could be modified as new problems were discovered. The API as designed is mostly implemented as of December 2015, but further changes will be made to the v1 API in 2016. See the issues page for details. The v1 API was based on release candidate 1 (RC1) of the JSON API specification, which was released 2014-07-05. Starting with RC2 on 2015-02-18, the JSON API team rapidly iterated on the design, making significant changes informed by the experience of implementors. JSON API v1.0 was released May 2015, and is significantly different from RC1. The JSON API team does not preserve documentation for release candidates (online or with a git tag), so it is impossible to refer to the documentation for RC1. v1 will remain on JSON API RC1. The next version of the API, v2, will support JSON API 1.0. Both the v1 and v2 APIs will be supported until the tools are updated, and then the v1 API will be retired. - Resources - Change Control Resources - History Resources - Views
https://browsercompat.readthedocs.io/en/spike5_v2_api_1159406/v1/intro.html
2022-01-29T05:18:31
CC-MAIN-2022-05
1642320299927.25
[]
browsercompat.readthedocs.io
DigitalOcean provides several command-line interfaces (CLIs) and application programming interfaces (APIs) for managing your resources. This section provides the reference materials for these offerings, as well as resources from the open source community. The following distributions have reached end of life and are now deprecated as of 18 January 2022: These images will be removed from the control panel starting on 18 January 2022 but will remain accessible for Droplet creation via the API for 30 days after the initial deprecation. If you need to use these distribution versions after the images have been fully deprecated, you can create Droplets from a snapshot of a Droplet with that version or from a custom image. Released v1.69.0 of doctl, the official DigitalOcean CLI. This release contains a number of bug fixes and adds support to the kubernetes cluster kubeconfig save sub-command for setting an alias for a cluster’s context name. Rocky Linux 8.5 x64 ( rockylinux-8-x64) base image is now available in the control panel and via the API. For more information, see the full release notes.
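As a usage sketch of the doctl change mentioned in these notes, saving a cluster's kubeconfig under a custom context name would look something like the following; the cluster name and the exact --alias flag spelling are assumptions based on the release note:

doctl kubernetes cluster kubeconfig save example-cluster --alias staging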
https://docs.digitalocean.com/reference/
2022-01-29T04:46:36
CC-MAIN-2022-05
1642320299927.25
[]
docs.digitalocean.com
Analyze Azure AD activity logs with Azure Monitor logs After you integrate Azure AD activity logs with Azure Monitor logs, you can use the power of Azure Monitor logs to gain insights into your environment. You can also install the Log analytics views for Azure AD activity logs to get access to pre-built reports around audit and sign-in events in your environment. In this article, you learn how to analyze the Azure AD activity logs in your Log Analytics workspace.. Prerequisites To follow along, you need: - A Log Analytics workspace in your Azure subscription. Learn how to create a Log Analytics workspace. - First, complete the steps to route the Azure AD activity logs to your Log Analytics workspace. - Access to the log analytics workspace - The following roles in Azure Active Directory (if you are accessing Log Analytics through Azure Active Directory portal) - Security Admin - Security Reader - Report Reader - Global Admin Navigate to the Log Analytics workspace Select Azure Active Directory, and then select Logs from the Monitoring section to open your Log Analytics workspace. The workspace will open with a default query. View the schema for Azure AD activity logs The logs are pushed to the AuditLogs and SigninLogs tables in the workspace. To view the schema for these tables: From the default query view in the previous section, select Schema and expand the workspace. Expand the Log Management section and then expand either AuditLogs or SigninLogs to view the log schema. Query the Azure AD activity logs Now that you have the logs in your workspace, you can now run queries against them. For example, to get the top applications used in the last week, replace the default query with the following and select Run SigninLogs | where CreatedDateTime >= ago(7d) | summarize signInCount = count() by AppDisplayName | sort by signInCount desc To get the top audit events over the last week, use the following query: AuditLogs | where TimeGenerated >= ago(7d) | summarize auditCount = count() by OperationName | sort by auditCount desc Alert on Azure AD activity log data You can also set up alerts on your query. For example, to configure an alert when more than 10 applications have been used in the last week: From the workspace, select Set alert to open the Create rule page. Select the default alert criteria created in the alert and update the Threshold in the default metric to 10. Enter a name and description for the alert, and choose the severity level. For our example, we could set it to Informational. Select the Action Group that will be alerted when the signal occurs. You can choose to notify your team via email or text message, or you could automate the action using webhooks, Azure functions or logic apps. Learn more about creating and managing alert groups in the Azure portal. Once you have configured the alert, select Create alert to enable it. Use pre-built workbooks for Azure AD activity logs The workbooks provide several reports related to common scenarios involving audit, sign-in, and provisioning events. You can also alert on any of the data provided in the reports, using the steps described in the previous section. - Provisioning analysis: This workbook shows reports related to auditing provisioning activity, such as the number of new users provisioned and provisioning failures, number of users updated and update failures and the number of users de-provisioned and corresponding failures. 
- Sign-ins Events: This workbook shows the most relevant reports related to monitoring sign-in activity, such as sign-ins by application, user, device, as well as a summary view tracking the number of sign-ins over time. - Conditional access insights: The Conditional Access insights and reporting workbook enables you to understand the impact of Conditional Access policies in your organization over time.
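The alert condition described earlier (more than 10 applications used in the last week) can also be expressed directly as a log query; this is a sketch using the example threshold and time window:

SigninLogs
| where CreatedDateTime >= ago(7d)
| summarize appCount = dcount(AppDisplayName)
| where appCount > 10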
https://docs.microsoft.com/sk-SK/azure/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics
2022-01-29T05:35:52
CC-MAIN-2022-05
1642320299927.25
[]
docs.microsoft.com
Crate llvm_ir_analysis This crate provides various analyses of LLVM IR, such as control-flow graphs, dominator trees, control dependence graphs, etc. For a more thorough introduction to the crate and how to get started, see the crate's README. Structs The control dependence graph for a particular function. The control flow graph for a particular function. Analyzes multiple Modules, providing a ModuleAnalysis for each; and also provides a few additional cross-module analyses (e.g., a cross-module call graph) The dominator tree for a particular function. Computes (and caches the results of) various analyses on a given Function Allows you to iterate over all the functions in the analyzed Module(s) that have a specified type. Computes (and caches the results of) various analyses on a given Module The postdominator tree for a particular function.
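As a rough usage sketch: the method names from_bc_path, fn_analysis, control_flow_graph, and dominator_tree follow the crate's README and are assumptions as far as this page is concerned.

use llvm_ir::Module;
use llvm_ir_analysis::ModuleAnalysis;

fn main() {
    // Parse LLVM bitcode and lazily compute per-function analyses.
    let module = Module::from_bc_path("program.bc").expect("failed to parse bitcode");
    let analysis = ModuleAnalysis::new(&module);
    let func = analysis.fn_analysis("main");
    let _cfg = func.control_flow_graph();
    let _domtree = func.dominator_tree();
}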
https://docs.rs/llvm-ir-analysis/latest/llvm_ir_analysis/
2022-01-29T03:44:14
CC-MAIN-2022-05
1642320299927.25
[]
docs.rs
Chrome Extensions Disabled From Chrome version 35 and above, due to security reasons, extensions can be installed only if they are hosted on the Chrome Web Store. With this change, extensions that were previously installed (ex. Test Studio Extensions) may be automatically disabled, and cannot be re-enabled or re-installed until they are hosted on the Chrome Web Store. For your convenience we are providing you the download links for the latest Chrome extensions below: Test Studio 2017 R3 (v. 2017.3.1010) And Later There is a single extension combining recording and execution: Progress Telerik Test Studio Extension. Test Studio 2017 R2 SP1 (v. 2017.2.824) And Earlier The extensions for Test Studio versions as of 2017 R2 SP1 (v. 2017.2.824) and earlier are divided into two separate extensions: Progress Test Studio Chrome Recorder and Progress Test Studio Chrome Execution. Telerik Test Studio Chrome Explore - the exploratory extension is not mandatory. Extensions In Chrome Web Store The extensions compatible with both Test Studio 2017 R2 SP1 (v. 2017.2.824) and earlier and 2017 R3 (v. 2017.3.1010) and later are available for manual installation in the Chrome Web Store. Make sure the Extensions node is selected. Type Progress Test Studio in the search bar and hit Enter. The displayed extension called Progress Test Studio Extension (highlighted) is the latest extension compatible with Test Studio 2017 R3 (v. 2017.3.1010) and later, and Chrome version 61 and later. You can proceed with installing it by clicking the Add To Chrome button. The two Progress Test Studio Extensions separated for Recording and Execution (highlighted below) are compatible with Test Studio version 2017 R2 SP1 (v. 2017.2.824) and previous, and Chrome versions prior to 60. You can proceed with installing these by clicking the Add To Chrome button. Note: If you are using Test Studio 2014 R1 (v. 2014.1.410) or earlier, the respective extensions can also be found in the Store.
https://docs.telerik.com/teststudio/troubleshooting-guide/browser-inconsistencies-tg/extensions-disabled-in-chrome
2022-01-29T04:44:48
CC-MAIN-2022-05
1642320299927.25
[array(['/teststudio/img/troubleshooting-guide/browser-inconsistencies-tg/extensions-disabled-in-chrome/fig1.png', 'Extensions'], dtype=object) array(['/teststudio/img/troubleshooting-guide/browser-inconsistencies-tg/extensions-disabled-in-chrome/fig2.png', 'Add Extensions New'], dtype=object) array(['/teststudio/img/troubleshooting-guide/browser-inconsistencies-tg/extensions-disabled-in-chrome/fig3.png', 'Add Extensions'], dtype=object) array(['/teststudio/img/troubleshooting-guide/browser-inconsistencies-tg/extensions-disabled-in-chrome/fig4.png', 'Telerik Extensions'], dtype=object) ]
docs.telerik.com
In this implementation, a host value includes everything from the end of the protocol identifier (if present) to the end of the extension (e.g. .com). URL literal examples: Output: Generates a column containing the value. Column reference example: Output: Generates the new myHost column containing the host values extracted from the myURLs column. Name of the column or URL or String literal whose values are used to extract the host value.
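As a hypothetical sketch of such a step (assuming the host-extraction function is named HOST and using Wrangle derive syntax, neither of which is stated above), the literal and column-reference examples might be written as:

derive type: single value: HOST('https://docs.example.com:8080/path/page.html') as: 'myHost'
derive type: single value: HOST(myURLs) as: 'myHost'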
https://docs.trifacta.com/plugins/viewsource/viewpagesrc.action?pageId=121374055
2022-01-29T05:48:00
CC-MAIN-2022-05
1642320299927.25
[]
docs.trifacta.com
VQL is central to the design and functionality of Velociraptor, and a solid grasp of VQL is critical to understanding and extending Velociraptor. The need for a query language arose from our experience of previous Digital Forensic and Incident Response (DFIR) frameworks. Endpoint analysis tools must be flexible enough to adapt to new indicators of compromise (IOCs) and protect against new threats. While it is always possible to develop new capability in code, it’s not always easy or quick to deploy a new version. A query language can accelerate the time it takes to discover an IOC, design a rule to detect it, and then deploy the detection at scale across a large number of hosts. Using VQL, a DFIR investigator can learn of a new type of indicator, write relevant VQL queries, package them in an artifact, and hunt for the artifact across the entire deployment in a matter of minutes. Additionally, VQL artifacts can be shared with the community and facilitate a DFIR-specific knowledge exchange of indicators and detection techniques. When learning VQL, we recommend practicing in an environment where you can easily debug, iterate, and interactively test each query. You can read more about notebooks here. For the purposes of this documentation, we will assume you created a notebook and are typing VQL into the cell. VQL’s syntax is heavily inspired by SQL. It uses the same basic SELECT .. FROM .. WHERE sentence structure, but does not include the more complex SQL syntax, such as JOIN or HAVING. In VQL, similar functionality is provided through plugins, which keeps the syntax simple and concise. VQL does not place any restrictions on the use of whitespace in the query body. We generally prefer queries that are well indented because they are more readable and look better but this is not a requirement. Unlike SQL, VQL does not require or allow a semicolon ; at the end of statements. The following two queries are equivalent -- This query is all on the same line - not very readable but valid. LET X = SELECT * FROM info() SELECT * FROM X -- We prefer well indented queries but VQL does not mind. LET X= SELECT * FROM info() SELECT * FROM X Let’s consider the basic syntax of a VQL query. The query starts with a SELECT keyword, followed by a list of Column Selectors then the FROM keyword and a VQL Plugin potentially taking arguments. Finally we have a WHERE keyword followed by a filter expression. While VQL syntax is similar to SQL, SQL was designed to work on static tables in a database. In VQL, the data sources are not actually static tables on disk - they are provided by code that runs to generate rows. VQL Plugins produce rows and are positioned after the FROM clause. Like all code, VQL plugins use parameters to customize and control their operations. VQL Syntax requires all arguments to be provided by name (these are called keyword arguments). Depending on the specific plugins, some arguments are required while some are optional. You can type ? in the Notebook interface to view a list of possible completions for a keyword. Completions are context sensitive. For example, since plugins must follow the FROM keyword, any suggestions after the FROM keyword will be for VQL plugins. Typing ? inside a plugin arguments list shows the possible arguments, their type, and if they are required or optional. In order to understand how VQL works, let’s follow a single row through the query. Velociraptor’s VQL engine will call the plugin and pass any relevant arguments to it. 
The plugin will then generate one or more rows and send a row at a time into the query for further processing. The column expression in the query receives the row. However, instead of evaluating the column expression immediately, VQL wraps the column expression in a Lazy Evaluator. Lazy evaluators allow the actual evaluation of the column expression to be delayed until a later time. Next, VQL takes the lazy evaluator and uses them to evaluate the filter condition, which will determine if the row is to be eliminated or passed on. In this example, the filter condition ( X=1) must evaluate the value of X and therefore will trigger the Lazy Evaluator. Assuming X is indeed 1, the filter will return TRUE and the row will be emitted from the query. In the previous example, the VQL engine goes through signficant effort to postpone the evaluation as much as possible. Delaying an evaluation is a recurring theme in VQL and it saves Velociraptor from performing unnecessary work, like evaluating a column value if the entire row will be filtered out. Understanding lazy evaluation is critical to writing efficient VQL queries. Let’s examine how this work using a series of experiments. For these experiments we will use the log() VQL function, which simply produces a log message when evaluated. -- Case 1: One row and one log message SELECT OS, log(message="I Ran!") AS Log FROM info() -- Case 2: No rows and no log messages SELECT OS, log(message="I Ran!") AS Log FROM info() WHERE OS = "Unknown" -- Case 3: Log message but no rows SELECT OS, log(message="I Ran!") AS Log FROM info() WHERE Log AND OS = "Unknown" -- Case 4: No rows and no log messages SELECT OS, log(message="I Ran!") AS Log FROM info() WHERE OS = "Unknown" AND Log In Case 1, a single row will be emitted by the query and the associated log function will be evaluated, producing a log message. Case 2 adds a condition which should eliminate the row. Because the row is eliminated VQL can skip evaluation of the log() function. No log message will be produced. Cases 3 and 4 illustrate VQL’s evaluation order of AND terms - from left to right with an early exit. We can use this property to control when expensive functions are evaluated e.g. hash() or upload(). Scope is a concept common in many languages, and it is also central in VQL. A scope is a bag of names that is used to resolve symbols, functions and plugins in the query. For example, consider the query SELECT OS FROM info() VQL sees “info” as a plugin and looks in the scope to get the real implementation of the plugin. Scopes can be nested, which means that in different parts of the query a new child scope is used to evaluate the query. The child scope is constructed by layering a new set of names over the top of the previous set. When VQL tries to resolve a name, it looks up the scope in reverse order going from layer to layer until the symbol is resolved. Take the following query for example, VQL evaluates the info() plugin, which emits a single row. Then VQL creates a child scope, with the row at the bottom level. When VQL tries to resolve the symbol OS from the column expression, it examines the scope stack in reverse, checking if the symbol OS exists in the lower layer. If not, VQL checks the next layer, and so on. Columns produced by a plugin are added to the child scope and therefore mask the same symbol name from parent scopes. This can sometimes unintentionally hide variables of the same name which are defined at a parent scope. 
If you find this happening in your query, you can rename symbols using the AS keyword to avoid the problem. For example:

SELECT Pid, Name,
       { SELECT Name FROM pslist(pid=Ppid) } AS ParentName
FROM pslist()

In this query, the symbol Name in the outer query is resolved from the rows emitted by pslist(), but the second Name is resolved from the row emitted by pslist(pid=Ppid) - in other words, the parent's name.

Strings denoted by " or ' can escape special characters using a backslash \. For example, "\n" means a new line. This is useful, but it also means that backslashes themselves need to be escaped, which is sometimes inconvenient - especially when dealing with Windows paths (which contain a lot of backslashes). Therefore, Velociraptor also offers a multi-line raw string denoted by ''' (three single quotes). Within this type of string no escaping is possible, and all characters are treated literally - including new lines. You can use ''' to denote multi-line strings.

In VQL an Identifier is the name of a column, a member of a dict, or a keyword name. Sometimes identifiers contain special characters, such as a space or a ., which make it difficult to specify them without VQL getting confused by the extra characters. In this case it is possible to enclose the identifier name in backticks (`). In the following example, the query passes keywords with spaces to the dict() plugin in order to create a dict with keys containing spaces. The query then extracts the value of such a key by enclosing the name of the key in backticks.

LET X = SELECT dict(`A key with spaces`="String value") AS Dict
FROM scope()

SELECT Dict, Dict.`A key with spaces` FROM X

VQL subqueries can be specified as a column expression or as arguments. Subqueries are delimited by { and }. Subqueries are also lazily evaluated, and will only be evaluated when necessary. The following example demonstrates subqueries inside plugin arguments. The if() plugin will evaluate the then or the else query depending on the condition value (in this example, when X has the value 1).

SELECT * FROM if(condition=X=1,
    then={ SELECT * FROM ... },
    else={ SELECT * FROM ... })

An array may be defined either by ( and ) or [ and ]. Since it can be confusing to tell regular parentheses from an array with a single element, VQL also allows a trailing comma to indicate a single element array. For example, (1, ) means an array with one member, whereas (1) means the single value 1.

VQL is strict about the syntax of a VQL statement. Each statement must have a plugin specified, but sometimes we don't really want to select from any plugin at all. The default noop plugin is called scope() and simply returns the current scope as a single row. If you ever need to write a query but do not want to actually run a plugin, use scope() as a noop plugin. For example:

-- Returns one row with Value=4
SELECT 2 + 2 AS Value FROM scope()

VQL is modeled on basic SQL, since SQL is a familiar language for new users to pick up. However, SQL quickly becomes more complex, with very subtle syntax that only experienced SQL users use regularly. One of the more complex aspects of SQL is the JOIN operator, which typically comes in multiple flavors with subtle differences (INNER JOIN, OUTER JOIN, CROSS JOIN etc.). While these make sense for SQL, since they affect the way indexes are used in the query, VQL does not have table indexes, nor does it have any tables. Therefore the JOIN operator is meaningless for Velociraptor.
To keep VQL simple and accessible, we specifically did not implement a JOIN operator. Instead, VQL has the foreach() plugin, which is probably the most commonly used plugin in VQL queries. The foreach() plugin takes two arguments:

- The row parameter is a subquery that provides rows.
- The query parameter is a subquery that will be evaluated in a subscope containing each row emitted by the row argument.

Consider the following query:

SELECT * FROM foreach(
    row={ SELECT Exe FROM pslist(pid=getpid()) },
    query={ SELECT ModTime, Size, FullPath FROM stat(filename=Exe) })

Note how Exe is resolved from the produced row, since the query is evaluated within the nested scope. foreach() is useful when we want to run a query on the output of another query.

Normally foreach() iterates over rows one at a time. The foreach() plugin also takes a workers parameter. If this is larger than 1, foreach() will use multiple threads and evaluate the query argument in each worker thread. This allows the query to evaluate values in parallel. For example, the following query retrieves all the files in the System32 directory and calculates their hashes.

SELECT FullPath, hash(path=FullPath)
FROM glob(globs="C:/Windows/system32/*")
WHERE NOT IsDir

As each row is emitted from the glob() plugin with the name of a file, the hash() function is evaluated and the hash is calculated. However, this is linear: each hash is calculated before the next hash is started, so only one hash is calculated at a time. This example is very suitable for parallelization, because globbing for all files is quite fast but hashing the files can be slow. If we delegate the hashing to multiple worker threads, we can make more effective use of the CPU.

SELECT * FROM foreach(
    row={
        SELECT FullPath FROM glob(globs="C:/Windows/system32/*")
        WHERE NOT IsDir
    },
    query={
        SELECT FullPath, hash(path=FullPath) FROM scope()
    },
    workers=10)

Deconstructing a dict means taking that dict and creating a column for each of its fields. Consider the following query:

LET Lines = '''Foo Bar
Hello World
Hi There
'''

LET all_lines = SELECT grok(grok="%{NOTSPACE:First} %{NOTSPACE:Second}", data=Line) AS Parsed
FROM parse_lines(accessor="data", filename=Lines)

SELECT * FROM foreach(row=all_lines, column="Parsed")

This query reads some lines (for example log lines) and applies a grok expression to parse each line. The grok() function produces a dict after parsing the line, with fields determined by the grok expression. The all_lines query will have one column called "Parsed" containing a dict with two fields (First and Second). Using the column parameter of the foreach() plugin, foreach will use the value in that column as the row, deconstructing the dict into a table containing First and Second columns.
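The same idea can be seen in a more minimal form. In this sketch the dict contents are made up purely for illustration:

LET Rows = SELECT dict(First="Hello", Second="World") AS Parsed
FROM scope()

-- Produces a single row with two columns: First and Second.
SELECT * FROM foreach(row=Rows, column="Parsed")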
We know that subqueries can be used in various parts of the query, such as in a column specifier or as an argument to a plugin. While subqueries are convenient, they can become unwieldy when nested too deeply. VQL offers an alternative to subqueries called Stored Queries. A stored query is a lazy evaluator of a query that we can store in the scope; wherever the stored query is used, it will be evaluated on demand. Consider the example below, where for each process we evaluate the stat() plugin on the executable to check the modification time of the executable file.

LET myprocess = SELECT Exe FROM pslist()

LET mystat = SELECT ModTime, Size, FullPath
FROM stat(filename=Exe)

SELECT * FROM foreach(row=myprocess, query=mystat)

A Stored Query is simply a query that is stored into a variable. It is not actually evaluated at the point of definition; it is evaluated at the point where it is referenced, and the scope in which it is evaluated is derived from that point of reference. For example, in the query above, mystat simply stores the query itself. Velociraptor then re-evaluates the mystat query for each row given by myprocess as part of the foreach() plugin operation.

We have previously seen that VQL goes out of its way to do as little work as possible. Consider the following query:

LET myhashes = SELECT FullPath, hash(path=FullPath)
FROM glob(globs="C:/Windows/system32/*")

SELECT * FROM myhashes LIMIT 5

The myhashes stored query hashes all files in System32 (many thousands of files). However, this query is used in a second query with a LIMIT clause. When the query emits 5 rows in total, the entire query is cancelled (since we do not need any more data), which in turn aborts the myhashes query. Therefore, VQL is able to exit early from any query without having to wait for the query to complete. This is possible because VQL queries are asynchronous - we do not calculate the entire result set of myhashes before using myhashes in another query; we simply pass the query itself and forward each row as needed.

A stored query does not in itself evaluate the query; instead, the query is evaluated whenever it is referenced. Sometimes this is not what we want. Consider, for example, a query which takes a few seconds to run but whose output is not expected to change quickly. In that case we actually want to cache the results of the query in memory and simply access them as an array. Expanding a query into an in-memory array is termed Materializing the query.

For example, consider the following query that lists all sockets on the machine and attempts to resolve each process ID to a process name using the pslist() plugin.

LET process_lookup = SELECT Pid AS ProcessPid, Name FROM pslist()

SELECT Laddr, Status, Pid, {
    SELECT Name FROM process_lookup WHERE Pid = ProcessPid
} AS ProcessName
FROM netstat()

This query will be very slow, because the process_lookup stored query is re-evaluated for each row returned from netstat() (that is, for each socket). The process listing is not likely to change during the few seconds it takes the query to run, so it would be more efficient to have the process listing cached in memory for the entire length of the query. We recommend that you Materialize the query:

LET process_lookup <= SELECT Pid AS ProcessPid, Name FROM pslist()

SELECT Laddr, Status, Pid, {
    SELECT Name FROM process_lookup WHERE Pid = ProcessPid
} AS ProcessName
FROM netstat()

The difference between this query and the previous one is that the LET clause uses <= instead of =. The <= is the materialize operator. It tells VQL to expand the query in place into an array, which is then assigned to the variable process_lookup. Subsequent accesses to process_lookup simply access an in-memory array of pid and name for all processes, and do not need to run pslist() again.
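Because the materialized variable is just an in-memory array of rows, it can also be inspected directly. A small sketch, assuming the len() function is available (as it is in recent Velociraptor releases):

-- process_lookup was materialized with <=, so this does not call pslist() again.
SELECT len(list=process_lookup) AS NumberOfProcesses FROM scope()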
LET expressions may store queries into a variable and have the queries evaluated in a subscope at the point of use. A LET expression can also declare explicit passing of variables. Consider the following example, which is equivalent to the example above:

LET myprocess = SELECT Exe FROM pslist()

LET mystat(Exe) = SELECT ModTime, Size, FullPath
FROM stat(filename=Exe)

SELECT * FROM foreach(row=myprocess, query={ SELECT * FROM mystat(Exe=Exe) })

This time mystat is declared as a VQL Local Plugin that takes arguments. Therefore we now pass it a parameter explicitly and it behaves as a plugin. Similarly, we can define a VQL Local Function.

LET MyFunc(X) = X + 5

-- Returns 11
SELECT MyFunc(X=6) FROM scope()

Remember that the difference between a VQL plugin and a VQL function is that a plugin returns multiple rows and therefore needs to appear between the FROM and WHERE clauses. A function simply takes several values and transforms them into a single value.

In VQL an operator represents an operation to be performed on operands. Unlike SQL, VQL keeps the number of operators down, preferring VQL functions over introducing new operators. The available operators include the usual comparison and arithmetic operators, the boolean AND, OR and NOT, and the =~ regular expression match operator. Most operators apply to two operands, one on the left and one on the right: in the expression 1 + 2 we say that 1 is the Left Hand Side (LHS), 2 is the Right Hand Side (RHS) and + is the operator.

When VQL encounters an operator, it needs to decide how to actually evaluate it. This depends on the types of the LHS and RHS operands. The way in which operators interact with the types of their operands is called a protocol. Generally VQL does the expected thing, but it is valuable to understand which protocol will be chosen in specific cases. For example, consider the following query:

LET MyArray = ("X", "XY", "Y")
LET MyValue = "X"
LET MyInteger = 5

SELECT MyArray =~ "X", MyValue =~ "X", MyInteger =~ "5" FROM scope()

In the first case the regex operator is applied to an array, so the expression is true if any member of the array matches the regular expression. The second case applies the regex to a string, so it is true if the string matches. Finally, in the last case the regex is applied to an integer. It makes no sense to apply a regular expression to an integer, so VQL returns FALSE.

Let's summarize some of the more frequent VQL control structures. We already met the foreach() plugin above. The row parameter can also receive any iterable type (like an array). VQL does not have a JOIN operator - we use the foreach() plugin to iterate over the results of one query and apply a second query to them.

SELECT * FROM foreach(
    row={ <sub query goes here> },
    query={ <sub query goes here> })

Sometimes arrays are present in column data. We can iterate over these using the foreach() plugin as well.

SELECT * FROM foreach(
    row=<An iterable type>,
    query={ <sub query goes here> })

If row is an array, each value will be assigned to the special placeholder _value.

The if() plugin and function allow branching in VQL.

SELECT * FROM if(
    condition=<sub query or value>,
    then={ <sub query goes here> },
    else={ <sub query goes here> })

If the condition is a query, it is true if it returns any rows. Then either the then subquery or the else subquery is evaluated. Note that, as usual, VQL is lazy and will not evaluate the unused query or expression.

The switch() plugin and function allow multiple branching in VQL.

SELECT * FROM switch(
    a={ <sub query> },
    b={ <sub query> },
    c={ <sub query> })

The subqueries are evaluated in order, and as soon as one of them returns any rows we stop evaluating the rest of the queries. As usual VQL is lazy - this means that branches that are not taken are essentially free!
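As a concrete illustration, here is a sketch of switch() picking the first location that actually exists. The paths and the fallback branch are hypothetical and only for illustration:

SELECT * FROM switch(
    -- Branch a wins if the file exists in Program Files.
    a={ SELECT FullPath FROM glob(globs="C:/Program Files/App/app.log") },
    -- Branch b is only evaluated if branch a returned no rows.
    b={ SELECT FullPath FROM glob(globs="C:/ProgramData/App/app.log") },
    -- Final fallback so the query always returns something.
    c={ SELECT NULL AS FullPath FROM scope() })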
The chain() plugin allows multiple queries to be combined.

SELECT * FROM chain(
    a={ <sub query> },
    b={ <sub query> },
    c={ <sub query> })

All subqueries are evaluated in order and all of their rows are appended together.

A common need in VQL is to use the GROUP BY clause to stack all rows which have the same value. But what exactly does the GROUP BY clause do? As the name suggests, GROUP BY splits all the rows into groups, called bins, where each bin has the same value of the target expression.

For example, suppose a query produces three rows: two rows with X=1 (each with a different Y value) and one row with X=2. A GROUP BY X clause specifies that rows are grouped so that each bin has the same value of the X column. The first bin, having X=1, contains two rows, while the second bin, having X=2, contains only a single row. The GROUP BY query will therefore return two rows (one for each bin). Each row will contain a single value for X and one of the Y values.

As this example illustrates, it generally only makes sense to select the same column as is being grouped. This is because other columns may contain any number of values, but only a single one of those values will be returned. In the example above, selecting the Y column is not deterministic, because the first bin contains several values for Y. Be careful not to rely on the order of rows in each bin.

Aggregate VQL functions are designed to work with the GROUP BY clause to operate on all the rows in each bin separately. Aggregate functions keep state between evaluations. For example, consider the count() function. Each time count() is evaluated, it increments a number in its own state. Aggregate function state is kept in an Aggregate Context - a separate context for each GROUP BY bin. Therefore, the following query will produce a count of all the rows in each bin (because each bin has a separate state).

SELECT X, count() AS Count FROM … GROUP BY X

Aggregate functions are used to calculate values that consider multiple rows. Some aggregate functions:

- count() counts the total number of rows in each bin.
- sum() adds up a value for an expression in each bin.
- enumerate() collects all the values in each bin into an in-memory array.
- rate() calculates a rate (first order derivative) between each invocation and its previous one.

Some of these can be seen in the example query below.
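The following self-contained sketch mirrors the X/Y grouping example above using a hand-built array of dicts. The data is made up purely for illustration, and the argument names item and items match recent Velociraptor releases but may differ in older ones:

LET Rows = (dict(X=1, Y=2), dict(X=1, Y=3), dict(X=2, Y=4))

LET Data = SELECT * FROM foreach(
    row=Rows,
    query={ SELECT _value.X AS X, _value.Y AS Y FROM scope() })

-- Expected result: X=1 with Count=2, TotalY=5, AllY=[2, 3]
--                  X=2 with Count=1, TotalY=4, AllY=[4]
SELECT X,
       count() AS Count,
       sum(item=Y) AS TotalY,
       enumerate(items=Y) AS AllY
FROM Data
GROUP BY X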
https://docs.velociraptor.app/docs/vql/
Need for Quality Skills Data

We will explore one function of HR - Recruitment - and examine the root cause of its poor efficiency. HR covers different functions including Recruitment, Learning & Development, Workforce Development and others. In the functions where skills are involved, the quality of skills data matters, and when the quality of that data is poor, the effectiveness of the function is poor. We will analyze the recruitment function and see its dependency on skills data.

Finding jobs or people is frustratingly costly, time consuming, and effortful. Job seekers are frustrated that they are unable to discover the right openings, that they are swamped with job alerts and calls that are irrelevant to them, and that the hiring process is tediously long. Frustration is even higher at the hiring manager's end. Their concerns are similar: inability to find the right people at the right time, receiving resumes that are irrelevant, and having to spend a long, precious time in the recruitment process.

Ultimately, there are two fundamental functions in recruitment: (1) discoverability and (2) matching. The e-commerce industry performs a similar function of enabling discoverability and matching - of the right products at one end and the right customers at the other. E-commerce does a fabulous job despite there being millions of products. Why is it that we can't do in recruitment what is done so well in e-commerce?

In the last five or six years there has been a lot of investment into start-ups trying to solve this problem using AI. However, the problem persists and no scalable and viable solution has been found. The reason these attempts have failed is that it is not an AI problem. After all, even AI can perform well only when there is quality data. And herein lies the key to solving the problem: quality data on skills.

One reason e-commerce works so well is that the key information regarding products is well captured. The data on products is well parameterized. Different products require different kinds of parameters: the dataset required for, say, TVs is not what is needed for pillow covers. Both segments require data, but different parameters of data. That difference apart, discoverability and matching work well because the data is captured well for all products. So comparing TVs is easy, data driven, and insightful.

Imagine data about people and jobs captured in the same manner: parameterized, quantitative, structured, neat, and less textual. Wouldn't the recruitment space see a boost in discoverability and matching, and gain efficiency? It would. However, here lies a real problem.

There are two components of data when it comes to people and jobs. One is general (or demographic) information such as email, location, job titles, degrees, past companies and so on. Then there is skills information, such as functional/technical skills, soft skills, knowledge, domain experience, and others. The former, from a data perspective, is easy to capture and analyze: it is clean, has patterns (emails, for example), consists of discrete elements, and is quantitative (years of experience, for instance). The latter - the skills information - lacks these qualities. It is fuzzy, subjective, unclear, lacks a pattern and consists of non-discrete elements. Because of this, it is hard to express and difficult to analyze.

If we are to solve the problem of discoverability and matching in the recruitment space, we need to solve the skills data problem.
https://docs.itsyourskills.com/skills-thoughts/need-for-quality-skills-data
Title: Global Signatures and Dynamical Origins of the Little Ice Age and Medieval Climate Anomaly

Document Type: Article

Publication Date: 2009

Recommended Citation: Mann, M.E., Z. Zhang, S. Rutherford, et al. 2009. "Global Signatures and Dynamical Origins of the 'Little Ice Age' and 'Medieval Climate Anomaly'." Science 326(5957): 1256-1260 (Nov 27).
https://docs.rwu.edu/fcas_fp/314/
Funnelback 15.0 patches

- 15.0.0.11 (Bug fixes): Upgrades log4j2 to version 2.17 to fix the security vulnerability where log4j2 JNDI features do not protect against attacker-controlled LDAP and other JNDI related endpoints.
- 15.0.0.10 (Bug fixes): Prevents creation of objects within Freemarker template files to ensure that template editors cannot cause external code to be executed.
- 15.0.0.9 / 15.0.0.8 (Bug fixes): Fixed an issue where the user editing interface for a user with no permitted collections would be presented with all collections selected, rather than none.
- 15.0.0.7 (Bug fixes): Fixes a cross site scripting vulnerability when unescaped HTML was provided to the CheckBlending macro's linkText attribute.
- 15.0.0.6 (Bug fixes): Corrects an XSS vulnerability in Anchors.html.
- 15.0.0.5 (Bug fixes): Fixes a bug where configs would not be reloaded in some multi-server environments.
- 15.0.0.4 (Bug fixes): Fixes a bug where data loss could occur in Push collections if commits failed.
- 15.0.0.4 (Bug fixes): Fixes a bug on Windows where commits could fail if index files in a snapshot are held open.
- 15.0.0.4 (Bug fixes): Fixes various DLS security flaws.
- 15.0.0.3 (Bug fixes): Fixes a bug where data loss could occur in Push on Windows. The problem is more likely to occur when Push is used in a meta collection.
- 15.0.0.2 (Bug fixes): Fixes a race condition when saving a meta collection configuration on Windows if a component collection is updating in the background.
- 15.0.0.1 (Bug fixes): Fixes a bug with Curator based Best Bets, where an OutOfMemoryError would be thrown.
https://docs.squiz.net/funnelback/docs/latest/release-notes/patches/15.0/index.html