Dataset columns: content (string, 0 to 557k chars), url (string, 16 to 1.78k chars), timestamp (timestamp[ms]), dump (string, 9 to 15 chars), segment (string, 13 to 17 chars), image_urls (string, 2 to 55.5k chars), netloc (string, 7 to 77 chars).
Crate trillium_conn_id. Trillium crate to add identifiers to conns. This crate provides the following utilities: ConnId, a handler which must be called for the rest of this crate to function; log_formatter::conn_id, a formatter to use with trillium_logger (note that this does not depend on the trillium_logger crate and is very lightweight if you do not use that crate); and ConnIdExt, an extension trait for retrieving the id from a conn. Modules: log_formatter (formatter for the trillium_log crate).
https://docs.trillium.rs/trillium_conn_id/index.html
2022-09-25T04:37:36
CC-MAIN-2022-40
1664030334514.38
[]
docs.trillium.rs
Using with Docker Toolbox and Machine In development, Docker recommends using Docker Toolbox to set up Docker. It includes a tool called Machine, which will create a VM running Docker Engine and point your shell at it using environment variables. To configure docker-py with these environment variables, first use Machine to set them: $ eval "$(docker-machine env)" You can then use docker-py like this: import docker client = docker.from_env(assert_hostname=False) print(client.version()) Note: this snippet disables TLS hostname checking with assert_hostname=False. Machine provides us with the exact certificate the server is using, so this is safe. If you are not using Machine and are verifying the host against a certificate authority, you'll want to enable hostname verification.
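For the case mentioned in the note, where you are not using Machine and want verification left on, here is a minimal sketch. The environment variable names are the standard Docker ones; the explicit TLSConfig usage is an assumption based on docker-py 1.x and may need adjusting for your version.

import os

import docker
import docker.tls

# Assumes DOCKER_HOST and DOCKER_CERT_PATH are set, e.g. by `eval "$(docker-machine env)"`
# or by your own TLS-enabled daemon configuration.
cert_path = os.environ["DOCKER_CERT_PATH"]

tls_config = docker.tls.TLSConfig(
    client_cert=(os.path.join(cert_path, "cert.pem"), os.path.join(cert_path, "key.pem")),
    ca_cert=os.path.join(cert_path, "ca.pem"),
    verify=True,            # verify the server certificate against the CA
    assert_hostname=True,   # keep hostname verification on when not using Machine
)

client = docker.Client(base_url=os.environ["DOCKER_HOST"], tls=tls_config, version="auto")
print(client.version())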
https://docker-py.readthedocs.io/en/1.10.3/machine/
2022-09-25T05:07:23
CC-MAIN-2022-40
1664030334514.38
[]
docker-py.readthedocs.io
Server Scaling Introduction AppsAnywhere is intended to be the primary method of deploying applications and resources to end users, so every installation is built with redundancy and failover. During the implementation phase, an Implementation Consultant will advise if more servers are required. All installations require a minimum of: an AppsAnywhere database, an AppsAnywhere Analytics database, a Cloudpaging database (if applicable), and Load Balancer Configuration. Further information can be found in the linked articles above and in Database Requirements. AppsAnywhere always recommends that all the servers provisioned for the server license count are switched on at all times. If it is decided that not all the servers are to be switched on (e.g. when hosting in the Cloud), customers should monitor the number of users logging into AppsAnywhere via Analytics and determine the thresholds for switching on or adding more servers. Server numbers Production Installations When deploying a production system, an N+1 model is used, meaning a minimum of three end-user-facing services. As Analytics is a reporting system, is not end-user facing, and stores all data in the SQL database, redundancy is not required for it. AppsAnywhere; Cloudpaging: 3 x Cloudpaging Admin/License servers, 3 x Paging servers; Parallels RAS: 2 x Parallels RAS Gateway/Publishing Servers, 6 x Parallels RDSH Servers. This is sufficient as a starting point for most Production environments for up to 300 concurrent sessions, with a maximum of 50 connections per RDSH server. The hardware requirements are determined by the expected number of concurrent users and the resources required by the applications in each Remote Desktop session. As with any VDI-based environment, the number of users and the performance will vary greatly depending on the server resources and the types of applications being delivered. Production Pilot Installations AppsAnywhere; Cloudpaging: 2 x Cloudpaging Admin/License Windows servers, 2 x Paging Windows servers; Parallels RAS: 2 x Parallels RAS Gateway/Publishing Servers, 2 x Parallels RDSH Servers.
https://docs.appsanywhere.com/appsanywhere/2.12/server-scaling
2022-09-25T04:11:48
CC-MAIN-2022-40
1664030334514.38
[]
docs.appsanywhere.com
SSL Certificates Overview An SSL certificate issued by a trusted public certificate authority is required for AppsAnywhere, both to secure access and so that users do not see in-browser security warnings. The SSL certificates required for Cloudpaging and Parallels RAS are used to provide secure communication between AppsAnywhere and the other services; these certificates can be issued by a trusted internal certificate authority if preferred. It is the customer's responsibility to obtain and maintain up-to-date certificates. Requirements The certificate issued must have a 'common name' (CN) value matching the FQDN/DNS for each service, e.g. AppsAnywhere: appsanywhere.uni.edu; Analytics: analytics.uni.edu; Cloudpaging: cloudpaging.uni.edu; Parallels RAS: parallels.uni.edu. Server FQDN/DNS entries can be included as Subject Alternative Names (SANs), if required. Format We recommend certificates are supplied to AppsAnywhere in .PFX (Personal Information Exchange) format, as this format is password protected by default. Any passwords associated with the .PFX file must be supplied. If required, see Generating a certificate request (CSR). SSL offloading By default, we will apply certificates to your servers. SSL offloading can be used if the SSL certificates for the service will be managed elsewhere. Load balancing should be configured and operational for a Production environment. For assistance, see Load Balancer Configuration. Next Steps Once the certificates are ready, refer to Applying and Renewing SSL certificates.
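The requirements above (a CN matching the service FQDN, plus optional SANs) apply at the point the certificate request is generated. The linked "Generating a certificate request (CSR)" article remains the authoritative procedure; purely as a generic illustration using the Python cryptography package, with the example hostnames from this page, a CSR could be produced like this:

from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# Generate a private key for the AppsAnywhere service (example FQDN from this page).
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "appsanywhere.uni.edu")]))
    # Optional Subject Alternative Names for additional server FQDN/DNS entries.
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("appsanywhere.uni.edu")]),
        critical=False,
    )
    .sign(key, hashes.SHA256())
)

with open("appsanywhere.csr", "wb") as f:
    f.write(csr.public_bytes(serialization.Encoding.PEM))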
https://docs.appsanywhere.com/appsanywhere/2.12/ssl-certificates
2022-09-25T05:57:52
CC-MAIN-2022-40
1664030334514.38
[]
docs.appsanywhere.com
Attach a policy for cross-account access When the CA administrator and the certificate issuer reside in different AWS accounts, the CA administrator must share CA access. This is accomplished by attaching a resource-based policy to the CA. The policy grants issuance permissions to a specific principal, which can be an AWS account owner, an IAM user, an AWS Organizations ID, or an organizational unit ID. A CA administrator can attach and manage policies in the following ways: In the management console, using AWS Resource Access Manager (RAM), which is a standard method for sharing AWS resources across accounts. When you share a CA resource in AWS RAM with a principal in another account, the required resource-based policy is attached to the CA automatically. For more information about RAM, see the AWS RAM User Guide. Note You can easily open the RAM console by choosing a CA and then choosing Actions, Manage resource shares. Programmatically, using the PCA APIs PutPolicy, GetPolicy, and DeletePolicy. Manually, using the PCA commands put-policy, get-policy, and delete-policy in the AWS CLI. Only the console method requires RAM access. Cross-account case 1: Issuing a managed certificate from the console In this case, the CA administrator uses AWS Resource Access Manager (AWS RAM) to share CA access with another AWS account, which allows that account to issue managed ACM certificates. The diagram shows that AWS RAM can share the CA directly with the account, or indirectly through an AWS Organizations ID in which the account is a member. After RAM shares a resource through AWS Organizations, the recipient principal must accept the share for it to take effect. The recipient can configure AWS Organizations to accept offered shares automatically. The recipient account is responsible for configuring autorenewal in ACM. Typically, on the first occasion a shared CA is used, ACM installs a service-linked role that allows it to make unattended certificate calls on ACM Private CA. If this fails (usually due to a missing permission), certificates from the CA are not renewed automatically. Only the ACM user can resolve the problem, not the CA administrator. For more information, see Using a Service Linked Role (SLR) with ACM. Cross-account case 2: Issuing managed and unmanaged certificates using the API or CLI This second case demonstrates the sharing and issuance options that are possible using the AWS Certificate Manager and ACM Private CA API. All of these operations can also be carried out using the corresponding AWS CLI commands. Because the API operations are being used directly in this example, the certificate issuer has a choice of two API operations to issue a certificate. The PCA API action IssueCertificate results in an unmanaged certificate that will not be automatically renewed and must be exported and manually installed. The ACM API action RequestCertificate results in a managed certificate that can be easily installed on ACM integrated services and renews automatically. The recipient account is responsible for configuring auto-renewal in ACM. Typically, on the first occasion a shared CA is used, ACM installs a service-linked role that allows it to make unattended certificate calls on ACM Private CA. If this fails (usually due to a missing permission), certificates from the CA will not renew automatically, and only the ACM user can resolve the problem, not the CA administrator. For more information, see Using a Service Linked Role (SLR) with ACM.
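For the programmatic path described above (the PutPolicy/GetPolicy operations and the ACM RequestCertificate action), a minimal boto3 sketch follows. The ARNs, account ID, and domain name are placeholders, and the policy statement is only an illustrative assumption; consult the AWS documentation for the exact actions your policy must grant.

import json

import boto3

# CA administrator account: attach a resource-based policy granting issuance
# permissions to another AWS account (placeholder values throughout).
ca_arn = "arn:aws:acm-pca:us-east-1:111122223333:certificate-authority/EXAMPLE"
issuer_account = "444455556666"

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowCrossAccountIssuance",
            "Effect": "Allow",
            "Principal": {"AWS": f"arn:aws:iam::{issuer_account}:root"},
            "Action": [
                "acm-pca:IssueCertificate",
                "acm-pca:GetCertificate",
                "acm-pca:ListPermissions",
            ],
            "Resource": ca_arn,
        }
    ],
}

pca = boto3.client("acm-pca")
pca.put_policy(ResourceArn=ca_arn, Policy=json.dumps(policy))
print(pca.get_policy(ResourceArn=ca_arn)["Policy"])

# Issuer account: request a managed certificate from the shared CA via ACM
# (the RequestCertificate path, which supports automatic renewal).
acm = boto3.client("acm")
response = acm.request_certificate(
    DomainName="example.internal",
    CertificateAuthorityArn=ca_arn,
)
print(response["CertificateArn"])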
https://docs.aws.amazon.com/acm-pca/latest/userguide/pca-ram.html
2022-09-25T05:36:25
CC-MAIN-2022-40
1664030334514.38
[['images/ca_access_2_accounts_console.png', 'Cross-account issuance with the console'], ['images/ca_access_2_accounts_api_options.png', 'Cross-account issuance using the APIs']]
docs.aws.amazon.com
Automonitoring enables quick onboarding for multiple cloud, cloud native, and other resources. By providing environment-specific configuration parameters (such as the port and hostname) you can discover and monitor resources in real time. Pre-configured dashboards provide out-of-the-box visualizations, which can be customized to your requirements. Alert definitions for discovered resources are configurable; alert definitions are filters based on tags and alert thresholds. Constraints - Automonitoring and template-based models are incompatible and cannot be used together on the same client. - Custom monitor development is not supported. - Resources do not have a default availability metric assigned; this model does not support an availability metric. - Metric configuration is limited to resources being monitored by the agent: - Adjusting polling frequency. Controlled by the agent configuration .yaml files. - Disabling and enabling metrics. Controlled by the agent configuration .yaml files. - Gateway or gateway-based proxy monitoring resources are not compatible with the automonitoring model. Resources Usage The automonitoring model onboards, monitors, and sets alerts for metrics independently of the template-based model. In addition, automonitoring-based and template-based monitoring can be run side by side by using two separate clients. The following diagram shows how both monitoring methods can be used: These models work in parallel and converge on central platform areas, making the alerting experience the same across automonitoring and template-based clients. Alerts Alert thresholds are configured with alert definitions, which you use to define alerting thresholds on automonitored resources. Alert definitions allow you to take a metric, filter by tags, and set thresholds with a repeat count. See Automonitoring Alert Definition for more information. FAQs Availability How does availability work? Availability is a function of the alert definitions model. Alerts are set by the metrics. Co-existing with the template-based model Will the new automonitoring model and the template model exist together, and if so, for how long? Automonitoring and template-based models exist together. However, each model requires different clients with different metric data and alert settings. Alert Definitions can be used to augment template-based monitoring and provide enhanced granularity and flexibility for defining alert definitions for existing clients. New clients Does the new automonitoring model require a new client for onboarding, and if so, how will this impact my existing monitoring? New clients are required to ensure that existing users are not disrupted by the new automonitoring model changes. Existing clients cannot enable the automonitoring option. New client dependency Are there any other DnM features that depend on a new client being set up? The following features depend on a new client set up with the automonitoring model enabled: - Automonitoring - Alert Definitions model The new dashboard initiative is not dependent on a newly created client. Custom monitors or metrics Will custom monitors or metrics be allowed to be added as additional metrics if they are not automonitored? Because automonitoring is focused on cloud and cloud native workloads, custom monitor extensibility is not available. Cost of cloud API Because you cannot adjust the metric polling frequency, how will this affect the cost of cloud API monitoring spend (AWS CloudWatch, Azure Monitor, GCP Stackdriver)?
With the automonitoring model, there are two (2) API requests made every minute per cloud integration (AWS, Azure, and GCP). These requests are to: - GET the available metrics collection - GET metric data samples for all metrics The cost delta between the current model and the automonitoring model (assuming a one-minute polling frequency) is minimal; both models have roughly the same operating cost. Teams that want to reduce cost can onboard only their critical resources and increase resource filter flexibility by using the Filter by Tag option in the Onboarding Wizard.
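To put the "two API requests per minute per cloud integration" figure in perspective, a small back-of-the-envelope calculation; the 30-day month and the per-request price are assumptions for illustration only, not OpsRamp or cloud-provider pricing.

# Rough request-volume estimate for one cloud integration under automonitoring.
requests_per_minute = 2            # GET available metrics + GET metric data samples
minutes_per_day = 24 * 60
days_per_month = 30                # assumption for illustration

requests_per_month = requests_per_minute * minutes_per_day * days_per_month
print(requests_per_month)          # 86400 requests per integration per month

# Example cost at an assumed price per 1,000 API requests (placeholder value).
assumed_price_per_1000 = 0.01
print(requests_per_month / 1000 * assumed_price_per_1000)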
https://docs.opsramp.com/solutions/auto-monitoring/
2022-09-25T04:17:33
CC-MAIN-2022-40
1664030334514.38
[['https://docsmedia.opsramp.com/diagrams/automonitoring-vs-templates.png', 'Automonitoring and Template-based Monitoring']]
docs.opsramp.com
Crate proc_macro_error proc-macro-error This crate uses the best way of emitting diagnostics available based on the compiler's version. When the underlying diagnostic type is finally stabilized, this crate will simply be delegating to it, requiring no changes in your code! So you can just use this crate and have both some of proc_macro::Diagnostic functionality available on stable ahead of time and your error-reporting code future-proof. Cargo features This crate provides a syn-error feature, enabled by default, that gates the impl From<syn::Error> for Diagnostic conversion. If you don't use syn and want to cut off some compilation time, you can disable it via [dependencies] proc-macro-error = { version = "1", default-features = false } Please note that disabling this feature makes sense only if you don't depend on syn directly or indirectly, and you very likely do. Real world examples: structopt-derive (abort-like usage), auto-impl (emit-like usage). Limitations - Warnings are emitted only on nightly; they are ignored on stable. - "help" suggestions can't have their own span info on stable (essentially inheriting the parent span). - If a panic occurs somewhere in your macro, no errors will be displayed. This is not a technical limitation but rather intentional design: panic is not for error reporting. #[proc_macro_error] attribute This attribute MUST be present on the top level of your macro (the function annotated with any of #[proc_macro], #[proc_macro_derive], #[proc_macro_attribute]). This attribute performs the setup and cleanup necessary to make things work. In most cases you'll need the simple #[proc_macro_error] form without any additional settings. Feel free to skip the "Syntax" section. Syntax #[proc_macro_error] or #[proc_macro_error(settings...)], where settings... is a comma-separated list of: proc_macro_hack: In order to correctly cooperate with #[proc_macro_hack], the #[proc_macro_error] attribute must be placed before (above) it, like this: #[proc_macro_error] #[proc_macro_hack] #[proc_macro] fn my_macro(input: TokenStream) -> TokenStream { unimplemented!() } If, for some reason, you can't place it like that you can use #[proc_macro_error(proc_macro_hack)] instead. Note: if proc-macro-hack was detected (by any means), allow_not_macro and assert_unwind_safe will be applied automatically. allow_not_macro: By default, the attribute checks that it's applied to a proc-macro. If none of #[proc_macro], #[proc_macro_derive] nor #[proc_macro_attribute] are present, it will panic. That is the intention - this crate is supposed to be used only with proc-macros. This setting is made to bypass the check, which is useful in certain circumstances. Pay attention: the function this attribute is applied to must return proc_macro::TokenStream. This setting is implied if proc-macro-hack was detected. assert_unwind_safe: By default, your code must be unwind safe. If your code is not unwind safe, but you believe it's correct, you can use this setting to bypass the check. You would need this for code that uses lazy_static or thread_local with Cell/RefCell inside (and the like). This setting is implied if #[proc_macro_error] is applied to a function marked as #[proc_macro], #[proc_macro_derive] or #[proc_macro_attribute]. This setting is also implied if proc-macro-hack was detected. Macros Most of the time you want to use the macros. Syntax is described in the next section below. You'll need to decide how you want to emit errors: - Emit the error and abort. Very much panic-like usage. Served by abort! and abort_call_site!.
- Emit the error but do not abort right away, looking for other errors to report. Served by emit_error! and emit_call_site_error!. You can mix these usages. abort and emit_error take a "source span" as the first argument. This source will be used to highlight the place the error originates from. It must be one of: - Something that implements ToTokens (most types in syn and proc-macro2 do). This source is the preferable one since it doesn't lose span information on multi-token spans; see this issue for details. - proc_macro::Span - proc-macro2::Span The rest is your message in format-like style. See the next section for detailed syntax. abort!: Very much panic-like usage - abort right away and show the error. Expands to ! (never type). abort_call_site!: Shortcut for abort!(Span::call_site(), ...). Expands to ! (never type). emit_error!: proc_macro::Diagnostic-like usage - emit the error but keep going, looking for other errors to report. The compilation will fail nonetheless. Expands to () (unit type). emit_call_site_error!: Shortcut for emit_error!(Span::call_site(), ...). Expands to () (unit type). emit_warning!: Like emit_error! but emits a warning instead of an error. The compilation won't fail because of warnings. Expands to () (unit type). emit_call_site_warning!: Shortcut for emit_warning!(Span::call_site(), ...). Expands to () (unit type). diagnostic!: Build an instance of Diagnostic in format-like style. Syntax All the macros have pretty much the same syntax: - abort!(single_expr) Shortcut for Diagnostic::from(expr).abort(). - abort!(span, message) The first argument is an expression the span info should be taken from. The second argument is the error message; it must implement ToString. - abort!(span, format_literal, format_args...) This form is pretty much the same as the second, except format!(format_literal, format_args...) will be used for the message instead of ToString. That's it. abort!, emit_warning!, emit_error! share this exact syntax. abort_call_site!, emit_call_site_warning!, emit_call_site_error! lack the first form and do not take a span in the second and third forms. Those are essentially shortcuts for macro!(Span::call_site(), args...). diagnostic! requires a Level instance between the span and the second argument (the first form is the same). Important! If you have some type from proc_macro or syn to point to, do not call .span() on it but rather use it directly: let ty: syn::Type = syn::parse2(input).unwrap(); abort!(ty, "BOOM"); // ^^ <-- avoid .span() .span() calls work too, but you may experience regressions in message quality. Note attachments - Every macro can have "note" attachments (only the second and third forms). let opt_help = if have_some_info { Some("did you mean `this`?") } else { None }; abort!( span, message; // <--- attachments start with `;` (semicolon) help = "format {} {}", "arg1", "arg2"; // <--- every attachment ends with `;`, // maybe except the last one note = "to_string"; // <--- one arg uses `.to_string()` instead of `format!()` yay = "I see what {} did here", "you"; // <--- "help =" and "hint =" are mapped // to Diagnostic::help, // anything else is Diagnostic::note wow = note_span => "custom span"; // <--- attachments can have their own span // it takes effect only on nightly though hint =? opt_help; // <-- "optional" attachment, gets displayed only if `Some` // must be a single `Option` expression note =?
note_span => opt_help // <-- optional attachments can have custom spans too ); Diagnostic type The Diagnostic type is intentionally designed to be API-compatible with proc_macro::Diagnostic. Not all of the API is implemented, only the part that can be reasonably implemented on stable.
https://docs.rs/proc-macro-error/1.0.4/proc_macro_error/
2022-09-25T05:38:23
CC-MAIN-2022-40
1664030334514.38
[]
docs.rs
Cisco Secure Client is the next generation of Cisco's secure mobility client. Cisco Secure Client directly replaces the AnyConnect secure mobility client. The Umbrella module for Cisco Secure Client (formerly AnyConnect) provides always-on security on any network, anywhere, any time—both on and off your corporate VPN. When you install the Roaming Security module, it installs two services: DNS Security and the Secure Web Gateway (SWG) web proxy. The DNS agent enforces security at the DNS layer to block malware, phishing, and command-and-control callbacks over any port. The SWG agent enforces security for all web traffic, providing full capabilities at the URL and data-transit level. Both the DNS and SWG agents are deployed at installation; however, features are only activated based on your subscription level and desired enablement settings. DNS and SWG are configured with a single profile deployment (OrgInfo.json) and automatically activate the feature set enabled by your settings and license level. There is no separate configuration for adding SWG to a DNS deployment. For existing Umbrella Roaming Client deployments, the Cisco Secure Client Roaming Security module can replace your existing Umbrella Roaming Client. Access to Cisco Secure Client is included in most licenses. The roaming module allows full update control and greatly increased compatibility with VPN software. The Cisco Secure Client Umbrella module is the only Umbrella agent fully compatible with Cisco Secure VPN. For more information about this solution, watch Cisco Product Manager Adam Winn discuss the solution here.
https://docs.umbrella.com/umbrella-user-guide/docs/secure-client-introduction
2022-09-25T04:29:01
CC-MAIN-2022-40
1664030334514.38
[]
docs.umbrella.com
Camera.targetTexture: the destination render texture. Usually cameras render directly to the screen, but for some effects it is useful to make a camera render into a texture. This is done by creating a RenderTexture object and setting it as targetTexture on the camera. The camera will then render into that texture. If targetTexture is null, the camera renders to the screen.
https://docs.unity3d.com/ja/2020.3/ScriptReference/Camera-targetTexture.html
2022-09-25T05:50:29
CC-MAIN-2022-40
1664030334514.38
[]
docs.unity3d.com
Reporting Bugs - xfce4-taskmanager If you are experiencing a bug in xfce4-taskmanager, the best way to help get it fixed is to report it in the Xfce GitLab. Please note that to do this you will need to have or create an account. Before reporting a new bug, please try your best to check whether it has already been reported (see the latest reports below). Click here for a full list of bug reports. - configure script issue - looks for libxmu but doesn't detect libxmu6 (2022/09/21 02:04) - Incorrect memory reported in status bar (fix included) (2022/08/12 23:34) - Highlight CPU/IO/RAM heavy processes (2022/07/24 00:45) - Process name changes when restarting process or task manager (2022/07/11 04:43) - Add toolbar button to toggle tree view? (2022/06/16 11:18) - Extend the status bar (2022/06/08 12:52) - Add an option to configure terminating processes lingering period (2022/06/08 12:51) - A proper drop down menu for the column titles (2022/06/08 12:48) - Quick filter (2022/06/08 12:44) - Disks IO graph (2022/06/08 12:41) - Summarize the data for the children in a group when collapsing it (2022/06/08 12:31) - Refresh rate should be an editable field, not a drop down list (2022/06/08 12:14) - More fields/columns (2022/06/08 12:11) - COPYING file (2022/05/31 10:14) - [UX] Incomplete process path copy (2022/04/13 14:34) - [UX] Please add highlighting of several processes (2022/04/01 20:55) - Add option to filter kernel threads (2022/03/04 02:54) - taskmanager does not remember desktop position (2022/02/11 14:36) - Add column for PSS (2021/02/18 03:49)
https://docs.xfce.org/apps/xfce4-taskmanager/bugs?do=
2022-09-25T04:04:36
CC-MAIN-2022-40
1664030334514.38
[]
docs.xfce.org
Hacking This document addresses hackers who want to get involved in the framework development. Code conventions Valum uses the Vala compiler coding style, and these rules are specifically highlighted: - tabs for indentation - spaces for alignment - 80 characters for comment blocks and 120 for code - always align blocks of assignment around the = sign - remember the little space between a function name and its arguments - doclets should be aligned, grouped and ordered alphabetically General strategies Produce minimal headers, especially if the response has an empty body, as every byte will count. Since GET handles HEAD as well, it is important to verify the request method to avoid spending time producing a body that won't be considered. res.headers.set_content_type ("text/html", null); if (req.method == "HEAD") { size_t bytes_written; return res.write_head (out bytes_written); } return res.expand_utf8 ("<!DOCTYPE html><html></html>"); Use the construct block to perform post-initialization work. It will be called independently of how the object is constructed. Tricky stuff Most of the HTTP/1.1 specification is case-insensitive; in these cases, libsoup-2.4/Soup.str_case_equal must be used to perform comparisons. Try to stay by the book and read the specification carefully to ensure that the framework is semantically correct. In particular, pay attention to the following points: - choice of status code - method is case-sensitive - URI and query are automatically decoded by libsoup-2.4/Soup.URI - headers and their parameters are case-insensitive - \r\n is used for newlines - do not handle Transfer-Encoding, except for the libsoup-2.4 implementation with steal_connection: at this level, it's up to the HTTP server to perform the transformation The framework should rely as much as possible upon libsoup-2.4 to ensure consistent and correct behaviour. Coverage gcov is used to measure coverage of the tests on the generated C code. The results are automatically uploaded to Codecov on a successful build. You can build Valum with coverage by passing the -D b_coverage flag during the configuration step. meson -D b_coverage=true ninja test ninja coverage-html Once you have identified an uncovered region, you can supply a test that covers that particular case and submit a pull request on GitHub. Tests Valum is thoroughly tested for regression with the glib-2.0/GLib.Test framework. Test cases are annotated with @since to track when a behaviour was introduced and to guarantee its backward compatibility. You can reference a GitHub issue by calling Test.bug with the issue number. Test.bug ("123");
https://valum-framework.readthedocs.io/en/stable/hacking/
2022-09-25T05:36:54
CC-MAIN-2022-40
1664030334514.38
[]
valum-framework.readthedocs.io
Google Cloud Storage To set up your Datastream feed with Google Cloud Storage (GCS) from Chartbeat, GCS must be configured to receive the exported data. Once you have configured GCS to receive data, send the information below back to your account manager at Chartbeat: the destination bucket name and the Service Account HMAC key. The following document summarizes how to set up the necessary GCS access. Create a GCS Bucket Before a Datastream feed can be configured, Chartbeat needs a destination to send the data to. For this, you can create a GCS bucket where Chartbeat will send data, or use an existing bucket. Create a Service Account Chartbeat transfers data to GCS through credentials tied to a Service Account associated with the destination bucket's Google Cloud Platform project. Here are instructions to get started on creating and managing service accounts. Generate an HMAC Key Chartbeat authenticates requests to Cloud Storage for your Service Account through hash-based message authentication code (HMAC) keys. This will be the credential you send to your Chartbeat account manager to set up your Datastream feed. To create an HMAC key associated with your Service Account, follow these instructions.
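As a rough sketch of the bucket / service-account / HMAC-key steps above using the google-cloud-storage Python client: the bucket name and service account e-mail are placeholders, and your organization's IAM setup may call for a different approach (the linked Google instructions remain authoritative).

from google.cloud import storage

client = storage.Client()

# 1. Create (or reuse) the destination bucket that Chartbeat will write to.
bucket = client.create_bucket("example-chartbeat-datastream")  # placeholder name

# 2. Assume a service account has already been created, e.g. via the IAM console.
service_account_email = "chartbeat-datastream@example-project.iam.gserviceaccount.com"

# 3. Generate an HMAC key for that service account; the secret is only returned
#    once, so record it and share it with your account manager securely.
hmac_key, secret = client.create_hmac_key(service_account_email=service_account_email)
print("Access ID:", hmac_key.access_id)
print("Secret:", secret)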
https://docs.chartbeat.com/datastream/destination-setup/google-cloud-storage
2022-09-25T04:33:45
CC-MAIN-2022-40
1664030334514.38
[]
docs.chartbeat.com
To create the minikube cluster, run the following command. Linux users should use the kvm2 driver and Mac users should use the hyperkit driver. Other drivers, including the docker driver, are not supported. minikube start --driver=<kvm2|hyperkit> --cni=flannel --cpus=4 --memory=8000 -p=<cluster-name> Run kubectl get nodes to verify your cluster is up and running. Run kubectl config current-context to verify that kubectl is pointing to your cluster. Once your cluster is up, follow the install steps to deploy Pixie.
https://docs.px.dev/installing-pixie/setting-up-k8s/minikube-setup/
2022-09-25T05:18:44
CC-MAIN-2022-40
1664030334514.38
[]
docs.px.dev
Klondike is the most famous patience — most likely because it comes with a well-known operating system. It is played with one deck. The goal in Klondike is to put all cards, as real families, ascending on the foundation. This gets easier once all cards are lying face up in the playing piles. The sequences on the playing piles have to be put there in descending order. The cards should alternate in colors (red and black). You can move whole sequences or parts of them, if the first card fits on another pile. On a free pile you can put a king of any color, or a sequence starting with a king. When you click on the talon, one card from it will be moved to the waste pile. You can move it to the playing piles or the foundation from there. If the talon is empty, you can move the complete waste pile to the talon by clicking on the empty talon. You can look through the cards on the talon as much as you like. Grandfather was introduced to Paul Olav Tvete, the original developer of KPatience, by his grandfather; it is named after him. No other patience collections are known to implement this game variant. In Grandfather, one deck is dealt to seven playing piles. Some cards on each pile are face down on the initial deal. The goal is to put all cards as real families ascending on the foundation piles. You can move every card on every pile if it fits on another card, to build a real sequence in descending order. For example, you can move the five of spades on top of the six of spades, no matter how many cards are on top of the five of spades. Only the six of spades has to be on top of its pile. On a free pile you can place a king (again no matter how many cards are on top of it). If there are no more possible moves, you can redeal the cards. A redeal consists of picking up the cards from the playing piles (pile by pile, left to right) and redealing them in the starting pattern (zigzagging rows of face down cards forming a peak and then left to right rows of face up cards on top). Note that the cards are not shuffled and that cards on the foundation piles are left untouched. You may redeal no more than twice in a single game. Even though the rules are simple and allow many moves, the game is still hard to win. Despite this, or because of it, this game remains a joy to play. Aces Up has simple rules, yet is hard to win. It is played with one deck. The goal is to put all cards besides aces onto the foundation. There should be an ace left on every playing pile afterwards. Each top card that is of the same suit (e.g. spades) and has a lower value than another top card (e.g. six of spades and four of spades) can be put on the foundation by clicking on it. If you cannot move any more cards to the foundation, you can get a new card for each playing pile by clicking on the talon. On a free pile you can move any other card that is on top of a pile. You should use these moves to free piles. That way, new cards can be moved to the foundation. The auto drop feature is disabled in this patience game. Freecell is played with one card deck. You have four free cells in the top left corner. In addition there are four foundation piles, and eight playing piles below. The goal of the game is to have all cards as real families ascending on the foundation. You can achieve this often if you know how to play: Freecell is solvable at a rate of approximately 99.9% — of the first 32,000 deals there is only one unsolvable (11,982 if you want to know).
In the playing piles you have to build descending sequences, where red and black cards alternate. You can put any card in a free cell. You can only move one card that lies on top of a pile or a free cell. Sequences can only be moved if you have enough free space (either free cells or free playing piles) to place the cards. The maximum amount of cards you can move is calculated by: (number of empty free cells + 1) × 2^(number of empty playing piles). To solve this game it is recommended to grab the cards out of the playing sequences in the same order they have to be put into the foundation (first the aces, then the twos, etc.) You should try to keep as many free cells and/or playing piles empty as possible, so you can build sequences as long as possible. Mod3 is played with two card decks. The goal is to put all cards on the top three rows. In those you have to build sequences of the same suit. In the first row you have to create the sequence 2-5-8-J, in the second row the sequence 3-6-9-Q, and in the third row the sequence 4-7-10-K. The suit of the cards must be the same in each sequence, so you can only put a five of hearts on top of a two of hearts. The fourth row is both your waste pile and playing pile. On an empty slot you can put any card from the first three rows, or one from the top of the fourth row. You can put aces on the aces piles, on top of the talon. They are in the game so you have a starting point for creating free slots. If you cannot move any more cards, you can get new cards on the fourth row by clicking on the talon. The auto drop feature is disabled in this patience game. Gypsy is played with two card decks. The aim is to put all cards in real families ascending on the foundation. The playing piles have to be descending, while red and black cards have to alternate. You can only move sequences or single cards. On a free slot you can put any card or sequence. If you cannot move any more cards, you can click on the talon to get new cards on each playing pile. Using the undo feature can ease the game quite a lot, as you have to take many decisions and some of them might turn out to be wrong after you clicked the talon. Forty & Eight is played with two card decks. The goal is to put all cards as real families on the foundation. The playing piles have to be descending. Suits are important: you can only put a five of hearts on a six of hearts, for example. You can only move one card on top of a pile. You can put any card in a free slot. By clicking on the talon you can put a card on the waste pile; from there you can put it on a playing pile or the foundation (KPatience will do this for you). If the talon is empty you can put all cards on the waste pile back on the talon. This works only once: after the second time the talon empties, the game is over. This patience is difficult to solve. With some experience you can solve many of the deals, especially if you use the undo feature from time to time to correct your decisions, and the decisions KPatience makes in putting cards on the foundation. Simple Simon is played with one card deck. The goal is to put all cards as real families on the foundation. In the playing piles you can build sequences. In general you don't have to care about the suits of the cards, but sequences can only be moved if they are part of a real sequence. For example, you can move the six of spades if the five of spades is on top of it, but may not move it if the five of clubs is on top of it. The cards can only be moved to the foundation if all 13 cards of one family lie on top of each other in the playing piles.
Suggestion You should try as soon as possible to move the cards to the correct piles, to create free piles to place cards on temporarily, since you can put any card on those. With enough free room you can build families on free slots independently of the color. If you have all cards in such families you can sort them by color, so they can be moved to the foundation. Yukon is played with one card deck. The goal is to put all cards as real families ascending on the foundation. The sequences on the playing piles have to be descending with alternating red and black cards. You can move every face up card no matter how many cards are on top of it. So you can put a five of hearts on a six of spades if that one is on top of its pile. In a free slot you can put a king of any color (again, no matter how many cards are on top of it). Grandfather's clock is a simple patience game. With some experience you should be able to solve most deals. It is played with one card deck. The aim is to put the cards as real ascending sequences on the foundation. The foundation is on the right-hand side and consists of 12 piles that form the shape of a clock. The nine is at 12 o'clock, the queen is at 3 o'clock, the three is at 6 o'clock and the six is at 9 o'clock. There are 8 playing piles beside the clock and on each are 5 cards. On the playing piles you can build descending sequences. The color of the cards is not important. You can only move one card at a time. Golf is played with one card deck. The goal of Golf is to move all the cards on the tableau to the foundation. The layout of golf solitaire is straightforward. At the beginning of the game you will see the tableau. On it are seven columns each containing five cards. The talon and the foundation are below. Playing golf solitaire is simple, but requires strategy to win. The cards at the base of each column on the tableau are available for play. Available cards are built upon the top foundation card in ascending or descending sequence regardless of suit. If there are no moves available a card may be dealt from the talon to the foundation. The game is over when all the cards in the talon have been dealt and there are no more possible moves. Spider is played with two card decks. The cards are dealt out into 10 playing piles, 4 of 6 cards and 6 of 5 cards each. This leaves 50 cards that can be dealt out 10 at a time, one on each playing pile. In the playing piles, a card can be placed on another card of any suit and of one higher value. A sequence of descending cards of the same suit may be moved from one playing pile to another. The goal of spider is to put all cards as real families descending from Kings anywhere in the playing piles. When such a family is built in a playing pile, it is removed to the lower-left corner of the window. The different levels determine how many suits are dealt - Easy uses 1 suit, Medium uses 2 suits, and Hard uses all 4 suits. The game is fairly easy to win at Easy level, and very difficult to win at Hard level.
https://docs.kde.org/trunk5/en/kdegames/kpat/rules-specific.html
2017-04-23T10:04:14
CC-MAIN-2017-17
1492917118519.29
[['/trunk5/en/kdoctools5-common/top-kde.jpg', None]]
docs.kde.org
AttributesToGet AttributesToGet is an array of one or more attributes to retrieve from DynamoDB. If no attribute names are provided, then all attributes will be returned. If any of the requested attributes are not found, they will not appear in the result. AttributesToGet allows you to retrieve attributes of type List or Map; however, it cannot retrieve individual elements within a List or a Map. Note that AttributesToGet has no effect on provisioned throughput consumption. DynamoDB determines capacity units consumed based on item size, not on the amount of data that is returned to an application. Use ProjectionExpression Instead Suppose you wanted to retrieve an item from the Music table, but only wanted to return some of the attributes. You could use a GetItem request with an AttributesToGet parameter, as in this AWS CLI example: aws dynamodb get-item \ --table-name Music \ --attributes-to-get '["Artist", "Genre"]' \ --key '{ "Artist": {"S":"No One You Know"}, "SongTitle": {"S":"Call Me Today"} }' But you could use a ProjectionExpression instead: aws dynamodb get-item \ --table-name Music \ --projection-expression "Artist, Genre" \ --key '{ "Artist": {"S":"No One You Know"}, "SongTitle": {"S":"Call Me Today"} }'
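The same ProjectionExpression request can be made from boto3; here is a minimal sketch mirroring the CLI example above (table, key, and attribute names taken from that example).

import boto3

dynamodb = boto3.client("dynamodb")

# Equivalent of the ProjectionExpression CLI example above.
response = dynamodb.get_item(
    TableName="Music",
    Key={
        "Artist": {"S": "No One You Know"},
        "SongTitle": {"S": "Call Me Today"},
    },
    ProjectionExpression="Artist, Genre",
)
print(response.get("Item"))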
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/LegacyConditionalParameters.AttributesToGet.html
2017-11-17T23:15:44
CC-MAIN-2017-47
1510934804019.50
[]
docs.aws.amazon.com
The Database Backend Previously, in order to use Django's ORM with the App Engine Datastore, django-nonrel was required, along with djangoappengine. That's now changed. With Djangae you can use vanilla Django with the Datastore. Heavily inspired by djangoappengine (thanks Waldemar!), Djangae provides an intelligent database backend that allows vanilla Django to be used, and makes use of many of the Datastore's speed and efficiency features such as projection queries. Here's the full list of magic: - Database-level enforcement of unique and unique_together constraints. - A transparent caching layer for queries which return a single result (.get or any query filtering on a unique field or unique-together fields). This helps to avoid Datastore consistency issues. - Automatic creation of additional index fields containing pre-manipulated values, so that queries such as __iexact work out of the box. These index fields are created automatically when you use the queries. Use settings.GENERATE_SPECIAL_INDEXES_DURING_TESTING to control whether that automatic creation happens during tests. - Support for queries which weren't possible with djangoappengine, such as OR queries using Q objects. - A collection of Django model fields which provide useful functionality when using the Datastore: a ListField, SetField, RelatedSetField, ShardedCounterField and JSONField. See the Djangae Model Fields for full details. Roadmap 1.0-beta - Support for ancestor queries. Lots of tests What Can't It Do? Due to the limitations of the App Engine Datastore (it being a non-relational database for a start), there are some things which you still can't do with the Django ORM when using the djangae backend. The easiest way to find these out is to just build your app and look out for the NotSupportedError exceptions. Notable Limitations Here is a brief list of hard limitations you may encounter when using the Djangae datastore backend: - bulk_create() is limited to 25 instances if the model has active unique constraints, or the instances being inserted have the primary key specified. In other cases the limit is 1000. - filter(field__in=[...]) queries are limited to 100 entries (by default) in the list if field is not the primary key - filter(pk__in=[...]) queries are limited to 1000 entries - You are limited to a single inequality filter per query, although excluding by primary key is not included in this count - Queries without primary key equality filters are not allowed within an atomic block - Queries with an inequality filter on a field must be ordered first by that field - Only 25 individual instances can be retrieved or saved within an atomic block, although you can get/save the same entity multiple times without increasing the allowed count - Primary key values of zero are not allowed - Primary key string values must not start with a leading double underscore (__) - ManyToManyField will not work reliably/efficiently - use RelatedSetField or RelatedListField instead - Transactions. The Datastore has transactions, but they are not "normal" transactions in the SQL sense. Transactions should be done using djangae.db.transaction.atomic. - If unique constraints are enabled, then you are limited to a maximum of 25 unique or unique_together constraints per model (see Unique Constraint Checking). - You are also restricted to altering 12 unique field values on an instance in a single save - select_related does nothing. It is ignored when specified, as joins are not possible on the datastore.
This can result in slow performance on queries which are not designed for the datastore. prefetch_related works correctly, however. There are probably more, but the list changes regularly as we improve the datastore backend. If you find another limitation not mentioned above, please consider sending a documentation PR. Other Considerations When using the Datastore you should bear in mind its capabilities and limitations. While Djangae allows you to run Django on the Datastore, it doesn't turn the Datastore into a relational database. There are things which the datastore is good at (e.g. handling huge bandwidth of reads and writes) and things which it isn't good at (e.g. counting). Djangae is not a substitute for knowing how to use the Datastore. Using Other Databases You can use Google Cloud SQL or sqlite (locally) instead of or alongside the Datastore. Note that the database backend and settings for the Datastore remain the same whether you're in local development or on App Engine production; Djangae switches between the SDK and the production datastore appropriately. However, with Cloud SQL you will need to switch the settings yourself, otherwise you could find yourself developing on your live database! Here's an example of how your DATABASES might look in settings.py if you're using both Cloud SQL and the Datastore. from djangae.environment import is_development_environment DATABASES = { 'default': { 'ENGINE': 'djangae.db.backends.appengine' } } if not is_development_environment(): DATABASES['sql'] = { 'ENGINE': 'django.db.backends.mysql', 'HOST': '/cloudsql/YOUR_GOOGLE_CLOUD_PROJECT:YOUR_INSTANCE_NAME', 'NAME': 'YOUR_DATABASE_NAME', 'USER': 'root', } else: DATABASES['sql'] = { 'ENGINE': 'django.db.backends.sqlite3', 'NAME': 'development.sqlite3' } See the Google documentation for more information on connecting to Cloud SQL via the MySQL client and from external applications. Datastore Caching Djangae has a built-in caching layer, similar to the one built into NDB - only better! You shouldn't even notice the caching layer at work; it's fairly complex, and to understand the behaviour you are best reading through the caching tests. But here's a general overview: - There are two layers of caching, the context cache and the memcache cache - When possible, if you get/save an entity it will be cached by its primary key value and its unique constraint combinations - This protects against HRD inconsistencies in many situations, and it happens automagically - The caching layer is heavily tied into the transaction.atomic decorator. If you use the db.RunInTransaction stuff you are going to have a hard time, so don't do that! - You can disable the caching by using the disable_cache context manager/decorator. disable_cache takes two boolean parameters, context and memcache, so you can configure which caches you want disabled. Be careful though, don't toggle the caching on and off too much or you might get into trouble (I'm sure there's a situation you can break it but I haven't figured out what it is) - The context cache has a complex stack structure: when you enter a transaction the stack is pushed, and when you leave a transaction it's popped.
This is to ensure the cache gives you the right results at the right time - The context cache is cleared on each request, and it is thread-local - The memcache cache is not cleared; it's global across all instances and so is updated only when a consistent Get/Put outside a transaction is made - Entities are evicted from memcache if they are updated inside a transaction (to prevent crazy) The following settings are available to control the caching: DJANGAE_CACHE_ENABLED (default True). Setting it to False turns it all off; I really wouldn't suggest doing that! DJANGAE_CACHE_TIMEOUT_SECONDS (default 60 * 60). The length of time stuff should be kept in memcache. DJANGAE_CACHE_MAX_CONTEXT_SIZE (default 1024 * 1024 * 8). This is (approximately) the max size of a local context cache instance. Each request and each nested transaction gets its own context cache instance, so be aware that this total can rapidly add up, especially on F1 instances. If you have an F2 or F4 instance you might want to increase this value. If you hit the limit, the least used entities will be evicted from the cache. DJANGAE_CACHE_MAX_ENTITY_COUNT (default 8). This is the max number of entities returned by a pk__in query which will be cached in the context or memcache upon their return. If more results than this number are returned, then the remainder won't be cached. Datastore Behaviours The Djangae database backend for the Datastore contains some clever optimisations and integrity checks to make working with the Datastore easier. This means that in some cases there are behaviours which are either not the same as the Django-on-SQL behaviour or not the same as the default Datastore behaviour. So for clarity, below is a list of statements which are true: General Behaviours - Doing MyModel.objects.create(primary_key_field=value) will do an insert, so will explicitly check that an object with that PK doesn't already exist before inserting, and will raise an IntegrityError if it does. This is done in a transaction, so there is no need for any kind of manual transaction or existence checking. - Doing an update(field=new_value) query is transactionally safe (i.e. it uses transactions to ensure that only the specified field is updated), and it also automatically avoids the Stale objects issue (see Eventual consistency below), so it only updates objects which definitely match the query. But it still may suffer from the Missing objects issue. See notes about the speed of update() queries in the Speed section below. - A .distinct() query is only possible if the query can be done as a projection query (see the 'Speed' section below). Eventual Consistency See the App Engine documentation for background. The Datastore's eventual consistency behaviour gives us 2 main issues: - Stale objects: This is where querying for objects by a non-primary-key field may return objects which no longer match the query (because they were recently modified) or which were recently deleted. - Missing objects: This is where querying for objects by a non-primary-key field may not return recently created or recently modified objects which do match the query. There are various solutions and workarounds for these issues. - If pk__in is used in the query (with or without other filters) then the query will be consistent; it will return all matching objects and will not return any non-matching objects. - Accessing the queryset of a RelatedSetField or RelatedListField automatically gives you the consistency of a pk__in filter (because that's exactly what it's doing underneath).
So my_obj.my_related_set_field.all() is consistent. - To avoid the Stale objects issue, you can do an initial values_list('pk') query and pass the result to a second query, e.g. MyModel.objects.filter(size='large', pk__in=list(MyModel.objects.filter(size='large').values_list('pk', flat=True))). Notes: - This causes 2 queries, so is slightly slower, although the nested values_list('pk') query is fast as it uses a Datastore keys-only query. - You need to cast the nested PKs query to list, as otherwise Django will try to combine the inner query as a subquery, which the Datastore cannot handle. - You need to include the additional filters (in this case size=large) in both the inner and outer queries. - This technique only avoids the Stale objects issue, it does not avoid the Missing objects issue. djangae.db.consistency.ensure_instance_consistent It's very common to need to create a new object, and then redirect to a listing of all objects. This annoyingly falls foul of the datastore's eventual consistency. As a .all() query is eventually consistent, it's quite likely that the object you just created or updated either won't be returned, or if it was an update, will show stale data. You can fix this by using djangae.contrib.consistency, or if you want a more lightweight approach you can use djangae.db.consistency.ensure_instance_consistent like this: queryset = ensure_instance_consistent(MyModel.objects.all(), updated_instance_pk) Be aware though, this will make an additional query for the extra object (although it's very likely to hit the cache). There are also caveats: - If no ordering is specified, the instance will be returned first - Only ordering on the queryset is respected; if you are relying on model ordering the instance may be returned in the wrong place (patches welcome!) - This causes an extra iteration over the returned queryset once it's retrieved There is also an equivalent function for ensuring the consistency of multiple items called ensure_instances_included. Speed - Using a pk__in filter in addition to other filters will usually make the query faster. This is because Djangae uses the PKs to do a Datastore Get operation (which is much faster than a Datastore Query) and then does the other filtering in Python. - Doing .values('pk') or .values_list('pk') will make a query significantly faster because Djangae performs a keys-only query. - Doing .values('other') type queries will be faster if Djangae is able to perform a Datastore projection query. This is only possible if: - None of the fetched fields are also being filtered on (which would be a weird thing to do anyway). - The query is not ordered by primary key. - All of the fetched fields are indexed by the Datastore (i.e. are not list/set fields, blob fields or text (as opposed to char) fields). - The model does not have concrete parents. - Doing an .only('foo') or .defer('bar') with a pk__in=[...] filter may not be more efficient. This is because we must perform a projection query for each key, and although we send them over the RPC in batches of 30, the RPC costs may outweigh the savings of a plain old datastore.Get. You should profile and check to see whether using only/defer results in a speed improvement for your use case. - Due to the way it has to be implemented on the Datastore, an update() query is not particularly fast, and other than avoiding calling the save() method on each object it doesn't offer much speed advantage over iterating over the objects and modifying them.
However, it does offer significant integrity advantages; see the General behaviours section above. - Doing filter(pk__in=Something.objects.values_list('pk', flat=True)) will implicitly evaluate the inner query while preparing to run the outer one. This means two queries, not one like SQL would do! - IN queries and queries with OR branches which aren't filtered on PK result in multiple queries to the datastore. By default you will get an error if you exceed 100 IN filters, but this is configurable via the DJANGAE_MAX_QUERY_BRANCHES setting. Be aware that the more IN/OR filters in a query, the slower the query becomes. 100 is already a high value for this setting, so raising it isn't recommended (it's probably better to rethink your data structure or querying). Unique Constraint Checking IMPORTANT: Make sure you read and understand this section before configuring your project. tl;dr: Constraint checking is costly; you might want to disable it globally using settings.DJANGAE_DISABLE_CONSTRAINT_CHECKS and re-enable it on a per-model basis. Djangae by default enforces the unique constraints that you define on your models. It does so by creating so-called "unique markers" in the datastore. Unique constraint checks have the following caveats... - Unique constraints drastically increase your datastore writes. Djangae needs to create a marker for each unique constraint on each model, for each instance. This means if you have one unique field on your model and you call save(), Djangae must do two datastore writes (one for the entity, one for the marker) - Unique constraints increase your datastore reads. Each time you save an object, Djangae needs to check for the existence of unique markers. - Unique constraints slow down your save()s. See above: each time you save, a bunch of stuff needs to happen. - Updating instances via the datastore API (NDB, DB, or datastore.Put and friends) will break your unique constraints. Don't do that! - Updating instances via the datastore admin will do the same thing; you'll be bypassing the unique marker creation. - There is a limit of 25 unique or unique_together constraints per model. However, unique markers are very powerful when you need to enforce uniqueness. They are enabled by default simply because that's the behaviour that Django expects. If you don't want to use this functionality, you have the following options: - Don't mark fields as unique, or as unique_together in the Meta class - this only works for your models; contrib models will still use unique markers - Disable unique constraints on a per-model basis via the Djangae meta class (again, this only works on the models you specify): class Djangae: disable_constraint_checks = True - Disable constraint checking globally via settings.DJANGAE_DISABLE_CONSTRAINT_CHECKS The disable_constraint_checks per-model setting overrides the global DJANGAE_DISABLE_CONSTRAINT_CHECKS, so if you are concerned about speed/cost then you might want to disable globally and override on a per-model basis by setting disable_constraint_checks = False on models that require constraints. On Delete Constraints In general, Django's emulation of SQL ON DELETE constraints works with djangae on the datastore. Due to eventual consistency, however, the constraints can fail. Take care when deleting related objects in quick succession: a PROTECT constraint can wrongly cause a ProtectedError when deleting an object that references a recently deleted one. Constraints can also fail to raise an error if a referencing object was created just prior to deleting the referenced one.
Similarly, when using ON CASCADE DELETE (the default behaviour), a newly created referencing object might not be deleted along with the referenced one.
Transactions
Django's transaction decorators have no effect on the Datastore, which means that when using the Datastore:
- django.db.transaction.atomic and non_atomic will have no effect.
- The ATOMIC_REQUESTS and AUTOCOMMIT settings in DATABASES will have no effect.
- Django's get_or_create will not have the same behaviour when dealing with collisions between threads. This is because it uses Django's transaction manager rather than Djangae's.
- If your get aspect filters by PK then you should wrap get_or_create with djangae.db.transaction.atomic. A collision with another thread will result in a TransactionFailedError.
- If your get aspect filters by a unique field or unique-together fields, but not by PK, then (assuming you're using Djangae's unique markers) you won't experience any data corruption, but a collision with another thread will throw an IntegrityError.
- If your get aspect does not filter on any unique or unique-together fields then you should fix that.
The following functions are available to manage transactions:
djangae.db.transaction.atomic - Decorator and context manager. Starts a new transaction; accepts xg, independent and mandatory args.
djangae.db.transaction.non_atomic - Decorator and context manager. Breaks out of any current transactions so you can run queries outside the transaction.
djangae.db.transaction.in_atomic_block - Returns True if inside a transaction, False otherwise.
Do not use google.appengine.ext.db.run_in_transaction and friends, it will break.
Multiple Namespaces
It's possible to create separate "databases" on the datastore via "namespaces". This is supported in Djangae through the normal Django multiple database support. To configure multiple datastore namespaces, you can add an optional "NAMESPACE" to the DATABASES setting:
DATABASES = {
    'default': {
        'ENGINE': 'djangae.db.backends.appengine'
    },
    'archive': {
        'ENGINE': 'djangae.db.backends.appengine',
        'NAMESPACE': 'archive'
    }
}
If you do not specify a NAMESPACE for a connection, then the Datastore's default namespace will be used (i.e. no namespace). You can make use of Django's routers, the using() method, and save(using='...') in the same way as normal multi-database support. Cross-namespace foreign keys aren't supported. Also, namespaces affect caching keys and unique markers (which are also restricted to a namespace).
Special Indexes
The App Engine datastore backend handles certain queries which are unsupported natively by adding hidden fields to the Datastore entities or by storing additional child entities. The mechanism for adding these fields and then using them during querying is called "special indexing". For example, querying for name__iexact is not supported by the Datastore. In this case Djangae generates an additional entity property with the name value lower-cased, and then, when performing an iexact query, will lower-case the lookup value and use the generated column rather than the name column. When you run a query that requires special indexes for the first time, an entry will be added to a generated file called djangaeidx.yaml. You will see this file appear in your project root. From that point on, any entities that are saved will have the additional property added. If a new entry appears in djangaeidx.yaml, you will need to resave all of your entities of that kind so that they will be returned by query lookups.
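To make the special indexing behaviour above concrete, here is a minimal sketch. The model and field names are purely illustrative assumptions, not part of Djangae itself:

```python
from django.db import models


class Contact(models.Model):
    # Hypothetical model used only to illustrate special indexing.
    name = models.CharField(max_length=100)


# The first time this query runs, Djangae records an iexact index for
# Contact.name in djangaeidx.yaml; entities saved after that point gain
# the extra lower-cased property that is used to answer the lookup.
matches = Contact.objects.filter(name__iexact="Alice")
```

As noted above, any entities saved before the djangaeidx.yaml entry existed must be re-saved before they will be returned by such lookups.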
contains and icontains Filters When you use __contains or __icontains in a query, the djangaeidx.yaml file will be updated so that all subsequent entity saves will generate an additional descendent entity per-instance-field to store indexing data for that field. This approach will add an additional Put() for each save() and an additional Query() for each __contains look up. Previously, Djangae used to store this index data on the entity itself which caused a number of problems that are now avoided: - The index data had more permutations than were necessary. This was each set of possible characters had to be stored in a List property so that the lookup could use an equality query. Djangae couldn't rely on an inequality (which would allow storing fewer permutations) because that would greatly restrict the queries that a user could perform. - The large number of permutations caused the entities to bloat with additional properties and it wasn't possible to filter them out when querying which means every query (whether using containsor not) would transfer a large amount of data over the RPC, slowing down every single query on an instance which had contains data indexed. - The implementation was flawed. It was originally thought that list properties were limited to 500 entries, this may have been true at some point in datastore history but it's certainly not true now. Because of this incorrect assumption, indexed data was split across properties which made the code very confusing For now, the legacy behaviour is available by setting DJANGAE_USE_LEGACY_CONTAINS_LOGIC = True in your settings file. This setting will be removed so it's recommended that upon upgrading to Djangae 0.9.10 you resave all of your entities (that use contains) instead. Resaving will not remove old indexed properties, we hope to provide a migration file in future that will do that for you. Querying date and datetime Fields with contains The same as when Django is used with a SQL database, Djangae's indexing allows contains and icontains filters to be used on DateField and DateTimeField. When this is used, the field values are converted to ISO format and then indexed as strings, allowing you to perform contains queries on any part of the ISO format string. Distributing djangaeidx.yaml If you are writing a portable app, and your app makes queries which require special indexes, you can ship a custom djangaeidx.yaml in the root of your Django app. The indexes in this file will be combined with the user's main project djangaeidx.yaml at runtime. Migrations Djangae has support for migrating data using the Django migrations infrastructure. See Migrations.
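Returning to the contains indexing described in this section, here is a hedged sketch of a date-based lookup; the model and data are assumptions for illustration only:

```python
from django.db import models


class Event(models.Model):
    # Hypothetical model for illustration only.
    starts = models.DateField()


# The date is indexed as its ISO string (e.g. "2017-11-17"), so any part
# of that string can be matched; this finds all events in November 2017.
november_events = Event.objects.filter(starts__contains="2017-11")
```

As with other special indexes, the first such query adds an entry to djangaeidx.yaml, and subsequent saves write the extra descendant index entities described above.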
https://djangae.readthedocs.io/en/latest/db_backend/
2017-11-17T23:12:32
CC-MAIN-2017-47
1510934804019.50
[]
djangae.readthedocs.io
To deploy the VMware Identity Manager connector, you install the connector virtual appliance in vCenter Server, power it on, and activate it using an activation code that you generate in the VMware Identity Manager administration console. You also configure appliance settings such as setting passwords. After you install and configure the connector, you go to the VMware Identity Manager administration console to set up the connection to your enterprise directory, enable authentication adapters on the connector, and enable outbound mode for the connector.
https://docs.vmware.com/en/VMware-AirWatch/services/com.vmware.vidm-cloud-deployment/GUID-FB6DDF8C-ED7F-4A2B-A494-F20A45F9EF9B.html
2017-11-17T23:35:33
CC-MAIN-2017-47
1510934804019.50
[]
docs.vmware.com
Ensure that your system meets certain database and browser requirements when working with App Volumes.
https://docs.vmware.com/en/VMware-App-Volumes/2.12/com.vmware.appvolumes.user.doc/GUID-5C1CFB43-1311-4E0D-B0E2-9E45165A5353.html
2017-11-17T23:35:39
CC-MAIN-2017-47
1510934804019.50
[]
docs.vmware.com
AWS services or capabilities described in AWS Documentation may vary by region/location. Click Getting Started with Amazon AWS to see specific differences applicable to the China (Beijing) Region.
You can take snapshots of your gateway volumes on a scheduled or ad hoc basis. This API enables you to delete a snapshot schedule for a volume. For more information, see Working with Snapshots. In the DeleteSnapshotSchedule request, you identify the volume by providing its Amazon Resource Name (ARN). To list or delete a snapshot, you must use the Amazon EC2 API; for more information, see the Amazon Elastic Compute Cloud API Reference.
Namespace: Amazon.StorageGateway.Model
Assembly: AWSSDK.dll
Version: (assembly version)
The DeleteSnapshotScheduleRequest type exposes the following members.
.NET Framework: Supported in: 4.5, 4.0, 3.5
.NET for Windows Store apps: Supported in: Windows 8
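As a rough usage sketch (not taken from the official samples), the request might be built and sent as below. The volume ARN and region are placeholders, and the synchronous DeleteSnapshotSchedule client method is assumed to be available in this SDK version:

```csharp
using Amazon;
using Amazon.StorageGateway;
using Amazon.StorageGateway.Model;

// Hedged sketch: region and ARN are placeholder assumptions.
var client = new AmazonStorageGatewayClient(RegionEndpoint.USEast1);

var request = new DeleteSnapshotScheduleRequest
{
    // Identify the volume whose snapshot schedule should be removed.
    VolumeARN = "arn:aws:storagegateway:us-east-1:111122223333:gateway/sgw-EXAMPLE/volume/vol-EXAMPLE"
};

DeleteSnapshotScheduleResponse response = client.DeleteSnapshotSchedule(request);
```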
http://docs.aws.amazon.com/sdkfornet/latest/apidocs/items/TStorageGatewayDeleteSnapshotScheduleRequestNET45.html
2017-11-17T23:30:47
CC-MAIN-2017-47
1510934804019.50
[]
docs.aws.amazon.com
ComboBoxField Class
Overview
This class corresponds to the FormFieldType.ComboBox enum value and represents a drop-down control with choices that can be selected.
Properties
ComboBoxField provides the following properties:
Value: Gets or sets the single choice that is selected. This choice is represented by the ChoiceOption class, which has Value and UserInterfaceValue properties. The UserInterfaceValue property is optional and, when null, the Value property is used to display the choice in the user interface.
DefaultValue: Gets or sets the default selected choice used when the AcroForm is reset to its default values.
Widgets: The collection of Widget annotations, which represent the field on the PDF pages. The widgets are created by using the collection's AddWidget() method and can be removed from the same collection.
HasEditableTextBox: Boolean value indicating whether the drop-down should provide an additional text box input, allowing the user to input a value that may be different from the provided choices.
ShouldSpellCheck: Boolean value indicating whether the text should be spell checked during its input.
[C#] Example 1: Create a ComboBoxField and add it to a page
ComboBoxField comboBoxField = new ComboBoxField("SampleComboBox");
comboBoxField.Options.Add(new ChoiceOption("First Value"));
comboBoxField.Options.Add(new ChoiceOption("Second Value"));
comboBoxField.Options.Add(new ChoiceOption("Third Value"));
comboBoxField.Value = comboBoxField.Options[1];
VariableContentWidget widget = comboBoxField.Widgets.AddWidget();
widget.Rect = new Rect(100, 100, 200, 30);
document.AcroForm.FormFields.Add(comboBoxField);
document.Pages[0].Annotations.Add(widget);
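Building on Example 1 and the properties listed above, a short follow-up sketch (reusing the same comboBoxField variable; this is an illustration, not an official sample):

```csharp
// Allow free-text input in addition to the predefined choices,
// and skip spell checking of whatever the user types.
comboBoxField.HasEditableTextBox = true;
comboBoxField.ShouldSpellCheck = false;

// Read back the currently selected choice; Value is a ChoiceOption,
// so its own Value property holds the underlying string.
string selected = comboBoxField.Value != null ? comboBoxField.Value.Value : null;
```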
https://docs.telerik.com/devtools/document-processing/libraries/radpdfprocessing/model/interactive-forms/form-fields/comboboxfield
2017-11-17T23:20:39
CC-MAIN-2017-47
1510934804019.50
[]
docs.telerik.com
Chart: Overview
TKChart is a versatile charting component that offers full customization, great performance and an intuitive object model. Its API allows creating complex charts with stunning animations and appearance. TKChart's main features include:
- Various series types: bar, column, line, spline, area, pie, donut, scatter, bubble, financial series and indicators.
- Stacking of bar, column, line and area series, including stack 100 mode.
- Pan/zoom and selection functionality.
- Animations that use Core Animation and UIKit Dynamics.
- Multiple axes.
- Annotations.
- Trackball.
https://docs.telerik.com/devtools/xamarin/nativecontrols/ios/chart/overview.html
2017-11-17T23:18:30
CC-MAIN-2017-47
1510934804019.50
[array(['../images/chart-overview001.png', None], dtype=object)]
docs.telerik.com
Point Labels: Overview
TKChart supports point labels. Point labels are visual elements that are placed on the plot at the location of series data points, showing the data point's value or another string of your choice. By default, point labels are hidden. If you would like to show them, you should set TKChartPointLabelStyle's textHidden property to NO. You can also alter the offset origin of the labels using the labelOffset property.
barSeries.Style.PointLabelStyle.TextHidden = false;
barSeries.Style.PointLabelStyle.LabelOffset = new UIOffset(15, 0);
https://docs.telerik.com/devtools/xamarin/nativecontrols/ios/chart/pointlabels/overview.html
2017-11-17T23:18:08
CC-MAIN-2017-47
1510934804019.50
[array(['../../images/chart-point-labels-overview001.png', None], dtype=object) ]
docs.telerik.com
string_format(val, tot, dec);
Returns: String
Turns a real number into a string using your own formatting, where you can choose how many "places" are saved to the string and how many decimal places are saved. Both can be very handy: some games prefer to display a score as a set number of digits, while control over decimal places is useful when you need more accuracy than the two decimal places that string() provides. If the number of places specified is greater than the number of digits in the value to be shown, then spaces will be added before the value to make up the difference (see the example below), and zeros will be added to the right of the decimal point if the value given has fewer decimal places than the number specified.
str1 = string_format(1234, 8, 0);
str2 = string_format(pi, 1, 10);
str3 = string_format(pi, 5, 5);
This will set str1 to " 1234", str2 to "3.1415926535" and str3 to " 3.14159".
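For instance, the fixed-width score display mentioned above could use the function like this; the drawing position and the six-place width are arbitrary choices for illustration:

```gml
/// Draw event of a hypothetical HUD object
var score_text = string_format(score, 6, 0); // pad the score out to 6 places
draw_text(32, 32, "SCORE:" + score_text);
```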
http://docs.yoyogames.com/source/dadiospice/002_reference/strings/string_format.html
2017-11-17T23:02:11
CC-MAIN-2017-47
1510934804019.50
[]
docs.yoyogames.com
General preferences Note that many preferences are now set on a per-store basis i.e. each store has its own setting for some preferences. See the Virtual stores preferences section for more information. Preferences are used to configure some of mSupply's functionality to more closely match your needs. mSupply is very flexible and highly configurable so there are lots of preferences! To access them, choose File > Preferences… from the menus. There is a scrolling side bar on the left containing a list of tabs; click on the one you want to see the preferences on that tab. General tab Organisation name: What is filled in here is quite important. Not only will it print on the top of invoices and various reports, but it is also tied to your registration code. Please think carefully about what it should be before entering it. If you need to change your organisation name, please do so and then re-contact [email protected] for a new registration code. Address lines (1, 2…): Enter the address information about your organisation that you wish to appear on invoices. The Register button This button is used for registering mSupply. (It will be dimmed if you have already registered). If you have not registered, clicking this button will display the registration details window: - Here you are provided with the information needed to register. Two of these, if changed, will invalidate your mSupply registration: - Your organisation name - Your hardware ID - Clicking the Copy details button will copy this information to the clipboard, which you can paste into an email and send to [email protected]. - Once we have received this registration information we will generate a registration code and send it back to you. - The code is entered by clicking the Enter code button in the window shown above. You will then be shown another window where you can enter the registration details you have been supplied: Your registration instructions supplied with the registration code will describe what information to put in each field. - mSupply registration codes are specific to the Hardware ID, organisation name, Number of users and Expiry date of your licence (if it is time-limited - nearly all mSupply licenses are NOT time limited). - If you change either your organisation name or the computer on which you are running mSupply, you will need to contact Sustainable Solutions for a new code. - From version 3.85, the registration code depends on the machine UUID which is unique to every single computer and consists of 36. If your machine UUID is blank, please make sure that windows program CMD is allowed to run (it may be blocked by anti-virus or anti-malware software). Please make sure you keep mSupply as a trusted application and unblocked by any anti-virus or anti-malware software you are using. - For versions before 385, the registration code is dependent on the MAC address of the computer. If you have a 3G USB dongle (modem) plugged into the computer at the time you click on the registration button, Hardware ID will likely be based on the MAC address of the 3G USB dongle rather than that of the computer's Ethernet card. This can cause problems because if the 3G USB dongle is removed or changed at a later date, mSupply can become unregistered. To avoid this, make sure mSupply uses the MAC address of its inbuilt Ethernet card rather than the 3G USB dongle: - Disconnect any 3G USB dongles from the computer and restart mSupply. 
- Proceed with retrieving the registration details as described above and send them to us to create the unlocking code for you. - If the Hardware ID field is empty, for pre-3.85 versions this indicates that you do not have an Ethernet card or 3G USB dongle installed on your computer and for version 3.85 and above that mSupply or the Windows CMD programme is being blocked by anti-virus or anti-malware software. If you need help, please consult Sustainable Solutions. - If you move your data file to a new computer you will need to obtain a new registration code from us and enter it within 3 months to prevent mSupply not working. After entering your registration details and clicking on the OK button in the screenshot above, if the datafile is set to use synchronisation then the following window will appear: This is a very important window and you must be careful to make the right choice! When a datafile is unregistered, synchronisation is prevented from running for data security reasons. Your datafile is just about to be registered so you are being asked whether you want synchronisation to continue working or whether it should be stopped. If you select “Stop Synchronisation”, then the synchronisation settings will be cleared and synchronisaiton will not work. You would select this option if you were using a copy of live data for training, for example. If you select “Keep Synchronisation working”, then synchronisation will keep working (surprise!). You would use this option if you were using this datafile in a live synchronisation system, either for the first time or if you've moved it to a new computer. When you have made your choice, type “I understand” in the textbox as confirmation (the consequences of turning on synchronisation to the live system for a duplicate datafile intended to be used in a training environment can be dire!) and click on the OK button. Whichever choice you have made, mSupply will now quit and you will need to restart it. Hooray, the registration process is now copmplete! Other fields on the General tab Default customer: Leave this field blank for normal operation. If you usually (or always) only issue to one customer, enter that customer's code here. You must set the value to the name code of an existing customer. Doing so will mean that this customer's details are automatically filled in when you create a new customer invoice. Default margin for suppliers: The percentage margin that will be filled in when you enter a new supplier. This value can be edited for each supplier at any time. Enter “0” if you do not apply a mark-up to items you sell (for example, if you are issuing stock to hospital wards at cost). Default tax rate The rate entered here will automatically be applied to customer and supplier invoices. Note that this amount can be edited when you are entering an invoice by clicking on the tax rate at the bottom of the invoice entry window. Period Closing: There are two fields allowing the entry of dates: The Closed date is the date prior to which no transactions can be entered. Setting the closed date means that all transactions up to that date are finalised and mSupply will not allow the entry of any transactions with an earlier date. The closed date can not be moved backwards- only forwards. The Locked date is the earliest date that can be entered for a transaction. The lock date can be moved forwards as far as the oldest non-finalised transaction, and backwards as far as the closed date If you try to set an invalid locked or closed date you will be warned. 
Be very careful setting the closed date. Changes to the closed date can not be undone. Inactive user logout: This setting is only visible if you are using an mSupply client in a multi-user setup, not if you're using the single user version of mSupply (because it's not applicable to the single user version). This is where you set the time in minutes before the mSupply client is closed and the user is automatically logged out. To disable the function, set the time to 0 minutes (the default setting). This is really useful for preventing inactive users holding onto a user license when they've forgotten to logout of mSupply. But BEWARE: any unsaved work will be lost when mSupply is closed so remember to save your work regularly. The good news is that most of the things you do in mSupply are automatically saved as you do them (adding and removing lines to customer and supplier invoices, inventory adjustments, stocktakes etc.) so the chances of losing work are minimised. Misc tab Own code for electronic invoices The code that customers must have for your organisation in their copy of mSupply. This code is added to invoices you export, and allows customers to import the invoice into their system automatically. This is an old copy: warn me at startup. You may want to save an old copy of your data (for example the data as it stood on the last day of the financial year). If you check this box, you will be warned at startup if the database is an old copy, to reduce the risk of accidentally entering current transactions into an old file rather than your current file. Use a lock file to warn if data already open If this box is checked, mSupply will maintain a record of when it is open outside of the database. This option only applies to the single-user version of mSupply. This means that if a second user attempts to open your data file while the data file is already in use, the user will be alerted, and no damage to the data will occur. Note that this option only applies to the single user version of mSupply. The client-server version of mSupply allows multiple users to access mSupply at the same time. What if your machine crashes? If, for example, you have a power failure and your computer shuts down suddenly, mSupply will not be able to delete the lock file, and you will get a message when you attempt to restart mSupply that another user is already using the data file. If you are sure this is not the case, use Windows Explorer or the Mac Finder to locate the folder that contains your mSupply data. Delete the file that has the same name as your data file but ends in “_locked.txt”. You will now be able to start mSupply. We recommend you do not turn this option on unless you understand the above paragraph or have a system administrator who authorises your use of this option. An example of where turning on this option might be useful is when you store your mSupply data on a file server, and allow multiple single-user copies of mSupply to access the same data file. In such a situation to have 2 users attempt to access the data simultaneously would be disastrous. Item codes must be unique When this box is checked, mSupply will ensure that each new item entered has a unique code. Include catalogue code for item search When checked, a report can be produced where the item's catalogue code is one of the search parameters. Currency formats This option specifies the format in which currencies will be displayed in mSupply. 
We provide two preset formats for currencies with 2 decimal places and currencies with none. If you want to enter a custom format you can change the field below the drop-down list. For example you may want to use a different separator than a comma. Note that if you type illogical values into the field the numbers may not display at all, or display erroneous data. Contact Sustianable Solutions if you need more information. Service items Service items are items that do not have any stock associated with them. For example, a fee for special handling of goods, or for reprinting an old invoice, or a consulting charge. These Preferences determine whether new items are allowed to be service items or not. Note that changing this preference will not affect existing items. Names - Customer code and charge code must match - If checked, when entering or editing a Customer, you will have to enter the same code for both the “code” and the “charge code” fields. (It is a good idea to leave this checked unless you have several customers that are invoiced separately, but whose invoices are collated onto a single statement at the end of the month) - Supplier code and charge code must match - If checked, when entering or edit a Supplier, you will have to enter the same code for both the “code” and the “charge code” fields. (It is a good idea to leave this checked unless you have several suppliers that are invoiced separately, but whose invoices are collated onto a single statement at the end of the month) - Name code for all codes to be unique (both 'store' and 'dispensary' mode) - If checked, when entering or editing a customer or supplier mSupply will not allow the creation of a second name with the same name code. Labels for custom item fields mSupply provides you with seven custom fields that you can use to record your own data for each item. - the first three fields hold text or numbers, - the fourth field is yes/no (or true/false) - the fifth field is a numeric field - the sixth field holds text or numbers - the seventh field is yes/no (or true/false) Here you can specify the label(or name) for each field. Note that field one is also displayed when you list items. Note also that when using the search editor to search for items (for example when producing a custom report), the fields are labeled “user field 1” etc.., and not with the labels you might have assigned. Label for field on cash payment or receipt The label of the Their ref field on Customer Receipts or Payments to suppliers can be changed to whatever you enter in this field. It is set to Cheque # by default. If you leave the field blank then the field label will be Their ref. Purchase order defaults tab For an explanation of the sections on Purchase order defaults go to Purchase Order Preferences Purchase order 2 tab For an explanation of the sections on Purchase order 2 go to Purchase Order Preferences Invoices 1 and 2 tab For an explanation of these sections see Invoices Preferences Item tab Default account code for new items - There are 3 spaces for you to select the accounts to use. Choose one account for expenses when buying, one account for income when selling and an asset account for stock. - The accounts you enter here will become the default accounts for newly created items. Item list (master & local) If you check this box, then the visibility of items in stores will be set to match the items on master list(s) selected to be used by that store. 
For details on how this works and how to set it up, see Controlling item visibility - the Master lists tab. This preference can drastically affect item visibility. For instance, if a store has no master lists assigned, all items will be made invisible in that store. Could lead to a scary moment! If some items that are currently visible in a store need to be made invisible when this preference is turned on but they can't (because they have stock, for example) then a temporary master list is created for each store containing the problem items. mSupply will inform you but you should use that list to deal with those items. Do not turn it on unless you know what you are doing! It affects ALL stores in the system. If the box is unchecked then the visibility of items in a store is not affected by master lists. Assigning item codes automatically If you want item codes to be assigned automatically, check this checkbox. Price tab On checking the box for “Use customer price categories”, mSupply will activate the price categories. Price Categories By default all customers are assigned a price category of “A” To assign a different category to a customer, choose Customers > Show customers… and find the customer entry. Then set the price category field to a different value Here in the Preferences, you specify what percentage change to the default price will be made for each category. For example, you might have a group of customers to whom you charge commercial prices, which are 20% above your normal price. Enter “20” in the “B” category field, then assign your private customers a price category of “B” Ignore price categories for items supplied by these suppliers Suppliers in this list will have no price category assigned to the items they supply. To add a supplier in this list, click the Add supplier button. A window will appear with a space to write the supplier's name. You can write the first character or two and press enter/return to bring up a list of suppliers that start with those letters. You can then select your chosen supplier from the list. To delete a supplier from the list, highlight it by clicking on it and then press the Delete Supplier button. Quotes tab Require entry of quote validity date Check this box if you want a validity date for a quote to be compulsory. Automatically turn off preferred status after validity date If you check this box then the preferred status will not appear when the validity date has passed. Reports tab Check this box if you want negative stock values to be ignored on stock history reports (negative values can be legitimate but concerning for some users so negative values are left as zeroes if this preference is turned on). Names tab Default field name/Custom field name table: In this list you can change the labels for name categories and custom fields - not the categories themselves (see Name categories for instructions on how to do that) but the labels for the categories. To change one of the labels click on it in the Custom name column to select it and click on it once again to begin editing it. Type the new name and then click outside the label to finish editing it. Now, wherever you would have seen that label in mSupply you will now see the name you have given it. The defaults are shown in the screenshot above. 
Example: If category 1 was to be called 'Ownership' and category 2 was to be called 'Classification' then you could change their labels like this: Now, wherever you would have seen the label 'Category 1' or 'Category 2' in mSupply, you will now see the label 'Ownership' or 'Classification' e.g. in report filters or, as in this screenshot, the General tab of the name details form: Labels for custom fields on name category: Here you can enter labels for the custom fields used when creating or editing name categories. The labels you enter here will appear on the New/Edit category window instead of the Category_user_field_1 and Category_user_field_2 labels. These labels are used in some reports. Tenders tab Tender letter section - Title: The title for the printed tender letter. If nothing is entered in here mSupply will use 'Invitation to tender'. - Tender reference: The reference for the tender so that, when you communicate with others you both know which tender you are referring to. If nothing is entered in here mSupply will use 'Tender reference'. User details for Tender module section We run a Remote tender module Click this to indicate that you operate an mSupply remote tender service where suppliers and you will log into a common web page to submit and download tender information. You will need to contact Sustainable Solutions for the account information required below before using this option. If this box is unticked you will need to enter bids manually. If it is ticked, you will see an extra Synchronise tab when you view the details of any tender. Client ID This is the ID that you use to login to the remote tender web page and is unique to you. This will be supplied to you by Sustainable Solutions. Password This is the password you use to login to the remote tender web page. Like the ID, this will be supplied to you by Sustainable Solutions. Tender Module Address The internet address of the remote tender web page in the format IP address:port number. Once again, this will be supplied to you by the indefatigable Sustainable Solutions. Reminders tab Reminders provide a simple to-do list built into mSupply. If the Show reminders on startup box is checked, any reminders that are not completed and whose due date has been reached will be displayed in a window when a user logs on. Patient Medication tab Drug Interactions tab When operating in dispensary mode, you can choose to have mSupply alert you to drug interactions. Dispensary mode is covered fully in this section The number of days of patient history …. field determines how far back from the current date mSupply should look for drugs that have a known interaction with the drug you have just entered. When drug interactions are activated, you will be shown a warning message when you enter a drug on a patient invoice that interacts with other medicines dispensed to that patient during the history period specified. Drug Registration tab If you have a license for the Registration module, you will need to check this tick-box, and click OK in order to activate it. You can specify the number of months before a drug registration expires. So when you register a drug, the expiry date of your registration will be calculated automatically by adding the number of months you have specified to that day`s date. For example, if you specify 24 months until a drug registration expires and you register a drug on 24/07/2016, the expiration date will be set automatically to 24/07/2018. 
If you leave the number of months at 0, you will have to set the drug registration expiry date manually. Printing tab For an explanation of this section please go to Printing Preferences OK and Print tab For an explanation of this section please go to OK and Print. Logo tab Here you can paste in a logo you have copied to the clipboard - you must copy the contents of a file to the clipboard, not the file itself. The file can be in .jpg, .png, .bmp, .gif, or .tiff format. This logo will be displayed at the bottom right of the navigator. Make sure the image you use is twice as wide as it is high. If it is not, mSupply will convert it to this ratio which will make it look squashed or stretched, sometimes with interesting results! If you have set a store logo (see here) for the store you are logged into this will be displayed on the Navigator and invoices instead of the one saved here. If you want the logo to be printed on invoices, check the checkbox. Please note that not all printing forms include the logo. If you would like customisation or assistance, Sustainable Solutions can quickly customize forms for you to meet your requirements. Dispensary mode tab Label Printing Message on labels The text you type here will be displayed on medicine labels on the last line. Custom identification code If you wish to identify the origin of the dispensed item, e.g. In-Patient Dispensary or Out- Patient Dispensary, create an identification code, and enter it in this field. It will then appear as the last item printed on the the right of the third line of each label. Printing options for units Three options are available from the drop down menu. You can choose to always print the units on labels, you can choose for each item whether or not to print the unit or you can choose to never print the units. Turn off cell padding in between direction and patient name mSupply will automatically pad the cell (leave some space in the cell) between Direction and Patient name. If you do not want this to happen then check this box. Print full prescriber name on label To have the prescriber's full name printed on labels, check the box in Print prescriber full name. Otherwise, the initials will be printed. Print organisation address 1 as separate line By checking the box Print organisation address 1 as separate line , you can print organisation address 1 as separate line . You have to set the text in Preferences: General »address 1 field. The text which you type here will be displayed on medicine labels on its own line. Print patient category on label If checked, when a prescription label is printed the patient's category will be printed alongside the patient's name in the following format; Patient name (category). Note that if the Print Patient code on label option is also checked then the patient's code and category will be printed alongside their name in the format Patient name (code/category). Print Expiry date of Item on label When checked, the item's expiry date will be printed on the label. Print Sales value of item on label When checked, the item's sales value will be printed on the label. Print Patient code on label If checked, when a prescription label is printed the patient's code will be printed alongside the patient's name in the following format; Patient name (code). Note that if the Print Patient category on label option is also checked then the patient's code and category will be printed alongside their name in the format Patient name (code/category). 
Print Batch on label When checked, the item's batch number will be printed on the label. Don't print placeholder lines If this box is checked, placeholder lines will not be printed. Print a receipt on labels by default mSupply allows you to print a patient receipt on a label; to enable this function by default, check the box Print a receipt on labels by default in Preferences:Dispensary mode as shown above. This has the effect of enabling the “Print receipt” check box in the Prescription entry window . Note that if the Print a receipt on labels by default is not checked, the function may still be turned on in the Prescription entry window. For more information on printing receipts, see Dispensary Mode Patients Default Patient category The text you enter into this field will be assigned to the category field for new patients. Default Prescriber name With the cursor in this field, enter the first letter or first few letters of the prescriber's name and press Tab. A window appears displaying prescribers who meet the criteria typed; note that both first and last names appear, and you should select the desired prescriber and click OK to make that prescriber the default one. Auto-generate patient code If this box is checked each new patient created will have a serial number assigned to them. The assigned code can be overriden by the user. Prefix codes with The text you enter in this field will be used as a prefix for automatically generated patient codes. e.g. if you enter “t” codes will be assigned “t1”, “t2” etc. Capitalise patient names Does what it says. The shift key will still override this option. Prescriber must be entered If this checkbox is checked, then the user will be warned if a prescriber has not been entered when they click the OK button for accept and print a prescription. Expand abbreviations in patient address fields In the patient address fields, any abbreviations will be written out in full instead of the abbreviation if you check this box. Share patient prescription over stores If this box is checked, viewing a patient history in one store will show transactions entered in other stores. Apply stock to placeholder lines This section gives you the ability to apply stock to placeholder lines on prescriptions. We know that in a fast moving dispensary it's sometimes hard to keep up with the stock coming into the dispensary so, sometimes you can get into the position where you physically have stock on the shelves to dispense to patients but you haven't been able to enter it into mSupply yet. mSupply allows you to keep dispensing by automatically adding placeholder lines to the prescriptions instead of normal stock lines. When you eventually enter the stock that you physically dispensed to the patients into mSupply, you need to allocate it to the prescriptions to keep your mSupply stock levels correct. This is the function to enable you to do that. Simply select the dates between which you want mSupply to look for placeholder lines on prescriptions by manually entering them in the From and To fields or selecting one of the preset selections in the drop down list. Then click on the Apply stock button. mSupply will then search for placeholder lines on prescriptions between the dates you chose. If it finds one it will attempt to replace the placeholder line with real stock from your store on a FEFO basis. If there is enough stock then mSupply will replace the placeholder with a real stock line. 
If there is only enough to replace part of the placeholder line then it will allocate what stock there is and will leave a reduced quantity placeholder line to represent the stock that still couldn't be allocated. HIS tab Log tab The significant events which mSupply automatically logs are listed here. Additional events may be logged by checking the appropriate check boxes on this tab. Please contact us if you need more logging than is provided here. It is a simple matter to incorporate into a future version - the trade-off is that it leaves the potential open to create very large log files, which may be a problem for some users. Backup tab Note - These Preferences are applicable only in single user mode; in client-server mode, the backup schedule is set on the server. Activation To activate , the Automatic Backup checkbox should be checked, then the appropriate radio button checked to have automatic backups performed according to your requirements. This function allows a backup of your datafile to be made. There are two types of backup available. - Local backups are made to another folder on your computer, or to a networked folder. - Internet backup allows for your data to be copied from a special backup folder to a secure internet site.- this applies to both single-user and multi-user systems. Local Backups You can perform a backup manually File> Backup as well as automatically. It is not necessary to quit the database before performing a backup. The settings in this window are to be used only for single-user mode. In client-server user mode you must set the backup Preferences on the server machine. Choose backup folder Click the “Choose” button to specify the destination of the backup files. For added security, we strongly recommend you backup to a different physical volume from the one where your mSupply data is stored. We recommend running mSupply server as a Windows service. This allows automatic log on, and control of the starting and stopping of the server from command line tools that can be run when a UPS is shutting down. Please contact us for more information. Compression rate Choose whether backups should be compressed or not from this drop-down list. Compact will give the smallest backup size, but it will take longer for backups to run. Fast is a compromise of speed and size. Internet backups If you are using an internet backup system such as Spideroak, Dropbox or Memopal, these options allow you to set up a folder that is watched by your backup software and backed up to the internet Verify data file after backup mSupply allows you to verify the data integrity of your main data file each time a backup is made. This is extra insurance against hardware failures and other factors that can lead to data corruption. Check this box to activate. Restore last backup if database is damaged Check the box to activate this preference. Use log file Email results of check… If you check this option Sustainable Solutions will receive by email a report of the data file verification each time it is run. Note that no confidential information is transmitted with the report. Integrate last log if database is incomplete If this checkbox is checked, then if your data becomes damaged, mSupply will automatically restore the data from a backup and use the log file to restore all transactions between the date of the backup and the current date. Backup 2 tab Here you can set the secondary backup location. 
This is useful for making copies of just some of your backups that you can then upload to a cloud-based backup service for automated off-site backups. Automatic backup to secondary folder: Checking this option results in mSupply trying to copy completed backups to another folder which you specify with the Choose button below. Fill in the Backup every …..th file… field with number of backups out of which one copy will be kept. That is, entering 7 will result in one backup per week being copied. You should now configure your online backup software to use the folder chosen as the secondary backup location as the source folder for backups. Note that if you have plenty of upstream bandwidth, you might simply wish to set your main mSupply backup folder as the source folder. The duplication and block-level comparsisons that services such as Spideroak use mean that it may be better for you to use uncompressed mSupply backups, as then only the changed portion of your data file will be backed up. Secondary backup options 1, 2, 3 mSupply now allows you to make up to 3 secondary backups to separate locations if you would like to. You can use 1, 2 or all 3 if you would like to. At least one is recommended for security purposes. You can choose to send orders, reports, invoices and notifications to recipients using e-mail in mSupply. However, before mSupply can send anything by e-mail you must tell mSupply about the server you wish to use and details of the e-mail described below: Provide me with a mail server to send emails If this is checked you will use mSupply's own internal email server to send emails. If it is unchecked you will use your own email server, the details of which you must enter in the Mail server name, Username, Password, Port and Use SSL fields. Mail server name The name of your mail (SMTP) server. eg “mail.mac.com”. You can only enter this if you are not using mSupply's inbuilt e-mail server (i.e. Provide me with a mail server to send emails is unchecked). User name The username mSupply will use to authenticate itself to the mail server. Only needed if you are not using mSupply's inbuilt e-mail server (i.e. Provide me with a mail server is unchecked). The password which goes with the username. (Only when not using mSupply's e-mail server) Use SSL If this is checked mSupply will use the Secure Socket Layer protocol to send e-mail. A more secure way of sending e-mails but only check this if your mail server can support SSL. Port The port on the mail server which mSupply will send e-mail to (must be the same port the mail server is listening on!). (Only when not using mSupply's e-mail server) Return email address Enter an email address you would like any response to come to. mSupply cannot send email unless it also has a return address for email. Signature The text you enter here will be added to the end of all e-mails you send. You might want to put your organisation contact details here. It is generally poor etiquette to make your signature too long. By default use a mono-spaced font to view text A mono-space font such as “Courier” or “Monaco” is better for viewing text in columns, such as is produced by the automatic order generation in mSupply . However, visually it has less 'eye appeal'. Default subject line when creating orders When mSupply automatically turns an order for a supplier into an email, the text entered here will be put in the subject line. For example you might want to put “Acme Hospital order” to advise the supplier of its contents. 
Note that you can edit the subject line of automatically generated emails before you send them. Sort order lines for email and HTML export by item name This option allows emails generated automatically to be sorted alphabetically by item name before the email is created. If this option is left unchecked, emails will use the creation order of the order lines to create the email. Server tab These settings are all for mSupply's built-in webserver, which is used for things such as the Dashboard, mSupply mobile, online catalogue and the Customer interface. Note: To use this feature an additional license is required. Please contact Sustainable Solutions for further information. Starting the web server The web server can be set to start automatically when mSupply starts by checking the box, or manually as and when you use the service by clicking the button. Run Webserver on the following port The default port is 8080. Disable the Customer order web module and display the following message: If for any reason you want to take the customer ordering module off line you can choose a message to display for your customers. Time out For clients connected via a web browser, this is the maximum period of inactivity, after which the client must log in again. Default store for web interface Choose which store you want to be the default for the web interface (only applicable if you have more than one store). Be careful when changing this option because it will change the default store that all the web interfaces will use to get information from mSupply, not just one of them. User can create XX uncompleted orders at a time This limits erroneous submission of too many orders. Choose the maximum number of orders that a customer can make. If they have reached the maximum amount, they will be able to create more when other ones are completed. Maximum allowable time a mobile interface request can take This sets how long mSupply will keep trying to supply report data before it gives up and displays an error message. Synchronise tab For an explanation of this section please go to Synchronisation. Customisation Options Should a client wish to have customised features which are specific to their version of mSupply, we are happy, whenever possible, to incorporate such features. If you are running such a customised version of mSupply, you will have received from us a Customisation code . To activate the customised features, that code must be entered in this field. Moneyworks tab If you use the superb Moneyworks accounting software, you can have mSupply directly input invoices into Moneyworks. Contact [email protected] for more information on these options. Visit for more information on Moneyworks, including a free demonstration version. Link to Moneyworks accounting software The Moneyworks application must be installed on your machine, and this option establishes a link between mSupply and Moneyworks. Don't turn it on if you don't know what you're doing, or if you haven't set up Moneyworks as described below. Three options are presented in the drop down menu: - Gold - select this option if the Moneyworks application you are using is the one installed on your computer. - Gold Client - select this option if you are connecting to the Moneyworks application installed on another machine on your network. - Data Centre Client - select this option if you are connecting to a Moneyworks Data Centre Export to Moneyworks when finalising individual invoices This allows for production of individual invoices, e.g. 
for non credit customers, who make cash purchases; if you have such customers, check this box , but if all your customers are credit customers, receiving monthly statements, it may be left unchecked. Location of the Moneyworks application mSupply needs to know the location on your computer of Moneyworks , and by clicking on the Choose button, a window appears, and you should navigate to the location of the Moneyworks .exe file Document Log on Enter your logon details to access the Moneyworks document that you want to access. Data Center Log on If you are using a partitioned data center, enter your logon details to gain access to the partition where your document resides. If you are not using a partitioned data center then leave this section blank. Location of the Moneyworks document (datafile) You need to identify the Moneyworks document (datafile) you are using If your link is to Gold, this is done by clicking on Choose , and navigating to the file's location. If access to the file is restricted, you need to complete your user name and password in the fields under Document Logon If your link is to the Client option, when you click on Choose , a window appears and you need to enter the name of the Moneyworks datafile to which you are connecting; the file's restrictions will require you to enter your user name and password in the fields under Data Centre Logon Type of customer invoice export The drop down menu allows you to choose from several options: - Single income account - Separate income accounts by item account - Choose for store I.P. address of machine using Moneyworks You only need to fill in the I.P. address here if: - You are on Macintosh and - You are connecting to a remote machine across a network. If you are on Windows or a Mac connecting to a local installation of Moneyworks, make sure this field is empty. Notes on setting up Moneyworks: - The import into Moneyworks relies on using an import map. The map for supplier invoices must be named “si_import.impo” and the map for customer invoices “ci_import.impo”. These map files must be stored in the “Import Maps” folder inside the “Moneyworks Customer Plug-Ins” which is next to your data file. - The advantage of using import maps is that it gives you flexibility in deciding which accounts will be designated for sales and purchases, and the way the fields exported from mSupply are used inside Moneyworks. - If you would like sample import maps from Sustainable Solutions, please email us at [email protected]. - Once you have set up the import maps and turned on the “Link to Moneyworks accounting software” checkbox, mSupply will attempt to send invoices to Moneyworks that are finalised using the “finalise customer invoices” and “finalise supplier invoices” commands. - Note that the “Export invoices when finalising” option must also be checked. (See Preferences> Invoices) - If you get an error when exporting, usually you will get a message telling you what the problem is. Things to check include - Is Moneyworks running …… it must be! - Make sure the correct I.P address is specified if connecting to a remote machine on Macintosh. - Make sure any charge codes used are actually present in the Moneyworks data file you are using. We can supply a version of mSupply that automatically adds names to the Moneyworks data file if they aren't found when exporting, but this costs extra! - Make sure that Moneyworks has open periods for the dates of the invoices that are about to be imported. 
- If you still have no success, turn off the Link to Moneyworks. .. checkbox, and produce a file. Then manually import the records into Moneyworks using the File > Import > Transactions command (making sure you load the correct import map using the “Load” button). The file has errors, Moneyworks will give a more complete error report. - Note that if export to Moneyworks is not successful, the transactions will not be finalised in mSupply, so you will not get invoices that are missed in Moneyworks. We are also able to provide a similar option to link with Quickbooks accounting software. Please contact [email protected] if this is of interest to you. FrontlineSMS tab If you use FrontlineSMS for sending information to mSupply via SMS messages (using mobile phones), this is the tab where you enter all the settings. For an explanation of this tab please go to FrontlineSMS preferences. eLMIS tab (New in mSupply 3.50) eLMIS is an LMIS tool used by some countries to collect and aggregate supply information from health facilities. If you use eLMIS, you can interface your mSupply server with it using these preferences. Use the eLMIS interface Check this box to enable the interface. Interface Folder Click on the Choose… button to select the main folder that eLMIS and mSupply will use to share data. Time between scans for new information from eLMIS Enter the number of hours mSupply will wait between checking the interface folder set above for new files to process. When the Use the eLMIS interface checkbox is checked it tells mSupply to check the Interface folder/Orders/Incoming folder every Time between scans for new information from eLMIS hours for new order files to process. Scan now button Click this to make mSupply check the Interface folder/Orders/Incoming folder for new files to process immediately. Useful if you have manually put a file in the Interface folder/Orders/Incoming folder and want mSupply to process it immediately instead of waiting for the Time between scans for new information from eLMIS interval set above to elapse. Send errors to this email address The email address mSupply will send any error information to. This would normally be the address of your eLMIS helpdesk. LDAP tab This section is where you define the details of the LDAP (Lightweight Directory Access Protocol) server you are using to provide user authentication for logging into mSupply. There is no need to fill in these fields if you are not using an LDAP server to check user logins: Server URL or IP: enter the URL or IP address for your LDAP server. Port no.: enter the port number your LDAP database is being served on. The Dashboard tab See the Setting up Dashboards chapter for a detailed description of the process Stock tab This section is where you tell mSupply which custom stock fields a user can fill in when receiving stock on a supplier invoice. There are 8 fields available. Fields 1 to 4 are free text but fields 5-8 contain values which are selected from a list you define: Each of the fields you tick the “Show” checkbox for will appear on the bottom of the supplier invoice line detail form when receiving stock. They will appear with the label you give them in the “Display name” column (click once in the column to make the cell editable then type the name). The values you enter in these fields (or select for the fields if it's one of fields 5-8) will be attached to the stock and follow it through the system. For full details see Custom stock fields. Previous: Statistics (HIS) Next: Purchase Order Preferences
http://docs.msupply.org.nz/preferences:general
2018-12-10T04:05:58
CC-MAIN-2018-51
1544376823303.28
[]
docs.msupply.org.nz
Spec DfsPreorderIterator Iterate vertices of a graph in depth-first preorder fashion. Template Parameters Member Function Overview Member Functions Inherited From Iter Interface Function Overview Interface Functions Inherited From IteratorAssociatedTypesConcept Interface Metafunction Overview Interface Metafunctions Inherited From Iter Interface Metafunctions Inherited From IteratorAssociatedTypesConcept Detailed Description Preorder means that a vertex is visited before its neighbours are. If not stated otherwise, concurrent invocation is not guaranteed to be thread-safe.
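To make the iteration order concrete, here is a minimal sketch of walking a small directed graph with this specialization. It is illustrative only and assumes the usual SeqAn graph-iterator interface (an Iterator<TGraph, DfsPreorder>::Type constructed from a graph and a start vertex, advanced with goNext and read with getValue); check the exact header name and signatures against the member overviews above.

#include <iostream>
#include <seqan/graph_types.h>   // assumed header; the graph headers can differ between SeqAn versions

using namespace seqan;

int main()
{
    // Small directed graph: 0 -> 1, 0 -> 2, 1 -> 3.
    typedef Graph<Directed<> > TGraph;
    typedef VertexDescriptor<TGraph>::Type TVertex;

    TGraph g;
    TVertex v0 = addVertex(g), v1 = addVertex(g), v2 = addVertex(g), v3 = addVertex(g);
    addEdge(g, v0, v1);
    addEdge(g, v0, v2);
    addEdge(g, v1, v3);

    // Depth-first preorder from v0: each vertex is printed before its
    // neighbours, e.g. 0 1 3 2 (the order among siblings depends on edge order).
    typedef Iterator<TGraph, DfsPreorder>::Type TDfsIterator;
    for (TDfsIterator it(g, v0); !atEnd(it); goNext(it))
        std::cout << getValue(it) << std::endl;

    return 0;
}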
http://docs.seqan.de/seqan/2.4.0/specialization_DfsPreorderIterator.html
2018-12-10T04:32:01
CC-MAIN-2018-51
1544376823303.28
[]
docs.seqan.de
Binary Ninja Intermediate Language Series, Part 1: Low Level IL¶ The Binary Ninja Intermediate Language (BNIL) is a semantic representation of the assembly language instructions for a native architecture in Binary Ninja. BNIL is actually a family of intermediate languages that work together to provide functionality at different abstraction layers. This developer guide is intended to cover some of the mechanics of the LLIL to distinguish it from the other ILs in the BNIL family. The Lifted IL is very similar to the LLIL and is primarily of interest for Architecture plugin authors. If you're writing an analysis plugin, you'll always want to be working at LLIL or higher. During each stage of the lifting process a number of transformations take place, and each layer of IL can have different instructions. Because of this, you can not rely on an instruction from one layer existing in another. Introduction by example¶ Since doing is the easiest way to learn lets start with a simple example binary and step through analyzing it using the python console. - Download chal1 and open it with Binary Ninja - Next, bring up the Low Level ILview by clicking in the options pane at the bottom of the screen (or alternatively, use the ikey) - Navigate to main ( g, then "main", or double-click it in the function list) - Finally, bring up the python console using: ~ Next, enter the following in the console: >>> for block in current_function.low_level_il: ... for instr in block: ... print instr.address, instr.instr_index, instr ... 4196422 0 push(rbp) 4196423 1 rbp = rsp {var_8} 4196426 2 rsp = rsp - 0x110 4196433 3 rax = rbp - 0xc0 {var_c8} ... This will print out all the LLIL instructions in the current function. How does this code work? First we use the global magic variable current_function which gives us the python object function.Function for whatever function is currently selected in the UI. The variable is only usable from the python console, and shouldn't be used for headless plugins. In a script you can either use the function that was passed in if you registered your plugin to handle functions, or you can compute the function based on a specific address, or maybe even just iterate over all the functions in a BinaryView ( for func in bv.functions:). Next we get the lowlevelil.LowLevelILFunction from the Function class: current_function.low_level_il. Iterating over the LowLevelILFunction class provides access to the lowlevelil.LowLevelILBasicBlock classes for this function. Inside the loop we can now iterate over the LowLevelILBasicBlock class which provides access to the individual lowlevelil.LowLevelILInstruction classes. Finally, we can print out the attributes of the instruction. We first print out address which is the address of the corresponding assembly language instruction. Next, we print the instr_index, this you can think of as the address of the IL instruction. Since translating assembly language is a many-to-many relationship we may see multiple IL instructions needed to represent a single assembly language instruction, and thus each IL instruction needs to have its own index separate from its address. Finally, we print out the instruction text. In python, iterating over a class is a distinct operation from subscripting. This separation is used in the LowLevelILFunction class. 
If you iterate over a LowLevelILFunction you get a list of LowLevelILBasicBlocks, however if you subscript a LowLevelILFunction you actually get the LowLevelILInstruction whose instr_index corresponds to the subscript: >>> list(current_function.low_level_il) [<block: x86_64@0x0-0x3f>, <block: x86_64@0x3f-0x45>, <block: x86_64@0x45-0x47>, <block: x86_64@0x47-0x53>, <block: x86_64@0x53-0x57>, <block: x86_64@0x57-0x5a>] >>> type(current_function.low_level_il[0]) <class 'binaryninja.lowlevelil.LowLevelILInstruction'> >>> current_function.low_level_il[0] <il: push(rbp)> Low Level IL Instructions¶ Now that we've established how to access LLIL Functions, Blocks, and Instructions, lets focus in on the instructions themselves. LLIL instructions are infinite length and structured as an expression tree. An expression tree means that instruction operands can be composed of operation. Thus we can have an IL instruction like this: eax = eax + ecx * 4 The tree for such an instruction would look like: = / \ eax + / \ eax * / \ ecx 4 There are quite a few reasons that we chose to use expression trees that we won't go into in detail here, but suffice it to say lifting to this form and reading this form are both much easier than other forms. Now lets get back to the examples. First let's pick an instruction to work with: >>> instr = current_function.low_level_il[2] >>> instr <il: rsp = rsp - 0x110> For the above instruction, we have a few operations we can perform: - address - returns the virtual address >>> hex(instr.address) '0x40084aL' - dest - returns the destination operand >>> instr.dest 'rsp' - function - returns the containing function >>> instr.function <binaryninja.lowlevelil.LowLevelILFunction object at 0x111c79810> - instr_index - returns the LLIL index >>> instr.instr_index 2 - operands - returns a list of all operands. >>> instr.operands ['rsp', <il: rsp - 0x110>] - operation - returns the enumeration value of the current operation >>> instr.operation <LowLevelILOperation.LLIL_SET_REG: 1> - src - returns the source operand >>> instr.src <il: rsp - 0x110> - dest - returns the destination operand >>> instr.dest 'rsp' - size - returns the size of the operation in bytes (in this case we have an 8 byte assigment) >>> instr.size 8L Now with some knowledge of the LowLevelIL class lets try to do something with it. Lets say our goal is to find all the times the register rdx is written to in the current function. This code is straight forward: >>> for block in current_function.low_level_il: ... for instr in block: ... if instr.operation == LowLevelILOperation.LLIL_SET_REG and instr.dest.name == 'rdx': ... print instr.address, instr.instr_index, instr ... 4196490 14 rdx = [rax].q 4196500 16 rdx = [rax + 8].q 4196511 18 rdx = [rax + 0x10].q 4196522 20 rdx = [rax + 0x18].q 4196533 22 rdx = [rax + 0x20].q 4196544 24 rdx = [rax + 0x28].q 4196798 77 rdx = [0x602090].q The Instructions¶ Going into gross detail on all the instructions is out of scope of the this article, but we'll go over the different instructions types and speak generally about how they are used. Registers, Constants & Flags¶ When parsing an instruction tree the terminals are registers, constants and flags. This provide the basis from which all instructions are built. LLIL_REG- A register, terminal LLIL_CONST- A constant integer value, terminal LLIL_SET_REG- Sets a register to the results of the of the IL operation in srcattribute. LLIL_SET_REG_SPLIT- Uses a pair of registers as one double sized register, setting both registers at once. 
LLIL_SET_FLAG- Sets the specified flag to the IL operation in srcattribute. Memory Load & Store¶ Reading and writing memory is accomplished through the following instructions. LLIL_LOAD- Load a value from memory. LLIL_STORE- Store a value to memory. LLIL_PUSH- Store value to stack adjusting stack pointer by sizeof(value) after the store. LLIL_POP- Load value from stack adjusting stack pointer by sizeof(value) after the store. Control Flow & Conditionals¶ Control flow transfering instructions and comparison instructions are straight forward enough, but one instruction that deserves more attention is the if instruction. To understand the if instruction we need to first understand the concept of labels. Labels function much like they do in C code. They can be put anywhere in the emitted IL and serve as a destination for the if and goto instructions. Labels are required because one assembly language instruction can translate to multiple IL instructions, and you need to be able to branch to any of the emitted IL instructions. Lets consider the following x86 instruction cmove (Conditional move if equal flag is set): test eax, eax cmove eax, ebx To translate this instruction to IL we have to first create true and false labels. Then we emit the if instruction, passing it the proper conditional and labels. Next we emit the true label, then we emit the set register instruction and a goto false label instruction. This results in the following output: 0 @ 00000002 if (eax == 0) then 1 else 3 1 @ 00000002 eax = ebx 2 @ 00000002 goto 3 As you can see from the above code labels are really just used internaly and aren't explicitly marked. In addition to if and goto, the jump_to IL instruction is the only other instruction that operates on labels. The rest of the IL control flow instructions operate on addresses rather than labels, much like actual assembly language instructions. Note that an architecture plugin author should not be emitting jump_to IL instructions as those are generated by the analysis automatically. LLIL_JUMP- Branch execution to the result of the IL operation. LLIL_JUMP_TO- Jump table construct, contains an expression and list of possible targets. LLIL_CALL- Branch execution to the result of the IL operation. LLIL_RET- Return execution to the caller. LLIL_NORET- Instruction emitted automatically after syscall or call instruction which cause the program to terminate. LLIL_IF- If provides conditional execution. If cond is true execution branches to the true label and false label otherwise. LLIL_GOTO- Goto is used to branch to an IL label, this is different than jump since jump can only jump to addresses. LLIL_FLAG_COND- Returns the flag condition expression for the specified flag condition. LLIL_CMP_E- equality LLIL_CMP_NE- not equal LLIL_CMP_SLT- signed less than LLIL_CMP_ULT- unsigned less than LLIL_CMP_SLE- signed less than or equal LLIL_CMP_ULE- unsigned less than or equal LLIL_CMP_SGE- signed greater than or equal LLIL_CMP_UGE- unsigned greater than or equal LLIL_CMP_SGT- signed greater than LLIL_CMP_UGT- unsigned greater than The Arithmetic & Logical Instructions¶ LLIL implements the most common arithmetic as well as a host of more complicated instruction which make translating from assembly much easier. Most arithmetic and logical instruction contain left and right attributes which can themselves be other IL instructions. 
The double precision instruction multiply, divide, modulus instructions are particularly helpful for instruction sets like x86 whose output/input can be double the size of the input/output. LLIL_ADD- Add LLIL_ADC- Add with carry LLIL_SUB- Subtract LLIL_SBB- Subtract with borrow LLIL_AND- Bitwise and LLIL_OR- Bitwise or LLIL_XOR- Exclusive or LLIL_LSL- Logical shift left LLIL_LSR- Logical shift right LLIL_ASR- Arithmetic shift right LLIL_ROL- Rotate left LLIL_RLC- Rotate left with carry LLIL_ROR- Rotate right LLIL_RRC- Rotate right with carry LLIL_MUL- Multiply single precision LLIL_MULU_DP- Unsigned multiply double precision LLIL_MULS_DP- Signed multiply double precision LLIL_DIVU- Unsigned divide single precision LLIL_DIVU_DP- Unsigned divide double precision LLIL_DIVS- Signed divide single precision LLIL_DIVS_DP- Signed divide double precision LLIL_MODU- Unsigned modulus single precision LLIL_MODU_DP- Unsigned modulus double precision LLIL_MODS- Signed modulus single precision LLIL_MODS_DP- Signed modulus double precision LLIL_NEG- Sign negation LLIL_NOT- Bitwise complement Special instructions¶ The rest of the instructions are pretty much self explanitory to anyone with familiarity with assembly languages. LLIL_NOP- No operation LLIL_SX- Sign extend LLIL_ZX- Zero extend LLIL_SYSCALL- System call instruction LLIL_BP- Breakpoint instruction LLIL_TRAP- Trap instruction LLIL_UNDEF- Undefined instruction LLIL_UNIMPL- Unimplemented instruction LLIL_UNIMPL_MEM- Unimplemented memory access instruction
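With the full instruction list in hand, a small script can profile a function in one pass. The sketch below uses only the pieces introduced earlier in this guide (iterating current_function.low_level_il and reading each instruction's operation and address), so it behaves the same way as the earlier console examples; run it from the python console with a function selected.

from collections import Counter

# Tally how often each LLIL operation appears in the current function
# and collect the call instructions along with their addresses.
op_counts = Counter()
calls = []

for block in current_function.low_level_il:
    for instr in block:
        op_counts[instr.operation] += 1
        if instr.operation == LowLevelILOperation.LLIL_CALL:
            calls.append(instr)

# The five most common operations in this function.
for op, count in op_counts.most_common(5):
    print op, count

# Every call site, printed with its virtual address.
for instr in calls:
    print hex(instr.address), instr

The same loop shape works for any of the operations listed above, for example swapping LLIL_CALL for LLIL_SYSCALL to locate system calls instead.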
https://docs.binary.ninja/dev/bnil-llil.html
2018-12-10T03:59:46
CC-MAIN-2018-51
1544376823303.28
[array(['../img/BNIL.png', 'BNIL-LLIL Selected'], dtype=object) array(['../img/llil_option.png', 'Low Level IL Option >'], dtype=object)]
docs.binary.ninja
Xerox® DocuShare® 7.0 Release Notes What's New in this Release Installation Notes For information on how to configure ConnectKey for DocuShare, refer to the Xerox ConnectKey for DocuShare Setup Guide located on the Help Desk. Web Server Notes Upgrade Notes Upgrades and DocuShare on 32-bit operating systems Relaunch the browser and clear the cache after upgrading After upgrading to DocuShare 7 from an earlier version of DocuShare, close and re-open the browser as well as clear the cache to have the new UI display correctly. Content Map and Show All button MIME Type assignment method Themes Xerox Process Automation for DocuShare If your site uses Xerox DocuShare eForms, you must first upgrade to Xerox Process Automation for DocuShare 7.6 before upgrading to DocuShare 7. Contact Xerox Content Management Professional Services or your DocuShare sales representative to have the Xerox Process Automation 7.6 upgrade performed. System Requirements Notes Windows Internet Explorer in Compatibility View Windows Platforms Upgrades and URLs and virtual root path names Additional Notes DocuShare add-ons and features not supported in the release Xerox DocuShare Email Agent Content Store Statistics Multiple DocuShare instances on the same server fixupMimeTypes.sh on Solaris and Linux platforms Known Issues Object Reporting configuration and custom properties: If you select a custom property to use as a column heading in a report and later delete the custom property, the heading still displays in object reports. To remove the custom property as a column heading, go to the Object Reporting Configuration page in the Admin UI, click in the Selected Properties area and click Apply. Document titles containing double-byte characters are corrupted after decompressing a zip file: If you use the zip and download feature and then decompress the zip file on Windows 7, document titles containing double-byte characters are corrupted. This is a known issue with the Windows zip program. See the Microsoft article for more information. Editing existing weblog entries after upgrading to DocuShare 7: When a weblog entry contains text copied from Microsoft Word, you may not be able to edit the entry. If that occurs, try editing the weblog entry using Firefox or copy and paste the text to a new weblog entry. Fixes in the Release Automated Email Agent and WorkFlow replies are sent in the user’s specified email format. Documentation Additional Information and Support information regarding DocuShare can be found on the DocuShare web site.
https://docs.mackenziecounty.com/docushare/en/help/release.htm
2018-12-10T04:45:43
CC-MAIN-2018-51
1544376823303.28
[]
docs.mackenziecounty.com
User Interface Guidelines¶ This is a resource for developers, product managers, and designers, providing a unified language for building and customizing the eZ Platform admin user interface (aka Admin UI). We use it to simplify how we build, and to offer a consistent interface across our content management platform. This section provides information about the specific User Interface components and resources that eZ Platform uses: - If you're interested in how to add a specific component, check them out under Components. - For design aspects, you'll find the three main resources we use under Resources: Typography, Colors and Icons. - Last but not least, take a look at our Design Principles if you want to know which Accessibility standards we apply.
https://ez-systems-developer-documentation.readthedocs-hosted.com/en/latest/guidelines/Introduction/
2018-12-10T05:21:45
CC-MAIN-2018-51
1544376823303.28
[]
ez-systems-developer-documentation.readthedocs-hosted.com
Time Source Peer Applies To: Windows Server 2008 A time source peer is a server from which time samples are acquired. The time source used varies, depending on whether the computer is joined to a domain in Active Directory Domain Services (AD DS) (domain hierarchy peers) or to a workgroup (manually configured peers). Aspects The following is a list of all aspects that are part of this managed entity:
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc756454%28v%3Dws.10%29
2018-12-10T04:33:37
CC-MAIN-2018-51
1544376823303.28
[]
docs.microsoft.com
Building (manufacturing) items mSupply provides for a mechanism to manufacture (build) a new item from two or more existing items. A build is a way of recording items you have manufactured. That is, raw materials that are in your stock are used (taken out of stock), and a new stock item is created. Show builds... From the Items page of the Navigator, click on the Show Builds button: You will be shown the standard find window to enter either the number of recent builds to display, or a particular build number. You will then be shown a list as shown below: From this list you should select the required build by double-clicking on it. New build... From the Items page of the Navigator, click on the New Builds button: On choosing this menu item you are shown the build entry window: If you have restricted access to builds, you will not be able to see cost prices or the profit summary window at the bottom. The build window has two parts: - The top part of the window records the details of the item to be built - The lower part lists ingredients that are used in the manufacture of the product. click the Item to build or to edit icon and in the next window enter the name of the product you're manufacturing (Syrup in our example) and this window is displayed where you should complete the appropriate fields: Adding ingredients manually Note that ingredients can be added automatically from the Bill of materials tab. If you regularly build the same item, we recommend that you enter a Bill of Materials for the item being built, and use the method outlined under the Using a Bill of Materials heading below. First, if you are entering a projected build (one that you expect to perform in the future), check the This is a projected build check box. If checked, all items added will be placeholder lines rather than actual stock. Doing this allows you to enter your manufacturing schedule in advance of ordering raw materials. The schedule will be taken into account when ordering to ensure that you will have enough materials in stock when the time comes to manufacture. - To add a new ingredient, click the New ingredient button. You will be shown the standard window for issuing goods from stock: - Enter ingredients just as you would for entering a customer invoice. - Once you have finished entering ingredient lines, click OK to return to the main window. - If you wish to edit a line, double-click it, and change the details. - To delete a line, double-click it, set it's quantity to zero, then click the OK button. Adding the item to be built. - To add the item to build, click Item to build or edit button. You will be shown the Add/edit supplier invoice line window below for receiving goods. - The cost price for the item is automatically calculated for you. You may enter the margin or the selling price as you prefer. - Once you have entered the item to build, click OK to return to the main window. - If you wish to edit the item, simply double-click inside the “item to build” rectangle. - At the bottom right of the window in Summary section is a summary of the cost, margin and selling prices for the build. - Once you are satisfied with the details, click OK to enter the build into the system. You will be asked if you want to enter the details into stock. If you say yes , the newly created item will immediately be available for issuing to customers. If you click later then the stock will not be available until you open the build window at a later date and enter it into stock. 
- Note that the ingredients used in a build are considered to have been “sold” for re-ordering purposes, and will be counted in your usage. Finishing build entry - Understanding build status codes enables you to know what stage each build is at. The codes are the same as for other transactions. Each build transaction has a status code: - When you click the OK button you may be asked if you want to enter the build into stock. You should only do so once the manufacturing and Quality Assurance (QA) process is complete. You will not be asked this question if there are any placeholder lines (those with a batch of “none”) entered as an ingredient. Such builds are presumed to be for projected manufacturing, and are kept with status sg automatically. - To finalize builds, choose File > Finalize builds when the splash screen is showing. Converting projected builds into an actual build. - Once your manufacturing of a projected build is about to take place, choose Item > Show builds … to locate the build you want to edit. - For each line whose batch is equal to “none” (a placeholder line) you will have to double-click it and choose an actual stock line from the item issue window (either by entering the line number or double-clicking the line you wish to use). Once you have done this, the stock you have chosen will be reserved, and manufacturing can take place. - mSupply® calculates the number of items that will result from your build, and clicking on the Print labels icon prints the correct number of labels. Using a Bill of Materials A Bill of materials can be thought of as a “recipe” or “formula” for building an item. It records the ingredients, and the quantity of each required to make the finished product. You should create a bill of materials for an item before you come to this screen. This is done in an item's Item details window: see here for details. Screenshots in this section are using Simple Syrup as an example, and for this product a Bill of Materials has already been created. When you click the “Bill of materials” tab in the build window, this window appears: First you need to choose the item to be manufactured: in the next window you need to specify the quantity to be manufactured and other details: When you click OK, you are returned to the New Build window, and when you click the Add Bill of Materials Button, a window appears where you can confirm or cancel the quantity to be manufactured: Assuming the quantity is correct, click OK, and you are returned to the New Build window, where the open tab is the Bill of materials tab. Click the Add Bill of Materials button, and the details on the Bill of Materials according to the formula previously entered for Simple Syrup is displayed: Now click the Ingredients tab, where the ingredients are listed, but no stock is attached to any item - they are placeholder items (displayed in red). This is done as mSupply® cannot take into account all the factors that go into choosing an appropriate batch to use for each manufacturing run (The expiry, amount on hand, etc). click each line in turn to select the quantity and batch number of available stock lines for each ingredient. Note that there is a button displayed Re-distribute all. Clicking this button will take the “total quantity issued” figure and re-distribute it over the available batches, making it easy to move from using a placeholder line to issuing actual stock. The ingredient is repeated in black with appropriate details displayed. 
At this time (or later) you can also adjust the amount issued to reflect actual issued quantities and the actual batches of raw materials used, as opposed to the theoretical quantities that are initially entered. If you are manufacturing the product immediately, the status of the build transaction should be changed to Confirmed on completion of the manufacturing process. Print options: It's possible to print either a Pick list, detailing the ingredients and quantities, or a summary of the manufactured product. To achieve this, check the print icon in the bottom right hand corner of the window and click the OK button. The printing options window will appear and you can choose which document to print: Calculate Yields button: This button (on the Bill of materials tab) compares the actual quantities issued and the actual final quantity manufactured with the theoretical amounts that should have been used and made. This allows you to monitor production efficiency. Use the Print yield report button to print the yield information if required. Previous: Locations and Location types Next: Merging items
http://docs.msupply.org.nz/items:manufactured_items
2018-12-10T04:29:26
CC-MAIN-2018-51
1544376823303.28
[array(['/_media/items:build_print_options.png?w=450&tok=01733c', None], dtype=object) ]
docs.msupply.org.nz
From within the Devices overview screen you can add, edit, lock or remove a device. The Devices overview screen To edit or remove a device you have to use the action buttons located on the far right of each row in the table list in the Devices overview screen, as shown below. For your convenience, tooltips appear when hovering over the buttons with your mouse cursor. To add a New Device, click on the New Device button at the top right of the screen. The New Device page To create the device, the following fields have to be input: To add a New Virtual Device, tick the “Virtual Device” check-box in the “New Device” form. The New Virtual Device form To create a virtual device, the following information has to be provided: Virtual device IP and location are inherited from parent physical device attributes.
https://docs.moreal.co/add-edit-remove-a-device/
2018-12-10T04:19:04
CC-MAIN-2018-51
1544376823303.28
[]
docs.moreal.co
Making the scheduling add-in available to all users The VMR Scheduling for Exchange feature allows you to create an add-in that enables Microsoft Outlook desktop and Web App users in Office 365 or Exchange environments to schedule meetings using Pexip VMRs as a meeting resource. This topic explains how to make the Pexip VMR Scheduling for Exchange add-in available to Outlook users from their desktop and Web App clients. This involves uploading an XML manifest file to your Microsoft Exchange deployment. These instructions explain how to make the same add-in available to all users in your Exchange deployment. For information on how to make a specific add-in available to a particular group of users, see Restricting the scheduling add-in to specific users. Before you start you must have completed the following steps: Downloading the add-in XML file The add-in XML manifest file contains all the add-in configuration and is generated by the Pexip Infinity Management Node based on the information you provided when Configuring a Pexip Exchange Integration. To download the file: - From the Management Node, go to . - Select the Pexip Exchange Integration you have configured. - From the bottom of the page, select. Upload the add-in XML file to Microsoft Exchange You must now upload the add-in XML manifest file to your Microsoft Exchange deployment: - Log in to the Exchange Admin Center (EAC) and select organization > add-ins. - Select the add icon (+) and then Add from file. - Browse to the manifest XML file and then select. The Pexip Scheduling Service add-in will appear in the list. - Double click the Pexip add-in to edit it. - Select Make this add-in available to users in your organization. - Select either Optionally, enabled by default or Mandatory, always enabled. Now, when users access Outlook, the Pexip VMR Scheduling for Exchange add-in will be available for them to use to schedule meetings in Pexip Infinity VMRs. The add-in will be available to all Outlook users in your deployment unless you choose to restrict it to certain users. Testing the integration You can test that the add-in is working as expected by logging in to an Outlook client and creating a test meeting, and then joining that meeting using the links that were generated. Note that you should ensure that the test meeting is scheduled to start within the buffer time, otherwise it won't be available to join immediately. Troubleshooting If you are having issues installing the add-in, see Troubleshooting VMR Scheduling for Exchange.
https://docs.pexip.com/admin/scheduling_addin.htm
2018-12-10T04:38:04
CC-MAIN-2018-51
1544376823303.28
[array(['../Resources/Images/admin_guide/scheduling_edit_addin.png', None], dtype=object) array(['../Resources/Images/admin_guide/scheduling_addin_installed_840x416.png', None], dtype=object) ]
docs.pexip.com
Contains classes that are specific to the ASP.NET Dashboards Module. The module is contained in the DevExpress.ExpressApp.Dashboards.Web.v18.2.dll assembly. A ViewController that provides the ShowDashboardInSeparateTab, ExportDashboard and SetDashboardParameters Actions. The ViewController that opens a dashboard in a separate browser tab when a user holds the CONTROL key and clicks a row in the Dashboards List View. A View Item that displays the ASPxDashboard control. Contains values specifying whether the dashboard designer is displayed in a popup window within the current browser tab, or in a separate tab.
https://docs.devexpress.com/eXpressAppFramework/DevExpress.ExpressApp.Dashboards.Web?v=18.2
2020-01-17T18:29:23
CC-MAIN-2020-05
1579250590107.3
[]
docs.devexpress.com
2014 Appendix 2 – Equity groups Higher education equity groups tables for the 2014 full year. Equity groups include students that: · are from non-English speaking backgrounds (NESB); · have a disability; · are women in non-traditional areas; · identify as indigenous; · are from low SES (socioeconomic status) locations based on postcode of permanent home residence; and · are from regional and remote locations based on postcode of permanent home residence.
https://docs.education.gov.au/node/38145
2020-01-17T18:36:47
CC-MAIN-2020-05
1579250590107.3
[]
docs.education.gov.au
UDP Delivers Take Total Control Of Your Networking With .NET And UDP Yaniv Pessach Code download available at:UDP.exe(136 KB) Contents TCP Versus UDP A Custom UDP Solution Improving the Code Reliability Protocols Sample Protocol Implementation Broadcast and Multicast Security Windows Communication Foundation Conclusion You've probably made use of Transmission Control Protocol (TCP) in your applications at some point, whether directly or indirectly—after all, HTTP runs on top of TCP, so every Web application is built atop it. But maybe you've wondered whether its brother User Datagram Protocol (UDP) might be a better fit for your intended solution. Most Internet traffic utilizes TCP and UDP, running on top of Internet Protocol (IP), the low-level protocol used by all traffic on the Internet. While TCP is the more familiar of the two, accounting for as much as 75 percent of Internet traffic, UDP holds second place with approximately 20 percent of sent packets. All other low-level protocols combined, including raw Internet Control Message Protocol (ICMP), account for less than 5 percent of Internet traffic. UDP is used for such important and common tasks as DNS resolution, Simple Network Management Protocol (SNMP) network status, Windows® Internet Naming Service (WINS) NetBIOS name resolution, Trivial FTP (TFTP), Kerberos security, server discovery, digital media streaming, voice over IP (VoIP) using the International Telecommunications Union (ITU) H.323 protocol, and online gaming. Trying to design an efficient, quick, and responsive network application is hard work, so using the right tools can help a lot. Sometimes, choosing UDP as the low-level network protocol can give you the necessary flexibility to use fewer resources, support more clients, reduce latency, increase throughput, or implement services that are not otherwise practical over TCP. These benefits aren't free, however. Writing UDP code is relatively simple, but using UDP in a useful, safe, and efficient manner requires that you implement the application protocol yourself. In this article, I will discuss the pros and cons of using UDP in your own applications. I'll take a look at the design considerations for UDP-based applications, including the details and pitfalls of implementing those applications in C#. I will also offer a preview of exposing UDP Web services through Windows Communication Foundation. TCP Versus UDP Since their inception, the TCP and UDP protocols have taken very different evolutionary paths. TCP is based on connections, maintains a session, and guarantees a degree of reliability and standardized flow control. UDP provides no such features and relies upon the application layer to provide such services. UDP allows you to send a packet, with or without a checksum, and redirects the packet (multiplexes it) to a listening application based on the port number the packet was sent to. A single packet can be sent to multiple machines at once by using multicast and broadcast transmission. With UDP, no connection is maintained—each packet sent is treated independently. UDP makes no attempt to control transmission speeds based on congestion. If a packet is lost, the application must detect and remedy the situation. If a packet arrives out of order, you're on your own again. UDP also does not provide the security features that you can expect to find with TCP. This includes the three-way handshake that TCP uses to ensure the validity of the claimed source address. So what can UDP do for your application that TCP cannot? 
To start, UDP supports multicast—sending a single packet to multiple machines. Multicast can be used to send a message even when you do not know the IP address or DNS name of the target machines. This is used, for example, to discover the servers within a local network. Using multicast can also save bandwidth. If you want to play a presentation to 20 local machines, by using multicast the server would only need to send each packet once. UDP multicast is used by UPnP and the Windows XP My Network Places feature. UDP is also useful when the overhead of TCP is not acceptable, especially when only one request/response is needed. Establishing a TCP connection requires exchanging at least three packets (see Figure 1). If it takes 150ms for a packet to go one way from Seattle to Sydney, and if the client (the sender) makes only one request at a time from the server (the receiver), then connection establishment and a response would take at least 600ms. On the other hand, if you sent the request over UDP, it could shave 300ms off this wait. By using UDP, you also spare the server the resources it needs to manage a TCP connection, thus enabling the server to process more requests. Figure 1** UDP and TCP Request/Response Models ** UDP can help if your application can use a different packet-loss recovery mechanism. Since TCP guarantees that data will be processed by the server in the order it was sent, packet loss on a TCP connection prevents the processing of any later data until the lost packet is received successfully. For some applications this behavior is not acceptable, but others can proceed without the missing packet. For example, the loss of one packet in a broadcast video should not cause a delay because the application should just play the next frame. UDP can also help if you want to control or fine-tune communication parameters such as the outgoing buffer size, minimize network traffic, or use different congestion-avoidance logic than TCP provides. Specifically, TCP assumes that packet loss indicates congestion. Therefore, when packet-loss is detected, TCP slows down the rate of outgoing information on a connection. In that respect, TCP uses a "considerate" algorithm to optimize throughput. However, this can result in slow transmission rates on networks with high packet loss. In some cases, you may want to deliberately choose a less-considerate approach to achieve higher throughput. On the other hand, TCP has distinct advantages in simplicity of implementation. Many security threats are resolved in the TCP stack so you don't have to worry about them. Also, since UDP does not have a connection, firewalls cannot easily identify and manage UDP traffic. System administrators would rather allow an outgoing TCP connection than open their firewalls. A Custom UDP Solution To illustrate the issues surrounding a real-world UDP implementation, I'll start by building simple UDP client and server apps modeled on an online gaming scenario. The application will send a player's information (ID, X coordinate, and Y coordinate) from the client to a server: public class PlayerInfo { public byte playerID, locationX, locationY; } Before sending any information on the wire, I first have to decide on a wire format and how to serialize and deserialize the message—that is, how to translate PlayerInfo into an array of bytes and then back. In this article, I use a couple of simple methods to do this (see Figure 2). 
Figure 2 Serializing and Deserializing PlayerInfo public int ToBuffer(byte[] buffer, int pos) { int newPos = pos; buffer[newPos++] = playerID; buffer[newPos++] = locationX; buffer[newPos++] = locationY; return newPos - pos; } public void FromBuffer(byte[] buffer, int pos) { playerID = buffer[pos++]; locationX = buffer[pos++]; locationY = buffer[pos++]; } The client code should initialize a Socket, serialize the data from PlayerInfo, and send the data. The data will be sent to the IP address 127.0.0.1, which is always the local computer: const int ProtocolPort = 3001; Socket sendSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); IPAddress sendTo = IPAddress.Parse("127.0.0.1"); EndPoint sendEndPoint = new IPEndPoint(sendTo, ProtocolPort); byte[] buffer = new byte[PlayerInfo.MaxWireSize]; int bufferUsed = player.ToBuffer(buffer, 0); sendSocket.SendTo(buffer, bufferUsed, SocketFlags.None, sendEndPoint); Production code would also add framing—in the form of headers—to the actual data. In this example, I chose to send each PlayerInfo in a separate UDP packet. The Socket.SendTo method sends the number of bytes specified by bufferUsed to the address in sendEndPoint. Since UDP does not implement a flow control protocol and there isn't a guaranteed response for a UDP packet, SendTo does not generally need to block. This is unlike sending with TCP, which would block until the point when too much data has been sent but not yet acknowledged by the receiver. It is safe to assume that no long delays will happen when using SendTo, which is why I did not use the asynchronous version Socket.BeginSendTo. If you have the machine name rather than an IP address, you can modify the code to resolve the machine name to an IP address: IPAddress FirstDnsEntry(string hostName) { IPHostEntry IPHost = Dns.Resolve(hostName); IPAddress[] addr = IPHost.AddressList; if (addr.Length == 0) throw new Exception("No IP addresses"); return addr[0]; } Note that the DNS protocol, used in Dns.Resolve, may use either UDP or TCP. When I call FirstDnsEntry with a remote machine name, I can see a UDP packet sent from my machine to port 53. Traffic analyzers, such as Netmon.exe, can help to debug your applications by showing exactly which packets are being sent and received on your machine. To implement the server side of this communication, I use the asynchronous Socket.BeginReceiveFrom method, which will call a delegate when a UDP packet is received (see the code in Figure 3). You have to call again if you want to receive additional packets, so a good practice to follow is to always call BeginReceiveFrom from the BeginReceiveFrom delegate. 
Figure 3 Receiving a Message Socket receiveSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint bindEndPoint = new IPEndPoint(IPAddress.Any, ProtocolPort); byte[] recBuffer = new byte[PlayerInfo.MaxWireSize]; receiveSocket.Bind(bindEndPoint); receiveSocket.BeginReceiveFrom(recBuffer, 0, recBuffer.Length, SocketFlags.None, ref bindEndPoint, new AsyncCallback(MessageReceivedCallback), (object)this); void MessageReceivedCallback(IAsyncResult result) { EndPoint remoteEndPoint = new IPEndPoint(0, 0); try { int bytesRead = receiveSocket.EndReceiveFrom(result, ref remoteEndPoint); player.FromBuffer(recBuffer, 0, Math.Min(recBuffer.Length, bytesRead)); Console.WriteLine("ID:{0} X:{1} Y:{2}", player.playerID, player.locationX, player.locationY); } catch (SocketException e) { Console.WriteLine("Error: {0} {1}", e.ErrorCode, e.Message); } receiveSocket.BeginReceiveFrom(recBuffer, 0, recBuffer.Length, SocketFlags.None, ref bindEndPoint, new AsyncCallback(MessageReceivedCallback), (object)this); } Improving the Code While responding to a UDP packet is normally up to the receiving application, a special case exists when there is no application listening on the port. The UDP stack then will send (in most configurations, and unless a firewall is configured to prevent it) an ICMP port unreachable response. This will be visible to the sending application if it is listening on the socket as a SocketException with an ErrorCode value of 10054 (connection reset by peer). Adding this code after socket initialization would catch the exception: EndPoint bindEndPoint = new IPEndPoint(IPAddress.Any, 0); sendSocket.Bind(bindEndPoint); byte[] recBuffer = new byte[PlayerInfo.MaxWireSize]; sendSocket.BeginReceiveFrom(recBuffer, 0, recBuffer.Length, SocketFlags.None, ref reponseEndPoint, new AsyncCallback(CheckForFailuresCallback), (object)this); I react to this exception in the callback: void CheckForFailuresCallback(IAsyncResult result) { EndPoint remoteEndPoint = new IPEndPoint(0, 0); try { int bytesRead = sendSocket.EndReceiveFrom(result, ref remoteEndPoint); } catch (SocketException e) { if (e.ErrorCode == 10054) serviceMissing = true; } } Incidentally, the behavior I just relied on—detecting whether any application is listening on a UDP port—can let people identify which UDP ports a machine is listening on, a process known as UDP port scanning. The same can be done for TCP, and the recommended cure is firewall configuration. UDP is not a session-based protocol, and responses to a message, if sent at all, are treated as independent of the original message. The sender of a response must specify both the IP address and the port the response is sent to, and must use the SendTo method. There are several ways for a sender and a receiver to agree on a return address and port. The sender IP address (as claimed on the packet) can be retrieved from the received message, along with the source port. Sending a message to the source port implies that the same socket is used by the original sender to both send the original request and listen to responses. 
An alternative approach is to specify the response port in the request: bytesRead = socket.EndReceiveFrom(result, ref remoteEndPoint); EndPoint endPointDestination = new IPEndPoint(((IPEndPoint)remoteEndPoint).Address, ((IPEndPoint)remoteEndPoint).Port); socket2.SendTo(buffer, length, SocketFlags.None, endPointDestination); The port a server is listening on can either be fixed (a well-known value used by a protocol and hardcoded in all clients and servers) or determined at run time based on which ports are in use. Since, in general, only one socket and one application can listen on a specific port number at a time, determining the port at run time allows multiple instances of a receiver to run on a machine. Run-time port selection is also useful when a server should send a message or messages back to clients. To find a free port number, the receiver should bind to port 0. The actual port it is listening on can be found using the LocalEndPoint property, as shown here: socket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint endPoint = new IPEndPoint(IPAddress.Any, 0); socket.Bind(endPoint); listeningPort = ((IPEndPoint)(socket.LocalEndPoint)).Port; Reliability Protocols While the techniques I've demonstrated so far work most of the time, they don't help address the reliability issues of UDP. Packets can be lost in transit, and no corrective action will be taken. UDP will drop packets on the send side before sending them if sending happens fast enough and the internal buffers are exhausted. UDP will also drop packets upon receipt if, even momentarily, BeginReceiveFrom is not active on the socket. The last problem is the one most easily solved. In my sample receiver code, there's a short span of time between acceptance of a packet and calling another BeginReceiveFrom. Even if this call were the first one in MessageReceivedCallback, there's a still short period when the app isn't listening. One improvement would be activating several instances of BeginReceiveFrom, each with a separate buffer to hold received packets. Also consider the impact of the UDP packet sizes. While UDP packets can be up to 64 kilobytes (KB), the underlying IP transport can only transport packets up to the Maximum Transmission Unit (MTU) of the link. When a packet bigger than an MTU comes along, IP fragmentation occurs—the IP protocol divides the sent packet into fragments smaller than MTU, and reassembles them at the destination, but only if all fragments were received. Not relying on IP fragmentation, but instead limiting your packets to MTU, increases the reliability of the transmission and reduces packet loss. Ethernet MTU is usually 1500 bytes, but some connections (such as dial-up and some tunneling protocols) have an MTU of 1280 bytes. Practically speaking, it is hard to discover the minimum MTU of a link, so choosing a packet size of less than 1280 (or 1500) will increase reliability. Another concern is that sending UDP packets too fast will result in some packets being silently discarded. Earlier I pointed out that SendTo calls do not block. This can surprise developers who are accustomed to the TCP behavior of blocking the Send call until the data can be sent. What constitutes too fast? A few packets a second are not an issue; hundreds or thousands may be an issue, depending on the computer and network. All those factors show that even on a local network with perfect connectivity, you cannot rely on all sent UDP packets being received. 
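Keeping several receives posted at once, as suggested above, takes only a little code. The sketch below is not part of the article's sample; the buffer count and packet size are assumptions you would tune for your own traffic, and the methods are meant to live inside a receiver class (it needs the System.Net and System.Net.Sockets namespaces).

const int PendingReceives = 8;      // assumption: number of outstanding receives to keep posted
const int MaxPacketSize = 1280;     // stay under the conservative MTU discussed above

void StartReceiving(Socket socket)
{
    // Post several receives up front so a packet arriving while one
    // callback is running still finds a waiting buffer.
    for (int i = 0; i < PendingReceives; i++)
    {
        PostReceive(socket, new byte[MaxPacketSize]);
    }
}

void PostReceive(Socket socket, byte[] buffer)
{
    EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    socket.BeginReceiveFrom(buffer, 0, buffer.Length, SocketFlags.None,
        ref remote, new AsyncCallback(ReceiveCallback), new object[] { socket, buffer });
}

void ReceiveCallback(IAsyncResult result)
{
    object[] state = (object[])result.AsyncState;
    Socket socket = (Socket)state[0];
    byte[] buffer = (byte[])state[1];
    EndPoint remote = new IPEndPoint(IPAddress.Any, 0);
    try
    {
        int bytesRead = socket.EndReceiveFrom(result, ref remote);
        // Process buffer[0..bytesRead) here before reposting.
    }
    catch (SocketException) { /* for example, an ICMP port unreachable response */ }
    PostReceive(socket, buffer);    // repost the same buffer immediately
}

Because each callback reposts its buffer as soon as it finishes, there are always several buffers waiting while one packet is being processed, which narrows the window in which the stack has to drop an incoming datagram.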
And for applications using the Internet, packet loss can be a major issue, reaching more than 5 percent in some areas, with worse packet loss experienced on specific links or at specific times. Most UDP-based applications need to implement some kind of reliability mechanism. Reliability is often seen in the form of guarantees. The three guarantees you may be familiar with in TCP are: non-duplication (each packet is received once at most); transmission (each packet is received at least once or an error is reported to the sender); and packet order (if packet A was sent before packet B on the same TCP connection, it will be read before packet B). Figure 4 shows an example of the problems caused by dropped and out-of-order packets. Figure 4** Dropped and Out-of-Order Packets ** With UDP, each of those guarantees, if desired, needs to be implemented by the application. If a specific guarantee is not needed or can be relaxed, you have more freedom to write a more efficient protocol. In my version of the protocol, I deal with dropped packets by resending them—but I send only the dropped packets to conserve bandwidth. To do that, the application needs to detect which packets were dropped and which arrived successfully. The common method (which is also used by TCP) is for the receiver to send an acknowledgment (ACK) for packets received. The protocol can save bandwidth if the receiver sends a single ACK for several received packets. To facilitate this, each sent packet will normally have a sequence number or a unique identifier. Different packet-loss handling protocols select different ACK methods and timeouts for handling a packet that did not receive an ACK. They also differ in their handling of out-of-order packets. To help test the UDP samples presented in this article, I wrote a utility called UDPForward that can simulate packet drops and delays. UDPForward is available in the code download for this article. If you try the samples in this article with simulated packet loss, you can see how packet loss and delay affects the receiver of messages, and how the reliability protocol I implemented resolves this issue. You can also use UDPForward to test your own UDP applications and get a feel for their behavior in the presence of packet loss. Sample Protocol Implementation To demonstrate how an application can use UDP to tailor a specific protocol for specific needs, I will implement a sample protocol for transmitting the current location of a PlayerInfo object. Using UDP, I will choose suitable ACK behaviors as well as control and communication parameters such as buffer sizes and timeouts. The receiver of a sequence of PlayerInfo packets normally cares only about the last known or current state. Using this property, which is applicable to other applications such as streaming audio and video, I can better handle a dropped packet. If a newer update to the same information becomes available, I can avoid the ACK and the delay until the packet is resent. This approach results in better bandwidth utilization and lower delay. Why would you ever resend a packet if you can constantly receive updated information? For the PlayerInfo application, I'm willing to accept a momentary stale display, but not a permanent one. What if the packet notifying the movement of player 7 was lost and therefore player 7 was not moved afterwards? Without an ACK, I will never know. In addition, the sender can determine when the receiver is no longer receiving any packets. 
And since the sender may send a packet twice, the receiver implements a mechanism to drop duplicate and stale copies of the packet received. The important point is that this is an application-specific decision. In some applications, including the PlayerInfoBroadcaster example shown later and some voice streaming implementations, you can choose a different tradeoff. The protocol I chose to implement optimizes sending PlayerInfo data by only requiring one ACK per PlayerInfo send if no newer PlayerInfo data was received. The sender will send one packet per update of PlayerInfo data, together with a constantly-increasing sequence number. The sender maintains a list of "waiting for ACK" data, but the sender would only process an ACK for the last update sent (so old ACKs are discarded). The sender also occasionally checks the waiting for ACK list and resends the PlayerInfo data for any item that has been waiting too long. The receiver needs to perform two duties: duplicate detection and ACK sending. This implementation sends ACKs immediately for new data as well as for duplicate data, but duplicates are detected (since I maintain the last sequence received for this PlayerID) and will be dropped before they are passed to the application. The application also detects a broken connection if too many attempts to resend data fail. In the real world, the user may be informed or some error recovery could be attempted. I first define PacketInfo: public struct PacketInfo { public int sequenceNumber; public long sentTicks; public int retryCount; } The sender initiates the socket and listens for ACKs (see Figure 5). The sender maintains a list of ACKs that are needed and will handle incoming ACKs by retrieving the PlayerID and sequenceNumber from the ACK packet, then removing an entry from the ACK-needed list if the sequence number matches (see Figure 6). Figure 6 Maintaining the List of ACKs static void OnReceiveAck(IAsyncResult result) { UDP_protocol uf = (UDP_protocol)result.AsyncState; EndPoint remoteEndPoint = new IPEndPoint(0, 0); int bytesRead = uf.senderSocket.EndReceiveFrom( result, ref remoteEndPoint); uf.ProcessIncomingAck(uf.bufferAck, bytesRead); uf.ListenForAcks(); } void ProcessIncomingAck(byte[] packetData, int size) { ... // remove from 'needing ACK' list if ACK for newest info if (sentPacketInfo[playerID].sequenceNumber == sequenceNumber) { Console.WriteLine("Received current ACK on {0} {1}", playerID, sequenceNumber); ProcessPacketAck(playerID, sequenceNumber); } ... 
} Figure 5 Listening for ACKs void OpenSenderSocket(IPAddress ip, int port) { senderEndPoint = new IPEndPoint(ip, port); senderSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint bindEndPoint = new IPEndPoint(IPAddress.Any, 0); senderSocket.Bind(bindEndPoint); // any free port ListenForAcks(); // start 'ack waiting' thread ThreadPool.QueueUserWorkItem(CheckPendingAcks, 0); } public void ListenForAcks() { EndPoint endPoint = new IPEndPoint(0, 0); senderSocket.BeginReceiveFrom (bufferAck, 0, SizeAck, SocketFlags.None, ref endPoint, onReceiveAck, (object)this); } When sending data, the sequenceNumber and PlayerID are written to the packet, the packet is sent, and the list of ACKs that are needed is updated, including current time, which will be used to decide if a resend is needed: void SendPlayerInfoData(byte playerID, byte locationX, byte locationY) { sequenceNumber++; ... SendPlayerInfo(info, sequenceNumber, false); } void SendPlayerInfo(PlayerInfo info, int sequenceNumber, bool retry) { ... UpdatePlayerInfo(info, sequenceNumber, retry); senderSocket.SendTo(packetData, SizeData, SocketFlags.None, senderEndPoint); } Occasionally, the sender will resend packets that were not ACKed. If too many resends are attempted, then the sender will decide that the connection is broken (see Figure 7). Figure 7 Resending Packets public void CheckPendingAcks(object o) { Console.WriteLine("Checking for missing ACKs"); byte currentPosition = 0; while(!closingSender) { Thread.Sleep(10); ResendNextPacket(ref currentPosition, 10, DateTime.Now.Ticks - TimeSpan.TicksPerSecond / 10); } } void ResendPlayerInfo(byte playerID) { SendPlayerInfo(sentPlayerInfo[playerID], sentPacketInfo[playerID].sequenceNumber, true); sentPacketInfo[playerID].retryCount++; sentPacketInfo[playerID].sentTicks = DateTime.Now.Ticks; Console.WriteLine("Resending packet {0} {1} {2}", playerID, sentPacketInfo[playerID].sequenceNumber, sentPacketInfo[playerID].retryCount); } void ResendNextPacket(ref byte currentPosition, byte maxPacketsToSend, long olderThan) { ... if ((sentPacketInfo[newPosition].sequenceNumber >= 0) && (sentPacketInfo[newPosition].sentTicks < olderThan)) { if (sentPacketInfo[newPosition].retryCount > 4) { Console.WriteLine("Too many retries, should fail connection"); } else { ResendPlayerInfo(newPosition); packetsSent++; } } } The receiver will just listen for packets and check for duplicate or old data (see Figure 8). An ACK here is sent immediately. More elaborate implementations could reduce the number of ACKs by sending one ACK for several packets.
Figure 8 Listening for Packets void OpenReceiverSocket(int port) { EndPoint receiverEndPoint = new IPEndPoint(IPAddress.Any, port); receiverSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); receiverSocket.Bind(receiverEndPoint); ListenForData(); } public void ListenForData() { EndPoint endPoint = new IPEndPoint(0, 0); receiverSocket.BeginReceiveFrom (bufferData, 0, SizeData, SocketFlags.None, ref endPoint, onReceiveData, (object)this); } static void OnReceiveData(IAsyncResult result) { UDP_protocol uf = (UDP_protocol)result.AsyncState; EndPoint remoteEndPoint = new IPEndPoint(0, 0); int bytesRead = uf.receiverSocket.EndReceiveFrom(result, ref uf.receiverAckEndPoint); uf.ProcessIncomingPlayerInfo(uf.bufferData, bytesRead); uf.ListenForData(); } void SendAck(byte playerID, int sequenceNumber) { ... // create packet with playerID and sequenceNumber receiverSocket.SendTo(packetData, SizeAck, SocketFlags.None, receiverAckEndPoint); } void ProcessIncomingPlayerInfo(byte[] packetData, int size) { ... // create packet with playerID and sequenceNumber if (sequenceNumber >= recvSequenceNumber[info.playerID]) SendAck(info.playerID, sequenceNumber); if (sequenceNumber > recvSequenceNumber[info.playerID]) { ... // update internal data and process packet Console.WriteLine("Received update: {0} ({1},{2})", info.playerID, info.locationX, info.locationY); } // else older packet, don't process or send ACK } Broadcast and Multicast A crucial feature supported by UDP but not TCP is sending a single message to multiple destinations. UDP supports both broadcast and multicast communication. Broadcasting a packet makes it available within a subnet mask. Multicasting is subscription-based in the sense that listeners have to join a multicast group to receive messages sent to that group. A multicast group uses an IP address in the range of 224.0.0.0 through 239.255.255.255. To listen on a multicast group, you need to indicate that you want multicast messages from a specific multicast group by setting the add membership SocketOption after calling Bind, but before calling BeginReceiveFrom: IPAddress multicastGroup = IPAddress.Parse("239.255.255.19"); socket.Bind(...); socket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(multicastGroup)); socket.BeginReceiveFrom(...); You do not need to set SocketOption in order to send to a multicast group, as shown here: const int ProtocolPort = 3001; Socket sendSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint sendEndPoint = new IPEndPoint(multicastGroup, ProtocolPort); sendSocket.SendTo(buffer, bufferUsed, SocketFlags.None, sendEndPoint); If your application does not know the IP address or machine name of, say, a game server, that server can listen on a well-known multicast address, wait for a specific request, and respond directly to the application that made the inquiry. In the sample I built that follows, the inquiring application will also use a dynamic port to receive the responses from zero, one, or more game servers. It also shows serialization of more complex data types. 
First I'll define the classes representing FindRequest and FindResult, as shown in the following: class FindRequest { public int serviceID; public int responsePort; public FindRequest() {} public FindRequest(byte[] packet) {...} public int SerializeToPacket(byte[] packet) {...} } class FindResult { public int serviceID; public FindResult() {} public FindResult(byte[] packet) {...} public int SerializeToPacket(byte[] packet) {...} } To advertise itself, a server would open a socket, set multicast options on it, and process any requests, as shown in Figure 9. Figure 9 Starting a Server Session public AsyncCallback onReceiveRequest = new AsyncCallback(OnReceiveRequest); IPAddress multicastGroup = IPAddress.Parse("239.255.255.19"); public void FindMe(int blockFor, int serviceID) { responseSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint responseEndPoint = new IPEndPoint(IPAddress.Any, multicastPort); responseSocket.Bind(responseEndPoint); responseSocket.SetSocketOption(SocketOptionLevel.IP, SocketOptionName.AddMembership, new MulticastOption(multicastGroup)); ListenForRequests(); } public void ListenForRequests() { EndPoint endPoint = new IPEndPoint(0, 0); responseSocket.BeginReceiveFrom(findRequestBuffer, 0, 12, SocketFlags.None, ref endPoint, onReceiveRequest, (object)this); } When a request arrives, the server identifies the sender of the request either by looking at the remoteEndPoint obtained or by including this data in the packet. The server then sends a response announcing that it is present and active (see Figure 10). The client looking for a server would send a request, then wait a given amount of time for a response as shown in Figure 11. Figure 11 Client-Side Multicast Request public void Finder(int waitFor, int serviceID) { // start listening for responses before sending the request responseSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint responseEndPoint = new IPEndPoint(IPAddress.Any, 0); responseSocket.Bind(responseEndPoint); responsePort = ((IPEndPoint)(responseSocket.LocalEndPoint)).Port; ListenForResponses(); // prepare request FindRequest fr = new FindRequest(); fr.serviceID = serviceID; fr.responsePort = responsePort; int requestLength = fr.SerializeToPacket(findRequestBuffer); //send request Socket requestSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint requestEndPointDestination = new IPEndPoint(multicastGroup, multicastPort); requestSocket.SendTo(findRequestBuffer, requestLength, SocketFlags.None, requestEndPointDestination); requestSocket.Close(); //wait for responses Thread.Sleep(waitFor); } public void ListenForResponses() { EndPoint endPoint = new IPEndPoint(0, 0); responseSocket.BeginReceiveFrom(findResultBuffer, 0, 8, SocketFlags.None, ref endPoint, onReceiveResponse, (object)this); } static void OnReceiveResponse(IAsyncResult result) { UDP_finder uf = (UDP_finder)result.AsyncState; EndPoint remoteEndPoint = new IPEndPoint(0, 0); int bytesRead = uf.responseSocket.EndReceiveFrom(result, ref remoteEndPoint); FindResult response = new FindResult(uf.findResultBuffer); uf.ListenForResponses(); Console.WriteLine("Found service {0}", response.serviceID); } Figure 10 Responding to Requests static void OnReceiveRequest(IAsyncResult result) { UDP_finder uf = (UDP_finder)result.AsyncState; EndPoint remoteEndPoint = new IPEndPoint(0, 0); int bytesRead = uf.responseSocket.EndReceiveFrom(result, ref remoteEndPoint); FindRequest request = new FindRequest(uf.findRequestBuffer); // prepare
result FindResult fr = new FindResult(); fr.serviceID = uf.currentServiceID; int requestLength = fr.SerializeToPacket(uf.findResultBuffer); //send result Socket requestSocket = new Socket(AddressFamily.InterNetwork, SocketType.Dgram, ProtocolType.Udp); EndPoint requestEndPointDestination = new IPEndPoint(((IPEndPoint)remoteEndPoint).Address, request.responsePort); requestSocket.SendTo(uf.findResultBuffer, requestLength, SocketFlags.None, requestEndPointDestination); requestSocket.Close(); uf.ListenForRequests(); } Another benefit of multicast is reduced traffic and server load when sending the same message to multiple recipients, since one packet sent can be read by many machines on the local network when multiple machines are interested in the same data. Copying the same file to many machines can take advantage of multicast and save bandwidth and reduce CPU usage for the sending server. Security When building a UDP (or any) server, security is an important consideration. The first concern is a denial of service attack, where an attacker sends many requests and consumes too much of the server's resources, especially CPU time. It is common to implement throttling so that, if messages are received too quickly, some of them will be rejected or ignored. A more difficult problem to solve is authenticating incoming messages. Since an attacker can both send messages and observe sent messages, you can use standard cryptographic methods of verifying an identity, including signing and encrypting packets. Signing a message can prevent tampering and may contain some form of authorization. Encryption can prevent third parties from reading the information. However, UDP messages pose the additional problem of controlling the response address. You can find out the source IP address by examining the EndPoint class passed to EndReceiveFrom, but this source address is not verified by the UDP stack. It is just the claimed source address—an attacker can craft UDP packets that fake this. An attacker can also copy a valid request spotted on the network, change the source address, and send the modified copy. Often, the application will either validate the claimed source address against the message authentication or ignore the claimed source address altogether and pass the reply information in the packet, where tampering is prevented by message signing. Sometimes you may choose to encrypt a message rather than just sign it. Encryption can use symmetric algorithms such as Data Encryption Standard (DES) or asymmetric algorithms (using public/private key cryptography). Note that for messages of several kilobytes or larger, compressing the message before encrypting it can not only reduce bandwidth, but also reduce encryption time. That is because encryption is a CPU-intensive operation and the time it takes is a function of the number of bytes (or blocks of bytes) to encrypt. If the compression reduced the data size by 75 percent, the encryption stage will be reduced similarly.
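To make the signing idea above concrete, here is a small illustrative sketch that is not part of the original article: it appends an HMAC-SHA256 signature to an outgoing payload and verifies it on receipt, using a pre-shared key. The key handling and packet layout are assumptions; a real protocol would also need replay protection (for example, by signing the sequence number) and ideally a constant-time comparison.

using System;
using System.Security.Cryptography;

static class PacketSigner
{
    // Pre-shared key; in practice this would be provisioned per client or per session.
    static readonly byte[] key = Convert.FromBase64String("c2FtcGxlLXByZS1zaGFyZWQta2V5");

    // Returns a new buffer: the payload followed by a 32-byte HMAC-SHA256 signature.
    public static byte[] Sign(byte[] payload, int length)
    {
        using (HMACSHA256 hmac = new HMACSHA256(key))
        {
            byte[] mac = hmac.ComputeHash(payload, 0, length);
            byte[] signed = new byte[length + mac.Length];
            Buffer.BlockCopy(payload, 0, signed, 0, length);
            Buffer.BlockCopy(mac, 0, signed, length, mac.Length);
            return signed;
        }
    }

    // Recomputes the signature over the body and compares it with the trailing bytes.
    public static bool Verify(byte[] packet, int length)
    {
        const int MacSize = 32;
        if (length <= MacSize) return false;
        using (HMACSHA256 hmac = new HMACSHA256(key))
        {
            byte[] expected = hmac.ComputeHash(packet, 0, length - MacSize);
            bool match = true;
            for (int i = 0; i < MacSize; i++)
                match &= (expected[i] == packet[length - MacSize + i]);
            return match;
        }
    }
}

A sender would call Sign on the serialized packet just before SendTo, and a receiver would call Verify on the received buffer before handing it to the processing code; packets that fail verification are simply dropped.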
Windows Communication Foundation Windows Communication Foundation (WCF), formerly codenamed "Indigo," provides a new way to exchange messages between distributed applications. WCF can be used to expose Web services, a standardized way to exchange messages and expose services using SOAP and XML, and allow an easier interoperability experience. In the past, Web services were usually exposed over HTTP, but specifications for SOAP over UDP are available at soap-over-udp.pdf. WCF supports multiple network protocols (most notably TCP) and is extensible to other network protocols by writing a WCF transport channel. WCF supports UDP through its extensibility model, and the code for a basic UDP channel will be provided as a code sample in the WinFX® SDK. WCF sends messages as XML representations of data passed between applications. It also provides a programming model that makes exchanging messages between applications easy. In addition to one-way messages, which are the equivalent of sending a UDP packet in the code samples, support is provided for duplex (two-way) and request-reply message exchange. Although UDP does not provide reliability features out of the box, WCF supports a standards-based reliability protocol through WS-ReliableMessaging. The WS-ReliableMessaging protocol can work with any WCF transport channel and defines a standard method to send and request ACKs, resend a dropped message, and ensure message delivery. WS-ReliableMessaging exposes three different guarantees: AtMostOnce, AtLeastOnce, and InOrder. Using this protocol layer implementation can save you from hand-coding your own reliability layer. The code to send a message using UDP and WS-ReliableMessaging isn't very complex. I first define an address, binding, and contract. The contract outlines the data that I want passed in the message. I will define a one-way contract, similar to the one implemented previously in this article: [ServiceContract] public interface IUpdatePlayerInfoContract { [OperationContract(IsOneWay = true)] void Update(byte playerID, byte locationX, byte locationY); } You could also choose to implement a request-reply contract by making the Update method return a value and removing the one-way reference. The binding is the link between the transfer protocol and the data. I have to specify that I want to use the UDP transport, and that I want to use a ReliableMessaging channel. The WCF-based implementation of UDP is a one-way channel, but I need both a listener and a sender on each side—after all, the receiver (server) needs a return address to be able to send back ACKs. Support for a return address feature in an inherently one-way transport channel is provided by the CompositeDuplex channel, as shown in this code snippet: BindingElementCollection bindingElements = new BindingElementCollection(); ReliableSessionBindingElement session = new ReliableSessionBindingElement(); session.Ordered = true; bindingElements.Add(session); bindingElements.Add(new CompositeDuplexBindingElement()); bindingElements.Add(new UdpTransportBindingElement()); Binding playerInfoBinding = new CustomBinding(bindingElements); Had I opted for no reliability, the BindingElementCollection would contain only the UdpTransportBindingElement, but the rest of the code would stay the same. The address is simply the address the UDP server will listen on.
In the case of the UDP transport, this address includes only a machine name and port: Uri playerInfoAddress = new Uri( "soap.udp://localhost:16000/"); The code on the server should handle a change in the player information, like so: class playerInfoService : IUpdatePlayerInfoContract { public void Update(byte playerID, byte locationX, byte locationY) { Console.WriteLine("Player {0}: {1},{2}", playerID, locationX, locationY); } } The server code will also include something like this to set up the endpoint and start listening: using(ServiceHost service = new ServiceHost(typeof(playerInfoService))) { service.AddServiceEndpoint(typeof(IUpdatePlayerInfoContract), playerInfoBinding, playerInfoAddress); service.Open(); ... // more service code here } The client would need to define a proxy—a class that will be used to call the server method: public class PlayerInfoProxy : ClientBase<IUpdatePlayerInfoContract>, IUpdatePlayerInfoContract { public PlayerInfoProxy(Binding binding, EndpointAddress address) : base(binding, address) {} public void Update(byte playerID, byte locationX, byte locationY) { base.InnerProxy.Update(playerID, locationX, locationY); } } And with an instance of a proxy, the client can send a message to update the PlayerInfo: using(PlayerInfoProxy playerInfoProxy = new PlayerInfoProxy( playerInfoBinding, new EndpointAddress(playerInfoAddress))) { for (byte i = 0; i < 10; ++i) { playerInfoProxy.Update(1, i, i); Console.WriteLine("Sent"); } } You can also control the reliable messaging parameters. For example, if message ordering is not important, a more efficient exchange would take place if you set the session's Ordered property to false: session.Ordered = false; You can easily configure the channel selection using a configuration file and avoid the code setting bindingElements, allowing an administrator to change the transport selection, the port, address, or even protocol, and replace UDP with TCP without changing the program source. You can configure security on the exchange in a similar manner; WCF supports authentication either using cryptographic signatures (x.509 certificates) or Windows Authentication. It also allows for validating, signing, and encrypting messages. If the provided reliability mechanism does not fit your application, you can use WCF extensibility to implement your own reliability protocol on top of UDP by implementing a custom reliability channel. Similarly, you can support other transport protocols and security mechanisms. Conclusion In this article, I've discussed both the benefits and challenges of implementing UDP communication in your own apps. You've seen two ways to use UDP in your .NET-based applications. Using the Socket class will be useful if you want to provide a full implementation of the application protocol and do the bookkeeping yourself. Meanwhile, using the Windows Communication Foundation classes will enable you to rely on a standardized implementation, reducing the code you need to write, optimize, and test. Either way, with UDP in your toolbox, you have more options available when designing your next networked application. Note that the code samples shown here use IPv4, which is the version of IP most commonly used. Converting the sample code to use IPv6 is usually simple, and requires you to specify AddressFamily.InterNetworkV6 instead of AddressFamily.InterNetwork, IPAddress.IPv6Any instead of IPAddress.Any, and SocketOptionLevel.IPv6 instead of SocketOptionLevel.IP. You must also replace IPv4 addresses with IPv6 addresses.
For simplicity, I skipped most error checking (such as bounds checking) in the code samples. For safety and stability, don't forget these details in your own code. Yaniv Pessach is a Software Design Engineer at Microsoft working on Windows Communication Foundation Core Messaging. He contributed to the WS-Discovery and SOAP-over-UDP specifications, and is co-author of the upcoming book An Introduction to Windows Communication Foundation (Addison-Wesley, 2006). Reach Yaniv at.
https://docs.microsoft.com/en-us/archive/msdn-magazine/2006/february/udp-delivers-take-total-control-of-your-networking-with-net-and-udp
2020-01-17T18:54:34
CC-MAIN-2020-05
1579250590107.3
[array(['images/cc163648.fig01.gif', 'Figure 1 UDP and TCP Request/Response Models'], dtype=object) array(['images/cc163648.fig04.gif', 'Figure 4 Dropped and Out-of-Order Packets'], dtype=object)]
docs.microsoft.com
April 2019 Volume 34 Number 4 [Machine Learning] Closed-Loop Intelligence: A Design Pattern for Machine Learning By Geoff Hulten | April 2019 There are many great articles on using machine learning to build models and deploy them. These articles are similar to ones that teach programming techniques—they give valuable core skills in great detail. But to go beyond building toy examples you need another set of skills. In traditional systems these skills are called things like software engineering, software architecture or design patterns—approaches to organizing large software systems and the teams of people building them to achieve your desired impact. This article introduces some of the things you’ll need to think about when adding machine learning to your traditional software engineering process, including: Connecting machine learning to users: What it means to close the loop between users and machine learning. Picking the right objective: Knowing what part of your system to address with machine learning, and how to evolve this over time. Implementing with machine learning: The systems you’ll need to build to support a long-lived machine learning-based solution that you wouldn’t need to build for a traditional system. Operating machine learning systems: What to expect when running a machine learning-based system over time. Of course, the first question is determining when you need to use machine learning. One key factor in the decision is how often you think you’ll need to update an application before you have it right. If the number is small—for example, five or 10 times—then machine learning is probably not right. But if that number is large—say, every hour for as long as the system exists—then you might need machine learning. There are four situations that clearly require a number of updates to get right: - Big Problems: Some problems are big. They have so many variables and conditions that need to be addressed that they can’t be completed in a single shot. - Open-Ended Problems: Many problems lack a single, fixed solution, and require services that live and grow over long periods of time. - Time-Changing Problems: If your domain changes in ways that are unpredictable, drastic or frequent, machine learning might be worth considering. - Intrinsically Hard Problems: Tough problems like speech recognition and weather simulation and prediction can benefit from machine learning, but often only after years of effort spent gathering training data, understanding the problems and developing intelligence. If your problem has one of these properties, machine learning might be right. If not, you might be better off starting with a more traditional approach. If you can achieve your goal with a traditional approach, it will often be cheaper and simpler. Connecting Machine Learning to Users Closing the loop is about creating a virtuous cycle between the intelligence of a system and the usage of the system. As the intelligence gets better, users get more benefit from the system (and presumably use it more) and as more users use the system, they generate more data to make the intelligence better. Consider a search engine. A user types a query and gets some answers. If she finds one useful, she clicks it and is happy. But the search engine is getting value from this interaction, too. When users click answers, the search engine gets to see which pages get clicked in response to which queries, and can use this information to adapt and improve. 
The more users who use the system, the more opportunities there are to improve. But a successful closed loop doesn’t happen by accident. In order to make one work you need to design a UX that shapes the interactions between your users and your intelligence, so they produce useful training data. Good interactions have the following properties: The components of the interaction are clear and easy to connect. Good interactions make it possible to capture the context the user and application were in at the time of the interaction, the action the user took and the outcome of the interaction. For example, a book recommender must know what books the user owns and how much they like them (the context); what books were recommended to the user and if they bought any of them (the action); and if the user ended up happy with the purchase or not (the outcome). The outcome should be implicit and direct. A good experience lets you interpret the outcome of interactions implicitly, by watching the user use your system naturally (instead of requiring them to provide ratings or feedback). Also, there won’t be too much time or too many extraneous interactions between the user taking the action and getting the outcome. Have no (or few) biases. A good experience will be conscious of how users experience the various possible outcomes and won’t systematically or unconsciously drive users to under-report or over-report categories of outcomes. For example, every user will look at their inbox in an e-mail program, but many will never look at their junk folder. So the bad outcome of having a spam message in the inbox will be reported at a much higher rate than the bad outcome of having a legitimate message in the junk folder. Does not contain feedback loops. A closed loop can suffer from feedback that compounds mistakes. For example, if the model makes a mistake that suppresses a popular action, users will stop selecting the action (because it’s suppressed) and the model may learn that it was right to suppress the action (because people stopped using it). To address feedback loops, an experience should provide alternate ways to get to suppressed actions and consider a bit of randomization to model output. These are some of the basics of connecting machine learning to users. Machine learning will almost always be more effective when the UX and the machine learning are designed to support one another. Doing this well can enable all sorts of systems that would be prohibitively expensive to build any other way. Picking the Right Objective One interesting property of systems built with machine learning is this: They perform worst on the day you ship them. Once you close the loop between users and models, your system will get better and better over time. That means you might want to start with an easy objective, and rebalance toward more difficult objectives as your system improves. Imagine designing an autonomous car. You could work on this until you get it totally perfect, and then ship it. Or you could start with an easier sub-problem—say, forward collision avoidance. You could actually build the exact same car for forward collision avoidance that you would build for fully autonomous driving—all the controls, all the sensors, everything. But instead of setting an objective of full automation, which is extremely difficult, you set an objective of reducing collisions, which is more manageable. 
Because avoiding collisions is valuable in its own right, some people will buy your car and use it—yielding data that you can leverage with machine learning to build better and better models. When you’re ready, you can set a slightly harder objective, say lane following, which provides even more value to users and establishes a virtuous cycle as you ultimately work toward an autonomous vehicle. This process might take months. It might take years. But it will almost certainly be cheaper than trying to build an autonomous car without a closed loop between users and your machine learning. You can usually find ways to scale your objectives as your models get better. For instance, a spam filter that initially moves spam messages to a junk folder could later improve to delete spam messages outright. And a manufacturing defect detection system might flag objects for further inspection as a first objective, and later discard defective objects automatically as models improve. It’s important to set an objective that you can achieve with the models you can build today—and it’s great when you can grow your machine learning process to achieve more and more interesting objectives over time. Implementing with Machine Learning Systems built to address big, open-ended, time-changing or intrinsically hard problems require many updates during their lifetimes. The implementation of the system can make these updates cheap and safe—or they can make them expensive and risky. There are many options for making a system based on machine learning more flexible and efficient over time. Common investments include: The Intelligence Runtime To use machine learning you need to do the basics, like implement a runtime that loads and executes models, featurizes the application context and gives users the right experiences based on what the models say. A runtime can be simple, like linking a library into your client, or it can be complex, supporting things like: - Changes to the types of models used over time, moving from simple rules toward more complex machine learning approaches as you learn more about your problem. - Combining models that run on the client, in the service, and in the back end, and allowing models to migrate between these locations over time based on cost and performance needs. - Supporting reversion when deployments go wrong, and ways to rapidly override specific costly mistakes that machine learning will almost certainly make. Intelligence Management As new models become available, they must be ingested and delivered to where they’re needed. For example, models may be created in a lab at corporate headquarters, but must execute on clients across the world. Or maybe the models run partially in a back end and partially in a service. You can rely on the people producing the models to do all the deployments, the verification, and keep everything in sync, or you could build systems to support this. Intelligence Telemetry An effective telemetry system for machine learning collects data to create increasingly better models over time. The intelligence implementation must decide what to observe, what to sample, and how to digest and summarize the information to enable intelligence creation—and how to preserve user privacy in the process. Telemetry can be very expensive and telemetry needs will change during the lifetime of a machine learning-based system, so it often makes sense to implement tools to allow adaptability while controlling costs. 
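The article itself contains no code, but a short, purely illustrative sketch may help make the context/action/outcome idea from the telemetry discussion concrete. The class and property names below are invented for the book-recommender example and are not from the article.

using System;

// One telemetry record: the context the model saw, the action the system took,
// and the implicit outcome observed later.
public class InteractionRecord
{
    public DateTime TimestampUtc { get; set; }
    public string[] BooksOwned { get; set; }       // context
    public string RecommendedBookId { get; set; }  // action
    public bool Purchased { get; set; }            // outcome
}

public class TelemetrySampler
{
    private readonly double sampleRate;
    private readonly Random random = new Random();

    public TelemetrySampler(double sampleRate)
    {
        this.sampleRate = sampleRate;
    }

    // Uniform sampling keeps telemetry costs bounded; rare but valuable outcomes
    // (such as purchases) are kept at a higher rate than routine impressions.
    public bool ShouldLog(InteractionRecord record)
    {
        return record.Purchased || random.NextDouble() < sampleRate;
    }
}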
The Intelligence Creation Environment For machine learning-based systems to succeed, there needs to be a great deal of coordination between the runtime, delivery, monitoring and creation of your models. For example, in order to produce accurate models, the model creator must be able to recreate exactly what happens at runtime, even though the model creator’s data comes from telemetry and runs in a lab, while the runtime data comes from the application and runs in context of the application. Mismatches between model creation and runtime are a common source of bugs, and machine learning professionals often aren’t the best people to track these issues down. Because of this, an implementation can make machine learning professionals much more productive by providing a consistent intelligence creation experience. For all of these components (the runtime, the intelligence management, intelligence telemetry and intelligence creation) you might implement something bare bones that does the basics and relies on ongoing engineering investments to adapt over time. Or you might create something flexible with slick tools for non-engineers so they can rebalance toward new objectives cheaply, quickly and with confidence that they won’t mess anything up. Intelligence Orchestration Intelligence orchestration is a bit like car racing. A whole team of people builds a car, puts all the latest technology into it, and gets every aerodynamic wing, ballast, gear-ratio, and intake valve set perfectly. They make an awesome machine that can do things no other machine can do. And then someone needs to get behind the wheel, take it on the track and win! Intelligence orchestrators are those drivers. They take control of the Intelligent System and do what it takes to make it achieve its objectives. They use the intelligence creation and management systems to produce the right intelligence at the right time and combine it in the most useful ways. They control the telemetry system, gathering the data needed to make their models better. And they deal with all the mistakes and problems, balancing everything so that the application produces the most value it can for users and for your business. Right about now you might be saying, “Wait, I thought machine learning was supposed to tune the system throughout its lifecycle. What is this? Some kind of joke?” Unfortunately, no. Artificial intelligence and machine learning will only get you so far. Orchestration is about taking those tools and putting them in the best situations so they can produce value—highlighting their strengths and compensating for their weaknesses—while also reacting as things change over time. Orchestration might be needed because: Your objective changes: As you work on something, you’ll come to understand it better. You might realize that you set the wrong objective to begin with, and want to adapt. Heck, maybe the closed loop between your users and your models turns out to be so successful that you want to aim higher. Your users change: New users will come (and you will cheer) and old users will leave (and you might cry), but these users will bring new contexts, new behavior, and new opportunities to adapt your models. The problem changes: The approaches and decisions you made in the past might not be right for the future. Sometimes a problem might be easy (like when all the spammers are on vacation). At other times it might get very hard (like near the holidays). 
As a problem changes, there’s almost always opportunity to adapt and achieve better outcomes through orchestration. The quality of your models changes: Data unlocks possibilities. Some of the most powerful machine learning techniques aren’t effective with “small” data, but become viable as users come to your system and you get lots and lots of data. These types of changes can unlock all sorts of potential to try new experiences or target more aggressive objectives. The cost of running your system changes: Big systems will constantly need to balance costs and value. You might be able to change your experience or models in ways that save a lot of money, while only reducing value to users or your business by a little. Someone tries to abuse your system: Unfortunately, the Internet is full of trolls. Some will want to abuse your service because they think that’s fun. Most will want to abuse your service (and your users) to make money—or to make it harder for you to make money. Left unmitigated, abuse can ruin your system, making it such a cesspool of spam and risk that users abandon it. One or more of these will almost certainly happen during the lifecycle of your machine learning-based system. By learning to identify them and adapt, you can turn these potential problems into opportunities. Implementing machine learning-based systems and orchestrating them are very different activities. They require very different mindsets. And they’re both absolutely critical to achieving success. Good orchestrators will: - Be domain experts in the business of your system so they understand your users’ objectives instinctively. - Comprehend experience and have the ability to look at interactions and make effective adaptations in how model outputs are presented to users. - Understand the implementation so they know how to trace problems and have some ability to make small improvements. - Be able to ask questions of data and understand and communicate the results. - Know applied machine learning and be able to control your model creation processes and inject new models into the system. - Get satisfaction from making a system execute effectively day in and day out. Wrapping Up Machine learning is a fantastic tool. But getting the most out of machine learning requires a lot more than building a model and making a few predictions. It requires adding the machine learning skills to the other techniques you use for organizing large software systems and the teams of people building them. This article gave a very brief overview of one design pattern for using machine learning at scale, the Closed-Loop Intelligence System pattern. This includes knowing when you need machine learning; what it means to close the loop between users and machine learning; how to rebalance the system to achieve more meaningful objectives over time; what implementations can make it more efficient and safer to adapt; and some of the things that might happen as you run the system over time. Artificial intelligence and machine learning are changing the world, and it’s an exciting time to be involved. Geoff Hulten is the author of “Building Intelligent Systems” (intelligentsystem.io/book/). He’s managed applied machine learning projects for more than a decade and taught the master's level machine learning course at the University of Washington. 
His research has appeared in top international conferences, received thousands of citations, and won a SIGKDD Test of Time award for influential contributions to the data mining research community that have stood the test of time. Thanks to the following Microsoft technical expert for reviewing this article: Dr. James McCaffrey Discuss this article in the MSDN Magazine forum
https://docs.microsoft.com/en-us/archive/msdn-magazine/2019/april/machine-learning-closed-loop-intelligence-a-design-pattern-for-machine-learning
2020-01-17T19:39:36
CC-MAIN-2020-05
1579250590107.3
[]
docs.microsoft.com
In-Memory OLTP and Memory-Optimization Applies to: SQL Server, Azure SQL Database, Azure Synapse Analytics (SQL DW), Parallel Data Warehouse. In-Memory OLTP can significantly improve the performance of transaction processing, data ingestion and data load, and transient data scenarios. To jump into the basic code and knowledge you need to quickly test your own memory-optimized table and natively compiled stored procedure, see the quickstart topic. We have uploaded to YouTube a 17-minute video explaining In-Memory OLTP on SQL Server, and demonstrating the performance benefits. For a more detailed overview of In-Memory OLTP and a review of scenarios that see performance benefits from the technology, see the overview topic. The In-Memory OLTP technology is available in SQL Server 2016 (13.x) and SQL Server 2017 (14.x), as well as in Azure SQL Database. The Transact-SQL surface area has been increased to make it easier to migrate database applications. Support for performing ALTER operations for memory-optimized tables and natively compiled stored procedures has been added, to make it easier to maintain applications. Note: In-Memory OLTP is available in Premium and Business Critical tier Azure SQL databases and elastic pools. To get started with In-Memory OLTP, as well as Columnstore in Azure SQL Database, see Optimize Performance using In-Memory Technologies in SQL Database. In this section This section includes the following topics: Links to other websites This section provides links to other websites that contain information about In-Memory OLTP on SQL Server. Video explaining In-Memory OLTP and demonstrating the performance benefits - 17-minute video, indexed - Video title: In-Memory OLTP in SQL Server 2016 - Published date: 2019-03-10, on YouTube.com. - Duration: 17:32 (See the following Index for links into the video.) - Hosted by: Jos de Bruijn, Senior Program Manager on SQL Server Demo can be downloaded: at the time mark 08:09, the video runs a demonstration twice. You can download the source code for the runnable performance demo that is used in the video, from the following link: The general steps seen in the video are as follows: - First the demo is run with a regular table. - Next we see a memory-optimized edition of the table being created and populated by a few clicks in SQL Server Management Studio (SSMS.exe). - Then the demo is rerun with the memory-optimized table. An enormous speed improvement is measured. Index to each section in the video See also
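As a hedged illustration of the kind of basic code the quickstart covers, the following sketch creates a memory-optimized table and a natively compiled stored procedure from C#. It assumes a SQL Server 2016 or later database that already has a MEMORY_OPTIMIZED_DATA filegroup; the object names and connection string are placeholders, not part of the documentation.

using System.Data.SqlClient;

class InMemoryOltpDemo
{
    static void Main()
    {
        const string connectionString = "Server=.;Database=DemoDb;Integrated Security=true";

        // Memory-optimized table with a nonclustered primary key index;
        // SCHEMA_AND_DATA makes the rows durable across restarts.
        const string createTable = @"
CREATE TABLE dbo.SalesOrder
(
    OrderId   INT       NOT NULL PRIMARY KEY NONCLUSTERED,
    OrderDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);";

        // Natively compiled stored procedure; the ATOMIC block options are required.
        const string createProc = @"
CREATE PROCEDURE dbo.InsertSalesOrder @OrderId INT, @OrderDate DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.SalesOrder (OrderId, OrderDate) VALUES (@OrderId, @OrderDate);
END";

        using (SqlConnection connection = new SqlConnection(connectionString))
        {
            connection.Open();
            using (SqlCommand command = new SqlCommand(createTable, connection))
                command.ExecuteNonQuery();
            using (SqlCommand command = new SqlCommand(createProc, connection))
                command.ExecuteNonQuery();
        }
    }
}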
https://docs.microsoft.com/en-us/sql/relational-databases/in-memory-oltp/in-memory-oltp-in-memory-optimization?view=sql-server-linux-2017
2020-01-17T18:46:49
CC-MAIN-2020-05
1579250590107.3
[]
docs.microsoft.com
This feature is part of the MLDB Pro Plugin and so can only be used in compliance with the trial license unless a commercial license has been purchased. The Behaviour Dataset is used to store behavioural data. It is designed for the following situations: The reason for these restrictions is that the underlying data structure stores (userid,feature,timestamp) tuples, rather than the (userid,key,value,timestamp) format for MLDB. A new "feature" is created for every combination of (key,value) which can lead to a lot of storage being taken up if a key has many values. It stores its data in a binary file format, normally with an extension of .beh, which is specified by the dataFileUrl parameter. This file format allows full random access to both the matrix and its inverse and is very efficient in memory usage. This dataset type is read-only; in other words, it can only load up datasets that were previously written from an artifact. See the beh.mutable dataset type for how to create these files. A new dataset of type beh named <id> can be created as follows: mldb.put("/v1/datasets/"+<id>, { "type": "beh", "params": { "dataFileUrl": <Url> } }) with the following key-value definitions for params: dataFileUrl - the URL of the .beh artifact file to load. The beh.mutable dataset type allows files of the given format to be created.
https://docs.mldb.ai/v1/plugins/pro/doc/BehaviourDataset.md.html
2020-01-17T19:04:04
CC-MAIN-2020-05
1579250590107.3
[]
docs.mldb.ai
Discrete sensors do not have thresholds. Their readings, displayed under the Current column in the SP CLI system sensors command output, do not carry actual meanings and thus are ignored by the SP. The Status column in the system sensors command output displays the status values of discrete sensors in hexadecimal format. Examples of discrete sensors include sensors for the fan, power supply unit (PSU) fault, and system fault. The specific list of discrete sensors depends on the platform. You can use the SP CLI system sensors get sensor_name command for help with interpreting the status values for most discrete sensors. The following examples show the results of entering system sensors get sensor_name for the discrete sensors CPU0_Error and IO_Slot1_Present: SP node1> system sensors get CPU0_Error Locating sensor record... Sensor ID : CPU0_Error (0x67) Entity ID : 7.97 Sensor Type (Discrete): Temperature States Asserted : Digital State [State Deasserted] SP node1> system sensors get IO_Slot1_Present Locating sensor record... Sensor ID : IO_Slot1_Present (0x74) Entity ID : 11.97 Sensor Type (Discrete): Add-in Card States Asserted : Availability State [Device Present] Although the system sensors get sensor_name command displays the status information for most discrete sensors, it does not provide status information for the System_FW_Status, System_Watchdog, PSU1_Input_Type, and PSU2_Input_Type discrete sensors. You can use the following information to interpret these sensors' status values. The System_FW_Status sensor's condition appears in the form of 0xAABB. You can combine the information of AA and BB to determine the condition of the sensor. For instance, the System_FW_Status sensor status 0x042F means "system firmware progress (04), ONTAP is running (2F)." The System_Watchdog sensor can have one of the following conditions: For instance, the System_Watchdog sensor status 0x0880 means a watchdog timeout occurs and causes a system power cycle. For direct current (DC) power supplies, the PSU1_Input_Type and PSU2_Input_Type sensors do not apply. For alternating current (AC) power supplies, the sensors' status can have one of the following values: For instance, the PSU1_Input_Type sensor status 0x0280 means that the sensor reports that the PSU type is 110V.
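As an illustration of how a packed status value such as 0x042F can be split into the two bytes described above (this snippet is not part of the ONTAP documentation):

using System;

class SensorStatusDecode
{
    static void Main()
    {
        int status = 0x042F;                 // example System_FW_Status reading
        int highByte = (status >> 8) & 0xFF; // 0x04 -> system firmware progress
        int lowByte = status & 0xFF;         // 0x2F -> ONTAP is running
        Console.WriteLine("high byte: 0x{0:X2}, low byte: 0x{1:X2}", highByte, lowByte);
    }
}

The meaning of each byte still has to be looked up in the tables for the specific sensor; the code only separates the AA and BB portions.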
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-B3846958-4732-4210-9059-9CAC8169C1A1.html
2020-01-17T19:16:09
CC-MAIN-2020-05
1579250590107.3
[]
docs.netapp.com
Bubble Chart A Bubble chart shows the data as points with coordinates and size defined by their items' values. You might think of a Bubble chart as a variation of the Scatter chart, in which the data points are replaced with bubbles. This allows a Bubble chart to display three-dimensional data — two values for the items' coordinates and one for their size. A Bubble chart is useful for visualizing different scientific relationships (e.g., economic, social, etc.). This chart type's x-axis is also numerical and does not require items. This help article describes how to set various properties for a Bubble chart; Example 1 at the end of the article shows the code used to create Figure 1. The size of the bubbles is scaled according to the values of the items in the current series. This means that items with different values that belong to separate series may be represented with the same bubble size, yet every bubble will be bigger than bubbles with smaller values from the same series and smaller than bubbles with higher values from the same series. Figure 1: A Bubble chart that shows correlation between sales, number of products and market share for different economic agents. You can customize a Bubble chart in several ways: The color of each series is controlled via the BackgroundColor property of the BubbleSeries > Appearance > FillStyle inner tag. The name that is shown in the legend is set via the Name property of the series. You can hide the series from the legend either by omitting it, or by setting the VisibleInLegend property to false. The position of each item on the y-axis is controlled by the Y property of the BubbleSeriesItem. The position according to the x-axis is set with the X property. The size of each item is controlled by the Size property of the BubbleSeriesItem. Each item can have a label and a tooltip that follows the common pattern defined in the DataFormatString property of the LabelsAppearance and TooltipsAppearance sections of the series. The format string uses the X of the item for the first placeholder, the Y for the second placeholder, the Size for the third placeholder and Tooltip for the fourth placeholder. The text in the tooltip can also be configured directly in the Tooltip property. You can also load custom text from data source fields in labels and tooltips by using the composite ClientTemplate property. The axes are also fully customizable — they automatically adjust to the incoming data, and you can explicitly set their minimum, maximum and step values. This is also the place where the crossing value with the other axis can be set and whether the axis will be reversed. The inner tags of the axis tag can control the major and minor grid lines in terms of color and size, and the labels can have a DataFormatString, position and visibility set through each inner tag's properties. The title, background colors and legend are controlled via the inner properties of the RadHtmlChart control and are common for all charts. You can find more information in the Server-side Programming Basic Configuration and in the Element structure articles. Example 1 shows how to create the Bubble chart shown in Figure 1. Not all properties are necessary. The RadHtmlChart will match the axes to the values if you do not declare explicit values, steps and tick properties. Example 1: Setting properties to configure the Bubble chart shown in Figure 1.
<telerik:RadHtmlChart <ChartTitle Text="Market Share Study"> </ChartTitle> <PlotArea> <Appearance> <FillStyle BackgroundColor="White"></FillStyle> </Appearance> <XAxis MinValue="0" MaxValue="30" Step="10"> <TitleAppearance Text="Number of Products"></TitleAppearance> </XAxis> <YAxis MinValue="0" MaxValue="80000" Step="10000"> <LabelsAppearance DataFormatString="${0}"></LabelsAppearance> <TitleAppearance Text="Sales"></TitleAppearance> </YAxis> <Series> <telerik:BubbleSeries> <Appearance FillStyle- </Appearance> <TooltipsAppearance DataFormatString="Percentage of Market Share: {2}%<br /> Number of products: {0}<br /> Sales: ${1}" /> <SeriesItems> <telerik:BubbleSeriesItem <telerik:BubbleSeriesItem <telerik:BubbleSeriesItem <telerik:BubbleSeriesItem <telerik:BubbleSeriesItem </SeriesItems> </telerik:BubbleSeries> </Series> </PlotArea> <Legend> <Appearance Position="Right"></Appearance> </Legend> </telerik:RadHtmlChart>
https://docs.telerik.com/devtools/aspnet-ajax/controls/htmlchart/chart-types/bubble-chart
2020-01-17T18:53:52
CC-MAIN-2020-05
1579250590107.3
[array(['images/htmlchart-bubblechart-simple-example.png', 'htmlchart-bubblechart-simple-example'], dtype=object)]
docs.telerik.com
WP7 Silverlight TextBoxes No Longer Scroll There's a change in the pipeline that will be hitting the public Windows Phone 7 images at some point soon (post the current Beta), which removes the ScrollViewer from a TextBox's template. What does this mean? Basically, long TextBoxes will no longer scroll when you gesture over them - the gesture is ignored and there is no visual reaction. Tapping remains the same (the keyboard pops up) as does tapping and holding (the enlarged caret is shown, and you can use this to, sort of, scroll). How do I work around this? Most scenarios do not require scrollable TextBoxes, but if yours does you can either place the TextBox inside a ScrollViewer (though this can cause some texture issues), or preferably re-add the ScrollViewer into your TextBox's template, so that it still scrolls as usual.
https://docs.microsoft.com/en-us/archive/blogs/oren/wp7-silverlight-textboxes-no-longer-scroll
2020-01-17T20:15:26
CC-MAIN-2020-05
1579250590107.3
[]
docs.microsoft.com
TreeViewDragDropService RadTreeView handles the whole drag and drop operation by its TreeViewDragDropService. It exposes the following public properties: ShowDropHint: Gets or sets a value indicating whether a drop hint should be shown. ShowDragHint: Gets or sets a value indicating whether a drag hint should be shown. DropHintColor: Gets or sets the color for the drop hint. Drag and Drop in Unbound Mode By default, RadTreeView supports node drag-and-drop functionality in unbound mode out of the box, both within the same RadTreeView and between two RadTreeView controls. It is necessary to set the RadTreeView.AllowDragDrop property to true and start reordering the nodes. Figure 1: Drag and drop in unbound mode Drag and Drop in Bound Mode When RadTreeView is in bound mode, drag and drop functionality is not supported out of the box because of the specificity of the DataSource collection of the source and target tree view. However, such functionality can be easily achieved by the TreeViewDragDropService. As a descendant of RadDragDropService, TreeViewDragDropService handles the whole drag and drop operation. The PreviewDragOver event allows you to control which targets the dragged node element can be dropped on. The PreviewDragDrop event allows you to get a handle on all the aspects of the drag and drop operation, the source (drag) treeview, the destination (target) control, as well as the node being dragged. This is where we will initiate the actual physical move of the node(s) from one tree view to the target control. An alternative approach of handling the mentioned events is to override the relevant methods of the service. A sample implementation is demonstrated in the Modify the DragDropService behavior help article.
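A minimal C# sketch of wiring up the two preview events follows. It assumes the service is reachable through the tree view element's DragDropService property; the exact event-argument types and member names should be confirmed against the API reference, so the argument handling is left in comments.

// Enable drag and drop and attach to the service's preview events.
radTreeView1.AllowDragDrop = true;
var dragDropService = radTreeView1.TreeViewElement.DragDropService;

dragDropService.PreviewDragOver += (sender, e) =>
{
    // Decide here whether the hovered element is a valid drop target, for
    // example by allowing drops only on tree node elements (see the API
    // reference for the CanDrop/DropTarget members exposed by the arguments).
};

dragDropService.PreviewDragDrop += (sender, e) =>
{
    // Move the dragged node's underlying data item into the target collection
    // of the bound DataSource here, then mark the event as handled so the
    // default (unbound) processing is skipped.
};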
https://docs.telerik.com/devtools/winforms/controls/treeview/drag-and-drop/treeviewdragdropservice
2020-01-17T19:05:16
CC-MAIN-2020-05
1579250590107.3
[array(['images/treeview-drag-and-drop-treeviewdragdropservice001.gif', 'treeview-drag-and-drop-treeviewdragdropservice 001'], dtype=object)]
docs.telerik.com
PolicyLevel Class Definition Represents the security policy levels for the common language runtime. This class cannot be inherited. public ref class PolicyLevel sealed [System.Runtime.InteropServices.ComVisible(true)] [System.Serializable] public sealed class PolicyLevel type PolicyLevel = class Public NotInheritable Class PolicyLevel - Inheritance - - Attributes - Remarks Important Starting with the .NET Framework 4, the common language runtime (CLR) is moving away from providing security policy for computers. We recommend that you use Windows Software Restriction Policies (SRP) or AppLocker as a replacement for CLR security policy. The information in this topic applies to the .NET Framework version 3.5 and earlier; it does not apply to the .NET Framework 4 and later. For more information about this and other changes, see Security Changes.
https://docs.microsoft.com/en-us/dotnet/api/system.security.policy.policylevel?redirectedfrom=MSDN&view=netframework-4.8
2020-01-17T20:27:01
CC-MAIN-2020-05
1579250590107.3
[]
docs.microsoft.com
You can use System Manager to run deduplication immediately after creating a FlexVol volume or an Infinite Volume, or to schedule deduplication to run at a specified time. Deduplication is a background process that consumes system resources during the operation; therefore, it might affect other operations that are in progress. You must cancel deduplication before you can perform any other operation.
https://docs.netapp.com/ontap-9/topic/com.netapp.doc.onc-sm-help-930/GUID-A8911197-4761-4405-99F0-6841DAAB20A7.html
2020-01-17T18:57:45
CC-MAIN-2020-05
1579250590107.3
[]
docs.netapp.com
, template_name='BASIC', description="This is a sample basic job. It can't actually compute anything.", extended_properties={ 'app_specific_property': 'default_value', } ) Before a controller returns a response, the job must be saved or else all of the changes made to the job will be lost (executing the job automatically saves it). If submitting the job takes a long time (e.g. if a large amount of data has to be uploaded to a remote scheduler) then it may be best to use AJAX to execute the job. API Documentation - class tethys_compute.models.BasicJob(*args, **kwargs) Basic job type. Use this class as a model for subclassing TethysJob.
http://docs.tethysplatform.org/en/stable/tethys_sdk/jobs/basic_job_type.html
2020-01-17T19:06:31
CC-MAIN-2020-05
1579250590107.3
[]
docs.tethysplatform.org
What is rejected and skipped requests? If you're seeing rejected or skipped requests on your configuration then it usually means that there is something wrong with the configuration. What's a request? A request is a notification from your connected service. If you're monitoring your email list for new subscribers then the connected service (like MailChimp) will send a request to Coupon Carrier each time a new subscriber is added. This request can be rejected or skipped for different reasons in Coupon Carrier. You can view rejected and skipped requests by clicking on the number on the main configuration list: In many cases, the reason or error can help you determine why the requests failed. If you're still not sure why this failed or if you're seeing some strange errors then please contact us for more information.
https://docs.couponcarrier.io/article/36-what-is-rejected-and-skipped-requests
2020-01-17T18:35:15
CC-MAIN-2020-05
1579250590107.3
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/595227fa0428637ff8d41632/images/5a72de612c7d3a4a4198b2c7/file-bm5qSa4DO3.png', None], dtype=object) ]
docs.couponcarrier.io
Managing Security Context Constraints Added a new Example Security Context Constraints Settings section. All topics New guide comprised of topics previously found in the Installation and Configuration documentation. Managing Nodes Added information about the --dry-run option for oc adm drain. Handling Out of Resource Errors Adjusted the math in the Example Scenario. Overcommitting Updated the Tune Buffer Chunk Limit section with instructions to use file buffering. Garbage Collection Added clarifying details about the default behavior of garbage collection. Managing Images Added Triggering Updates on Image Stream Changes section. Writing APBs → Getting Started Added a step to use openshift3/apb-base in the FROM directive. Architecture → Routes Added information on the ROUTER_LISTEN_ADDR and ROUTER_METRICS_TYPE variables to the Router Environment Variables section. Architecture → HAProxy Router Plug-in New section describing the HAProxy Template Router Metrics. Managing Networking Changed the Disabling Host Name Collision Prevention For Routes and Ingress Objects section to mention the ability to give users the rights to edit host names on routes and ingress objects. Optimizing Storage Added a new Choosing a Graph Driver section. Cluster Limits Added a new Planning Your Environment According to Application Requirements section. Pods and Services Added a new section on Pod Restart Policy. Infrastructure Components → Kubernetes Infrastructure Changed the maximum number of nodes from 300 to 2000. Build Inputs Added an example of a Dockerfile referencing secret data in the Docker Strategy section. Updated git clone behavior in the Git Source section. Templates Corrected the template file example in the Exposing Object Fields section. OpenShift Container Platform 3.7 Release Notes Added release notes for RHBA-2018:0113 - OpenShift Container Platform 3.7.23 Bug Fix and Enhancement Update. Added release notes for RHBA-2018:0076 - OpenShift Container Platform 3.7.14-9 Images Update. Jenkins Clarified ConfigMap and PodTemplate sync definitions in Using the Jenkins Kubernetes Plug-in to Run Jobs. Image Signatures Added information on configuring automatic image signature import. Added release notes for RHBA-2017:3464 - OpenShift Container Platform 3.7.14 Bug Fix and Enhancement Update. Fixed command error in the Migrating from ovs-multitenant to ovs-networkpolicy section. OpenShift Container Platform 3.7 Initial Release New guide providing a walkthrough and reference material on developing your own Ansible Playbook Bundle (APB).
https://docs.openshift.com/container-platform/3.7/welcome/revhistory_full.html
2020-01-17T18:59:27
CC-MAIN-2020-05
1579250590107.3
[]
docs.openshift.com
exportImage Exports the chart as an image. The result can be saved using kendo.saveAs. The export operation is asynchronous and returns a promise. The promise will be resolved with a PNG image encoded as a Data URI. Parameters options Object (optional)Parameters for the exported image. options.width StringThe width of the exported image. Defaults to the chart width. options.height StringThe height of the exported image. Defaults to the chart chart to an image <div id="chart"></div> <script> $("#chart").kendoChart({ transitions: false, series: [{ type: "column", data: [1, 2, 3] }, { type: "line", data: [2, 1, 3] }, { type: "area", data: [3, 1, 2] }] }); var chart = $("#chart").getKendoChart(); chart.exportImage().done(function(data) { kendo.saveAs({ dataURI: data, fileName: "chart.png" }); }); </script>
https://docs.telerik.com/kendo-ui/api/javascript/dataviz/ui/chart/methods/exportimage
2020-01-17T18:54:40
CC-MAIN-2020-05
1579250590107.3
[]
docs.telerik.com
A WordPress child theme is a theme that inherits the functionality of another theme called the parent theme. Child themes allow you to modify, or add to the functionality of that parent theme. —WordPress Codex: Child Themes Bookshop is a child theme for Storefront, the official WooCommerce theme. It features a classic design that presents books and other collectible products, such as wine. Installation Bookshop is a Storefront child theme, so you first need to install and set up Storefront and then Bookshop. - Download Storefront for free to get the Storefront theme file. - Download Bookshop from your WooCommerce.com account at My Downloads. - On your website, go to Appearance > Themes and click the Add New button. - Click Upload to upload the Storefront .zip file from step 1. - Go to Appearance > Themes to Activate. - Repeat steps 3-5 for the Bookshop theme from step 2. More information at: Installing and Configuring Storefront and Installing a Theme. Activate the Theme Key After installing your theme, a notification appears to activate your key via the WooCommerce Helper for the Bookshop theme. Follow the instructions at: Adding Keys. Setup and Configuration Once installation is complete and the key is activated for Bookshop, it's time to configure and set up your themes. Bookshop: Author/Format On the demo, the product archives display an author and a format. To display these on your store, you need to set up two attributes and apply them to products. The attributes are: - Format – Displays paperback, hardcover, special edition, etc. - Writer – Displays the author. The term 'author' is a protected value in WordPress core and cannot be used. Once added, these attributes are displayed automatically. If your attributes do not display, then this is because the theme looks for the slug terms: format writer Here is how it needs to be set: More info at: Managing Product Taxonomies. Demo Content It's possible to import WooCommerce Dummy Data to populate your site with demo products as a starting point. Note: We do not supply the exact images you see in our Bookshop demo due to copyright restrictions. Images used in this demo can be downloaded from the Open Library project. FAQ Can I change the attributes that are displayed on product archives? You can! By default, Bookshop will display the Author attribute (pa_author) and the Format attribute (pa_format). You can use the bookshop_author_attribute filter to change which attribute is displayed above the product title and the bookshop_format_attribute filter to change which attribute is displayed beneath the product title. For example, you could change the author attribute (the one displayed above the product title) to display the term from the 'newattribute' attribute; just change 'newattribute' to whatever you'd like to use. I'm using the Storefront Designer extension but noticed some settings are missing. On the Bookshop demo there is a large drop down menu, how do I enable that? The demo uses the Storefront Mega Menu to create the large drop down menu. Can I create a child theme of Bookshop? Since Bookshop is already a child theme, you are unable to do this. Any changes should be added to the Bookshop theme itself.
https://docs.woocommerce.com/document/bookshop-storefront-child-theme/
2020-01-17T18:32:28
CC-MAIN-2020-05
1579250590107.3
[array(['https://docs.woocommerce.com/wp-content/uploads/2016/04/storefront-bookshop-theme.png', 'storefront bookshop theme'], dtype=object) array(['https://docs.woocommerce.com/wp-content/uploads/2016/04/Screen-Shot-2016-04-21-at-11.49.04.png', 'The author and format attributes on display'], dtype=object) ]
docs.woocommerce.com
nmrglue.proc_autophase¶ Automated phase correction These functions provide support for automatic phasing of NMR data. They consist of the core autops function which performs the optimisation and a set of private functions for calculating a spectral phase quality score for a provided spectrum. This module is imported as nmrglue.proc_autophase and can be called as such.
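As a rough illustration of how this module is typically used (the file name below is a placeholder, and 'acme' is one of the scoring methods the module provides), an automatic phase correction run might look like this:
import nmrglue as ng

# Read a Fourier-transformed 1D spectrum (placeholder file name)
dic, data = ng.pipe.read("test.fft")

# Optimize zero- and first-order phases automatically using the ACME score
phased = ng.proc_autophase.autops(data, "acme")

# Write the phased spectrum back out
ng.pipe.write("test_phased.fft", dic, phased, overwrite=True)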
https://nmrglue.readthedocs.io/en/latest/reference/proc_autophase.html
2020-01-17T19:29:55
CC-MAIN-2020-05
1579250590107.3
[]
nmrglue.readthedocs.io
When you read the user assistance of a product as broad and multifaceted as Sitefinity CMS, sometimes you struggle to find the term or topic that you need at the moment. The Reference section aims to help you find what you need through the path most convenient for you.
https://docs.sitefinity.com/reference
2017-09-19T17:15:39
CC-MAIN-2017-39
1505818685912.14
[]
docs.sitefinity.com
You can batch receive items through a simple or an advanced interface. The simple interface does not allow you to add barcodes or use the copy template. These items are also not visible in the OPAC. The advanced interface enables you to use the copy templates that you created, add barcodes, and make items OPAC visible and holdable. You can access both Batch Receive interfaces from two locations in the ILS. From the Subscription Details screen, you can click Batch Item Receive. You can also access these interfaces by opening the catalog record for the serial, and clicking Actions for this Record → Serials Batch Receive. Follow these steps to receive items in batch in a simple interface. Evergreen will now display an alert if a duplicate barcode is entered in the Serials Batch Receive interface. If a staff member enters a barcode that already exists in the database while receiving new serials items in the Serials Batch Receive interface, an alert message will pop up letting the staff member know that the barcode is a duplicate. After the staff member clicks OK to clear the alert, he or she can enter a new barcode.
http://docs.evergreen-ils.org/2.11/_batch_receiving.html
2017-09-19T17:14:11
CC-MAIN-2017-39
1505818685912.14
[]
docs.evergreen-ils.org
The following statement could be used to grab the data in the folder and dump it with the admin account as the owner and the first folder in your system:
SELECT 1 as owner, name, description, data, 1 as folder
FROM reporter.template;
Or use the following to capture your folder names for export:
SELECT 1 as owner, t.name, t.description, t.data, f.name as folder
FROM reporter.template t
JOIN reporter.template_folder f ON t.folder = f.id;
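Purely as an illustration (not part of the Evergreen documentation), the second query could also be run from a small script and dumped to CSV with psycopg2; the connection settings and output file name below are placeholders:
import csv
import psycopg2

# Placeholder connection settings; adjust for your Evergreen database.
conn = psycopg2.connect(host="localhost", dbname="evergreen", user="evergreen")
cur = conn.cursor()
cur.execute("""
    SELECT 1 AS owner, t.name, t.description, t.data, f.name AS folder
    FROM reporter.template t
    JOIN reporter.template_folder f ON t.folder = f.id
""")

with open("report_templates.csv", "w", newline="") as fh:
    writer = csv.writer(fh)
    writer.writerow([col[0] for col in cur.description])  # column names
    writer.writerows(cur.fetchall())

cur.close()
conn.close()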
http://docs.evergreen-ils.org/2.11/_dump_data_with_an_sql_statement.html
2017-09-19T17:14:13
CC-MAIN-2017-39
1505818685912.14
[]
docs.evergreen-ils.org
To create an order, choose 'New Order' in the 'Planning' section (left panel). The new order menu consists of 4 blocks: order parameters, map, contact details, and file attachment. An order may contain the following parameters: The indicated address is shown on the map by a marker. If necessary, a delivery destination address can be adjusted by dragging the marker to the corresponding point on the map. Moreover, a destination address can be added directly from the map. To do so, click the corresponding destination point on the map. A separate block (bottom-right) is devoted to contact details such as client name, phone number, and e-mail. A phone number and an e-mail indicated by a flag are used to inform a client of a courier's arrival. Contact details are available to an operator at the planning and delivery stages. Contact details are also displayed for a courier in the mobile application. A file (for example, a bill or consignment document) can be attached to an order. The attached file can be viewed by a courier in the mobile application. The file attachment block is situated below the order parameters. To attach a file, click the corresponding button. Click 'Save' upon the form's completion. The saved order is moved to the 'Planning' section.
http://docs.wialon.com/en/local/logistics/orders
2017-09-19T17:10:47
CC-MAIN-2017-39
1505818685912.14
[]
docs.wialon.com
SignSecureChannel HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters Description Determines whether outgoing secure channel traffic is signed. This entry is used when negotiating the conditions of a secure channel with a domain controller. Channel traffic security is determined jointly by the value of this entry and the values of the RequireStrongKey, requiresignorseal, and sealsecurechannel entries. This entry is used only when the value of requiresignorseal is 0. Otherwise, the system requires that traffic at least be signed, and it does not consult this entry. Also, because encryption is more secure than signing, this entry is superseded when the value of sealsecurechannel is 1.
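As a side note that is not part of the original registry reference, the current value of this entry can be checked from Python on a Windows machine with the standard winreg module; the value may simply be absent if it has never been set explicitly:
import winreg

KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Netlogon\Parameters"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    try:
        value, value_type = winreg.QueryValueEx(key, "SignSecureChannel")
        print("SignSecureChannel =", value)  # REG_DWORD, 0 or 1
    except FileNotFoundError:
        print("SignSecureChannel is not set; the system default applies")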
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-2000-server/cc937928(v=technet.10)
2018-04-19T14:24:04
CC-MAIN-2018-17
1524125936969.10
[array(['images%5ccc938288.regentry_reltopic(en-us,technet.10', 'Page Image'], dtype=object) array(['images%5ccc938288.regentry_reltopic(en-us,technet.10', 'Page Image'], dtype=object) array(['images%5ccc938288.regentry_reltopic(en-us,technet.10', 'Page Image'], dtype=object) ]
docs.microsoft.com
Checklist: Install, Configure, and Get Started With a Fax Server Applies To: Windows Server 2008 R2.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc753416(v=ws.11)
2018-04-19T14:08:54
CC-MAIN-2018-17
1524125936969.10
[]
docs.microsoft.com
Editing a Template A template is like a scene: you can open it and edit it like any regular project. If you want to make some modifications in your templates, you can edit them using the Edit Template command. To edit a template, use the Edit Template command; a new Animate Pro application opens.
https://docs.toonboom.com/help/animate-pro/Content/HAR/Stage/010_Library/037_H2_Editing_a_Template.html
2018-04-19T13:55:44
CC-MAIN-2018-17
1524125936969.10
[]
docs.toonboom.com
- Do one of the following: - In the Export to Harmony window, click the Export button. - Select Commands > Export to Harmony.
https://docs.toonboom.com/help/harmony-15/advanced/import/export-fla-file.html
2018-04-19T13:54:02
CC-MAIN-2018-17
1524125936969.10
[array(['../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../Resources/Images/HAR/Install/WebCC/img_comp.png', None], dtype=object) array(['../Resources/Images/HAR/Stage/Import/HAR12/HAR12_chicken.png', None], dtype=object) ]
docs.toonboom.com
Toon Boom Harmony 15.0.1 Release Notes IMPORTANT: An important fix was made to the licensing mechanism of Harmony in Harmony 15.0.1. If your studio uses a license server, you must upgrade Harmony 15.0 to Harmony 15.0.1 on the license server before upgrading your workstations. Here is the list of changes in Harmony Advanced 15.0.1, build 13289:
https://docs.toonboom.com/help/harmony-15/advanced/release-notes/harmony/harmony-15-0-1-release-notes.html
2018-04-19T13:54:32
CC-MAIN-2018-17
1524125936969.10
[array(['../../Resources/Images/_ICONS/Home_Icon.png', None], dtype=object) array(['../../Resources/Images/_ICONS/Producer.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Harmony.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyEssentials.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyAdvanced.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/HarmonyPremium.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Paint.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/StoryboardPro.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Activation.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/System.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Resources/Images/_ICONS/Adobe_PDF_file_icon_32x32.png', None], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.toonboom.com
Functions: julia> function f(x,y) x + y end f (generic function with 1 method) As with variables, Unicode can also be used for function names: julia> ∑(x,y) = x + y ∑ (generic function with 1 method) julia> ∑(2, 3) 5 Argument Passing Behavior J.: These functions are included in the Base.Operators module even though they do not have operator-like names. Anonymous Functions Functions in Julia are first-class objects: they can be assigned to variables, and called using the standard function call syntax from the variable they have been assigned to. They can be used as arguments, and they can be returned as values. They can also be created anonymously, without being given a name, using either of these syntax Array{Float64,1}: Array{Int64,1}: via an explicit usage of the return keyword: function foo(a,b) return a+b, a*b end This has the exact same effect as the previous definition of foo. Varargs Functions Array{Int64,1}: 3 4 julia> bar(1,2,x...) (1, 2, (3, 4)) julia> x = [1,2,3,4] 4-element Array{Int64,1}: 1 2 3 4 julia> bar(x...) (1, 2, (3, 4)) Also, the function that arguments are spliced into need not be a varargs function (although it often is): julia> baz(a,b) = a + b; julia> args = [1,2] 2-element Array{Int64,1}: 1 2 julia> baz(args...) 3 julia> args = [1,2,3] 3-element Array{Int64,1}: 1 2 3 julia> baz(args...) ERROR: MethodError: no method matching baz(::Int64, ::Int64, ::Int64) Closest candidates are: baz(::Any, ::Any) at none:1 As you can see, if the wrong number of elements are in the spliced container, then the function call will fail, just as it would if too many arguments were given explicitly. Optional Arguments In many cases, function arguments have sensible default values and therefore might not need to be passed explicitly in every call. For example, the library function parse(T, num, base) interprets a string as a number in some base. The base argument defaults to 10. This behavior can be expressed concisely as: function parse(T, num, base=10) ### end With this definition, the function can be called with either two or three arguments, and 10 is automatically passed when a third argument is not specified: julia> parse(Int,"12",10) 12 julia> parse(Int,"12",3) 5 julia> parse(Int,"12") 12 Optional arguments are actually just a convenient syntax for writing multiple method definitions with different numbers of arguments (see Note on Optional and keyword Arguments).64=1) ### end Extra keyword arguments can be collected using ..., as in varargs functions: function f(x; y=0, kwargs...) ### end Inside f, kwargs will be a collection of (key,value) tuples, where each key is a symbol. Such collections can be passed as keyword arguments using a semicolon in a call, e.g. f(x, z=1; kwargs...). Dictionaries can also be used for this purpose. One can also pass (key,value) tuples, or any iterable expression (such as a => pair) that can be assigned to such a tuple, explicitly after a semicolon. For example, plot(x, y; (:width,2)) and plot(x, y; :width => 2) are equivalent to plot(x, y, width=2). This is useful in situations where the keyword name is computed at runtime.. Evaluation Scope of Default Values Optional and keyword arguments differ slightly in how their default values are evaluated. When optional argument default expressions are evaluated, only previous arguments are in scope. In contrast, all the arguments are in scope when keyword arguments default expressions are evaluated. 
For example, given this definition: function f(x, a=b, b=1) ### end the b in a=b refers to a b in an outer scope, not the subsequent argument b. However, if a and b were keyword arguments instead, then both would be created in the same scope and the b in a=b would refer to the subsequent argument b (shadowing any b in an outer scope), which would result in an undefined variable error (since the default expressions are evaluated left-to-right, and b has not been assigned yet). Array{Float64,1}: 1.0 2.0 3.0 julia> sin.(A) 3-element Array{Float64,1}: 0.841471 0.909297 0.14112 Of course, you can omit the dot if you write a specialized "vector" method of f, e.g. via f(A::AbstractArray) = map(f, A), and this is just as efficient as f.(A). But that approach requires you to decide in advance which functions you want to vectorize. Array{Float64,1}: 13.4248 17.4248 21.4248 julia> f.(A, B) 3-element Array{Float64,1}:[2:end] .= sin.(Y), then it translates to broadcast! on a view, e.g. broadcast!(sin, view(X, 2:endof Array{Float64,1}: 0.514395 -0.404239 -0.836022 -0.608083 Binary (or unary) operators like .+ are handled with the same mechanism: they are equivalent to broadcast calls and are fused with other nested "dot" calls. X .+= Y etcetera is equivalent to X .= X .+ Y and results in a fused in-place assignment; see also dot operators..
https://docs.julialang.org/en/stable/manual/functions/
2018-04-19T13:52:27
CC-MAIN-2018-17
1524125936969.10
[]
docs.julialang.org
public class IncompatibleSystemException extends GemFireException IncompatibleSystemException is thrown when a new GemFire process tries to connect to an existing distributed system and its version is not the same as that of the distributed system. In this case, the new member is not allowed to connect to the distributed system. As of GemFire 5.0 this exception should be named IncompatibleDistributedSystemException.
http://gemfire-91-javadocs.docs.pivotal.io/org/apache/geode/IncompatibleSystemException.html
2018-04-19T13:28:04
CC-MAIN-2018-17
1524125936969.10
[]
gemfire-91-javadocs.docs.pivotal.io
System Testing From Joomla! Documentation Revision as of 10:43, 24 May 2010 by Betweenbrain (Talk | contribs) System testing is an essential part of a good Quality Control program. For a good general discussion of system testing, visit the Wikipedia article. System Testing in Open: - System tests help highlight cases where changes in one element of the system might cause breakage in other, unexpected areas. - System tests help clearly specify how the application should behave. System Testing versus Unit Testing. Test Objects. System Testing in Joomla!). Writing System Tests, click on Tools -> Selenium IDE. You will notice that on the right of the window that appears near the top, that there is a red circle that is highlighted. This is the start/stop recording button. When you start Selenium IDE, recording starts right away. For our first test, all we will do is load up the home page and check to make sure that all the items in the Main Menu are present. The commands that we use to check that items are present are called assertions. Basically, you perform your actions and make assertions about the results. In our case, we are going to use the command assertText. This command will read the text of a specified element and ensure it matches a particular value. our first test is done. To run your test, you use the icons on the bar between the Base URL address bar and the Table and Source tabs. The two important buttons are the buttons. Tips & Tricks - Undo all changes your test may make: All tests should make all efforts to leave the sample data and test site content as it was before the test started. In some cases, a test may break due to a change in the test site's content. For example, if you add a test user, delete the test user at the end of your test or series of tests that require that user. - Maximize flexibility through test methods: In many tests, similar or identical actions are repeated. In these cases, these actions can be reproduced as a granular test method added to /tests/system/SeleniumJoomlaTestCases.php. Good examples include: - doAdminLogin - gotoAdmin - gotoSite (see Running System Tests Joomla Core Selenium testcases.
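The article's tests are written for Selenium IDE and Joomla's PHP test cases; purely as a sketch of the same assertion idea, here is how the "load the home page and check the Main Menu" test might look with Selenium WebDriver's Python bindings (the URL and CSS selector are placeholders for your own test site):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    # Load the test site's home page (placeholder URL)
    driver.get("http://localhost/joomla/")

    # Mirror Selenium IDE's assertText: check that the Main Menu shows "Home"
    main_menu = driver.find_element(By.CSS_SELECTOR, "ul.menu")
    assert "Home" in main_menu.text, "Main Menu is missing the Home item"
finally:
    driver.quit()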
https://docs.joomla.org/index.php?title=System_Testing&oldid=28050
2015-10-04T09:23:11
CC-MAIN-2015-40
1443736673081.9
[]
docs.joomla.org
Difference between revisions of "JDocumentRendererModule" From Joomla! Documentation Latest revision as of 12:32. JDocumentRendererModule is responsible for rendering the output from a single module. It is called whenever a <jdoc:include /> statement is encountered in the document template. Defined in libraries/joomla/document/html/renderer/module.php Importing: jimport( 'joomla.document.html.renderer.module' );
https://docs.joomla.org/index.php?title=API15:JDocumentRendererModule&diff=prev&oldid=26100
2015-10-04T09:30:40
CC-MAIN-2015-40
1443736673081.9
[]
docs.joomla.org
scipy.sparse.linalg.expm_multiply¶ - scipy.sparse.linalg.expm_multiply(A, B, start=None, stop=None, num=None, endpoint=None)[source]¶ Compute the action of the matrix exponential of A on B. Notes The optional arguments defining the sequence of evenly spaced time points are compatible with the arguments of numpy.linspace. The output ndarray shape is somewhat complicated so I explain it here. The ndim of the output could be either 1, 2, or 3. It would be 1 if you are computing the expm action on a single vector at a single time point. It would be 2 if you are computing the expm action on a vector at multiple time points, or if you are computing the expm action on a matrix at a single time point. It would be 3 if you want the action on a matrix with multiple columns at multiple time points. If multiple time points are requested, expm_A_B[0] will always be the action of the expm at the first time point, regardless of whether the action is on a vector or a matrix. References
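A small self-contained sketch of both call patterns described above, using a random sparse matrix purely for illustration:
import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import expm_multiply

A = sparse_random(200, 200, density=0.01, format="csr", random_state=0)
b = np.ones(200)

# Single time point: acting on one vector gives a 1-D result
y = expm_multiply(A, b)

# Evenly spaced time points: the first axis indexes the time points
ys = expm_multiply(A, b, start=0.0, stop=1.0, num=5, endpoint=True)

print(y.shape)   # (200,)
print(ys.shape)  # (5, 200)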
http://docs.scipy.org/doc/scipy/reference/generated/generated/scipy.sparse.linalg.expm_multiply.html
2015-10-04T09:14:38
CC-MAIN-2015-40
1443736673081.9
[]
docs.scipy.org
Difference between revisions of "Creating a CSS Drop down Menu" From Joomla! Documentation Revision as of 13:19, 5 April 2009 (view source)Couch Guy (Talk | contribs)← Older edit Latest revision as of 17:30, 27 May 2013 (view source) Tom Hutchison (Talk | contribs) m (Hutchy68 moved page Talk:Creating a CSS Drop down Menu to J1.5 talk:Creating a CSS Drop down Menu: archiving) (3 intermediate revisions by 3 users not shown)Line 1: Line 1: −Where do I put the css code exactly? −Travis − −Well! This is a tough issue. The template_css.css file is were it is put. I put this ccs code at the end. − −It works put it only shows the text in orange (I am not sure why) and it is a horizontal layout of the main menu items. − −I put the main menu to the left and it wants to wrap the menu items looks goofy. − − − −With out a deeper undesrstanding of the css or the direction from the author I am at a loss as to how to change these defects. − −Conch Guy Latest revision as of 17:30, 27 May 2013 Retrieved from ‘’
https://docs.joomla.org/index.php?title=J1.5_talk:Creating_a_CSS_Drop_down_Menu&diff=99653&oldid=13828
2015-10-04T10:39:25
CC-MAIN-2015-40
1443736673081.9
[]
docs.joomla.org
Language switcher frontend From Joomla! Documentation Revision as of 02:52, 4 June 2011 by Infograf768 (Talk | contribs) Language switcher idea The Sw.:
https://docs.joomla.org/index.php?title=Language_switcher_frontend&oldid=59215
2015-10-04T09:55:20
CC-MAIN-2015-40
1443736673081.9
[array(['/images/d/da/Compat_icon_1_6.png', 'Joomla 1.6'], dtype=object)]
docs.joomla.org
The following information is for online merchants transitioning to Miva Merchant 5 from any version of Miva Merchant 4. If you have a Miva Merchant 4 store, contact Miva Merchant, or your e-commerce hosting company, for information on upgrading your Miva Merchant license. New users, who are just starting out in Miva Merchant 5, can disregard these instructions. It is easy to transition from Miva Merchant 4 to Miva Merchant 5. Basically, you will use a new, free module to export your store to a file. When you run Miva Merchant 5, the entire store will be imported. Copy the image files to a new directory, and install any new modules, and you're off and running. The following instructions will guide you through the process. A store is moved to Miva Merchant 5 via a provisioning file, provide.xml. This is a single XML file that contains all the information from a given store, including the products, categories, image references, customers, and so on. Note, order information, including batches, is not transferred to the new store. The URLs to the pages within your store will change. Information is included here to guide you in updating links and search engines. Due to the significant differences between Miva Merchant 5 and previous versions, third-party modules have significant differences. Some products you have purchased in the past may be redundant. For others, contact the module developers to assess compatibility issues and/or the need for upgrades. The Miva Merchant 4 store is left intact following this process. It is not "upgraded" in the way that software applications are upgraded. That is, everything about the store (except for order information) is exported via the provisioning file, then imported into a new Miva Merchant 5 store. The upgrade license allows the two stores to be run side-by-side for up to six months, to give ample time for the transition. Note that orders placed in the old Miva Merchant 4 store are not transferred to the new Miva Merchant 5 store. After you have upgraded to the new store, leave the old store in maintenance mode. Additional stores are handled in the same way, one after another. That will be discussed below. If your Miva Merchant 5 store is part of an e-commerce hosting package, the Web hosting company will have installed Miva Merchant 5 for you. If your store is licensed direction from Miva Merchant, download the files from the FTP site, following the instructions in the upgrade e-mail message sent to you from Miva Merchant. If your store is hosted by an e-commerce Web host, that company may have preconfigured the setup information for you. Otherwise, go to the URL given at the end of the installation, and the setup process will begin automatically. Before the store can be created via the provisioning file, the user who was the store manager in Miva Merchant 4 must exist in the new domain-level users list in new Miva Merchant 5. A store cannot be imported if the manager is not present in the users list. The store manager's password in Miva Merchant 5 does not need to match their old password. If the Miva Merchant 5 domain owner is also the store manager, they will already exist as a user. In that case, all you need to do is ensure that the spelling and capitalization is the same as it was in the Miva Merchant 4 store. To add a user, in the Miva Merchant 5 administration interface, locate Users [Add] in the left navigation area. Click [Add]. 
If you want to designate a new store manager, either make that change in your existing store, before creating the provisioning file, or add the outgoing manager to the users list in Miva Merchant 5 for the purpose of importing the store, then specify a new manager after the store has been imported. Put the Miva Merchant 4 store into maintenance mode. Detailed instructions on working with Maintenance Mode are available in the Edit Store / Maintenance Mode Help topic.. If orders were to be placed in both stores, inventory levels would get out of synch, and you would have two sets of data to maintain, and two sets of incoming orders to manage. There is a 6-month transition period, during which your Miva Merchant 4 store remains fully functional. Once you are secure with the new Miva Merchant 5 store, and have completed any unfinished business in the old store, you can delete the store. Also see the suggested meta refresh technique, described below under Handle Links Coming in to The Store. By using this technique, you can direct shoppers to your new store. If your store is part of a Web hosting package, contact the hosting company to get the upgrade module you will need for this step. If your store is licensed direction from Miva Merchant, download the appropriate module upgrade5.mv (for Miva Merchant versions 4.00 to 4.13) or upgrade5.mvc (for Miva Merchant versions 4.14 to 5.00 ) from the Miva Merchant FTP site or the Miva Merchant Downloads page. You should have received the FTP information in an e-mail message from Miva Merchant. Save the file to any convenient location on your computer, and make a note of it. In the left navigation area, at the domain level, expand the Modules menu, and click Add Module. Click the round Upload File button to locate the file on your computer. Click Add to add the export module to your store. Once the appropriate module is installed at the domain level, it will be available in the Utilities area of each store in the domain. In the left navigation area, under Stores, expand the menu for the store you want to export. Expand the Utilities submenu, then the Export Data submenu. Select Export Store to Miva Merchant 5.x. Click Export, or change the name of the export file and then click Export. Check marks will indicate progress through list of tasks as they are completed. When the entire file has been created, Export Complete will appear at the bottom of the task list. Note that if you name the file something other than provide.xml, it will need to be renamed back to provide.xml before it can be imported into your Miva Merchant 5 store, but you may want to change the name here to reflect the store name, such as MyStoreNo3Export.xml, if there are several stores in the domain. In the store data directory for your Miva Merchant 4 store, there will be an export folder, such as DATA\Merchant2\00000001\export, for the first store. In that directory, locate the file provide.xml (or a file by the name you specified). Copy this file to your Miva Merchant 5.00 root data directory - the same directory where the Merchant2 and Merchant5 directories are located. If you specified a different name for the file earlier, rename it now to provide.xml. When you log in to the new administration interface, Miva Merchant will detect the provide.xml file, and will automatically import the store. A message will appear to inform you that Miva Merchant is performing automated configuration operations. Do not close the browser window during the import process. 
This will create the store, and populate it with the products, categories, settings, an so on that you exported from your Miva Merchant 4 store. Any errors (invalid codes, etc.) will be recorded in the Miva Merchant 5.00 data directory in a file named provide.log. Once the file has been processed, it will be renamed provide.xml-processed-yyyymmdd-hhmm, where yyyymmdd is the date, and hhmm is the time that the provide.xml file was processed. For each Miva Merchant 4 store, there will be a graphics directory containing all the images used in the store, including product images, category images, and the images used for buttons and navigation features. There is a separate graphics directory for each store, such as HTML (or Web)\Merchant2\graphics\00000001 for the first store. So that your images will appear in the new Miva Merchant 5 store, copy all the files in the store graphics directory into the corresponding new store graphics directory, such as HTML (or Web)\mm5\graphics\00000001. Note - Do not copy the entire \graphics directory, only each individual store directory. If you were to replace the entire Miva Merchant 5 graphics directory with the one from Miva Merchant 4, it would overwrite the images used throughout the administration interface and wizards. Because Miva Merchant 5 and previous versions use significantly different database technology, most third-party modules designed for earlier versions will need to be upgraded. Some modules may no longer be necessary. Before adding any third-party module to your Miva Merchant 5 system, contact the module developer to assess compatibility issues and/or the need for upgrades. When you are ready to add new modules to the system, in the left navigation area, at the domain level, locate Modules [Add]. Click [Add]. Click the Upload button to locate the module file on your computer. Click Add to add the module to your store. Repeat for each new module. Your own site, and others, including search engines, may link to your store, or even to individual product screens. Take advantage of these sources of shoppers by providing an easy way for them to continue to find your store. If your Miva Merchant store is incorporated into a Web site, update any links to reflect the new store URLs. To learn the URL for a store screen, go to that page directly, and make a note of it, or click the Links button, available throughout the administration interface, in the upper-right corner of many screens. You can alert others to the change, and give them an opportunity to update their links or bookmarks, by using a HTML meta refresh tag in the maintenance message for the old store. This works much like a paper Change of Address card for a brick and mortar store. It alerts people to the fact you have a new address, and it is only available for a limited time. When a shopper attempts to access any area of your store, they will see the maintenance message. After a few moments (an interval which you specify), they can be automatically taken to the storefront screen for your Miva Merchant 5 store. Here is a simple example of a maintenance message that will send a visitor on to your new storefront. Insert your new store URL in place of the example URL shown here. Notice that content="10", in the first line, gives an approximately 10 second pause before going to the new screen. You can specify any amount of time you like. Keep in mind that people need longer to read text than you might expect. <meta http- %store_name% has moved to a new location. 
<br><br> Please update your bookmarks, and visit us at:<br> <a href="">Visit Our New Store</a><br> Click the link above, or wait to be taken to the above site in about 10 seconds. The message will appear, and send visitors to your new store, during the 6-month transition period, while your old store is left in maintenance mode. If you manage your own server, you can use a server redirect. If your store is provided as part of an e-commerce package, contact your hosting company for assistance. The server redirect can be used in addition to the meta refresh described above, and should be, if your server or host supports it. There are several advantages to using this method. First, the shopper is delivered to the exact page they were looking for, rather than being taken to the storefront. Second, you can keep the server redirect available as long as you like, where the meta refresh technique will only be available during the 6-month transition period from Miva Merchant 4 to 5. If your store is listed with services that submit it to search sites, update your listings to include the new URLs. If you only have a few products listed, or just your storefront, manually update those listings to point to the updated URLs. Remember, order information is not transferred to the new store. Leave the old store in maintenance mode (to prevent shoppers from placing any new orders), and finish processing any orders that are still open in your old store. The old store will remain available to you for six months. That is, the license allows for a 6-month overlap period, when you can get the new store up and running, and close out the old one. Log out of Miva Merchant 5. Follow the same steps outlined above, beginning with creating the provisioning file. Remember to find the provisioning file, provide.xml, in the export directory for the store you are upgrading at the moment. If you named the file anything other than provide.xml, rename it to provide.xml now. Remember that the store manager must already be a user in your Miva Merchant 5 system before their store can be created. When you are ready, log in to Miva Merchant 5 again, and the additional store will be created. Be careful to copy any store images to the correct store graphics directory. If you encounter any of the following situations, try the following solutions. There are several possible reasons that the store would not be created. Depending on the situation, there may be a log file, provide.log, which you can open and read with any text editor. Remember that you must copy the image files into the new store graphics directory. They are not imported via the provisioning file (but all references to them are). Review the instructions above, under Copy the Store Image Files. Once you have copied the image files, refresh your browser window to see them appear in the new store. Edit Store / Maintenance Mode
http://docs.smallbusiness.miva.com/en-US/merchant/5/webhelp/providexml.html
2015-10-04T10:43:32
CC-MAIN-2015-40
1443736673081.9
[]
docs.smallbusiness.miva.com
value (HTMLProgressElement)
http://docs.webplatform.org/wiki/html/attributes/value_(HTMLProgressElement)
2015-10-04T09:11:46
CC-MAIN-2015-40
1443736673081.9
[]
docs.webplatform.org
Access Control List From Joomla! Documentation
https://docs.joomla.org/index.php?title=Access_Control_List&oldid=99404
2015-10-04T10:19:45
CC-MAIN-2015-40
1443736673081.9
[]
docs.joomla.org
Revision history of "Detailed instructions for updating from 3.1.2 to 3.1.4"
https://docs.joomla.org/index.php?title=J3.2:Detailed_instructions_for_updating_from_3.1.2_to_3.1.4&action=history
2015-10-04T09:51:38
CC-MAIN-2015-40
1443736673081.9
[]
docs.joomla.org
Connect to a Wi-Fi network - On the home screen, click the connections area at the top of the screen, or click the Manage Connections icon. - Click Wi-Fi Network. - If you want to connect to a public hotspot or to a Wi-Fi® network that does not require authentication, select the Show Open networks only checkbox.
http://docs.blackberry.com/en/smartphone_users/deliverables/32004/Connect_to_a_Wi-Fi_network_70_1716021_11.jsp
2015-10-04T09:42:58
CC-MAIN-2015-40
1443736673081.9
[]
docs.blackberry.com
[tox]
envlist = py26,py27
[testenv]
deps=pytest # install pytest in the venvs
commands=py.test # or 'nosetests' or ...
To sdist-package, install and test your project against Python2.6 and Python2.7, just type: tox and watch things happening (you must have python2.6 and python2.7 installed). - automation of tedious Python related test activities - test your Python package against many interpreter and dependency configs - automatic customizable (re)creation of virtualenv test environments - installs your setup.py based project into each virtual environment - test-tool agnostic: runs py.test, nose or unittests in a uniform manner - supports using different / multiple PyPI index servers - uses pip (for Python2 environments) and distribute (for all environments) by default - cross-Python compatible: Python2.4 up to Python2.7, Jython and Python3 support as well as for pypy - cross-platform: Windows and Unix style environments - integrates with continuous integration servers like Jenkins (formerly known as Hudson) and helps you to avoid boilerplatish and platform-specific build-step hacks - unified automatic artifact management between tox runs both in a local developer shell as well as in a CI/Jenkins context - driven by a simple ini-style config file - documented examples and configuration - concise reporting about tool invocations and configuration errors
http://tox.readthedocs.org/en/1.1/
2015-10-04T09:11:06
CC-MAIN-2015-40
1443736673081.9
[]
tox.readthedocs.org
API Tester Guides Search… API Tester features Guides GitHub GraphQL API Guide Google Cloud APIs Guide YouTube API Guide Twitter API Guide Coingecko API Guide Discord API Guide Cloudflare API Guide Add your guide Media kit GitBook Coingecko API Guide The CoinGecko API allows us to retrieve cryptocurrency data such as price, volume, market cap, and exchange data from CoinGecko using HTTP requests. No registration and access tokens are required to work with the CoinGecko REST API! All you need to send GET requests is just an endpoint and in some cases query parameters. In this guide we will review the following requests: 1. get the current price of a coin, 2. get the current data of a coin, 3. get the exchange list, 4. trending statistics. Get the current price There is a common parameter for all requests - the coin ID. You can find it on the coin page in the Info section for the "API id" label. To find the coin page, you can use the site search. Let's take this as an example of BNB: To get the current price of any cryptocurrencies in any other supported currencies that you need: 1. Launch API Tester and create a new request by tapping on the “plus” icon. 2. Select the GET request type. 3. In the screen that opens, enter the following endpoint URL in the address field: 4. Add the "ids" parameter and set the coin IDs separated by commas as the value in the Query Params section. This parameter will be automatically added to the query string. 5. Add the "vs_currences" parameter. As a value, set the currencies in which you want to get the price. Enter the values for the parameters separated by commas without spaces! After setting all neccessary parameters tap the send request button. The server will return the following response: You can import this request into API Tester app via these links: curl -X GET '' Get the current data Let`s get current data (name, price, market, ... including exchange tickers) for a coin! The coin id is a required parameter for this request. The "id" parameter is part of the query string, it may look like this: But this request also has a lot of optional parameters that affect what data the server will return. We use only one of the optional parameters, this will allow us to receive data in one language: localization=false. We will get the response as in the second screenshot below: You can import this request into API Tester app via these links: curl -X GET '' Exchanges Now we will get a list of all exchanges active with trading volumes. Endpoint for this request is following: There are two optional but very important parameters for this request: " per_page" and "page": 1. The first one is responsible for the number of elements on the page. 2. The second one is responsible for the number of the current page. We will use pagination to avoid overloading the server and client. The server will return the data in parts. Let's set the page size as 10 and the page number as 1. We will get the response as in the second screenshot below: You can import this request into API Tester app via these links: curl -X GET '' Trending And finally, let's get some interesting statistics provided by CoinGecko: Top-7 trending coins on CoinGecko as searched by users in the last 24 hours (Ordered by most popular first). Endpoint for this request following: There are no additional parameters here. The server returned LUNA in the first place, what will the server return when you will make a request? 
You can import this request into API Tester app via these links: curl -X GET ''
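Outside of API Tester, the first request above can also be reproduced with a few lines of Python; the endpoint path below is taken from CoinGecko's public v3 API, and the coin IDs are just examples:
import requests

BASE_URL = "https://api.coingecko.com/api/v3"

# Current price of BNB and Bitcoin in USD and EUR (no API key required)
params = {"ids": "binancecoin,bitcoin", "vs_currencies": "usd,eur"}
response = requests.get(BASE_URL + "/simple/price", params=params, timeout=10)
response.raise_for_status()
print(response.json())  # e.g. {"binancecoin": {"usd": ...}, "bitcoin": {...}}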
https://docs.apitester.org/guides/coingecko-api-guide
2022-09-25T02:35:45
CC-MAIN-2022-40
1664030334332.96
[]
docs.apitester.org
Using Labels As you add more Candidates, Assessments & Challenges to your team account, keeping everything straight may start to get unwieldy. This is where labels come in. You can assign any label you wish to one or more of these record types. When viewing lists of these records, you can easily filter them by one or more of those labels. How Labels Work Labels are tags which can be applied to your data. You do not need to predefine a label before applying it; you simply use whatever text value you want when applying a label. Later you can use list filters to filter by one or more labels. Managing Labels For all three record types, labels can be managed by editing the given record. For candidates specifically, labels can also be applied when sending invitations or when bulk updating. Apply Labels via Invitations When sending an invitation to one or more candidates, you can specify one or more labels to add. This is the most often used way of applying labels, as you will typically know the cohort when sending invitations. Recommendation We highly recommend you get into the habit of utilizing labels when sending invites, as this practice will help you to keep your process organized from the start. Bulk Update Candidates have the ability to be bulk edited, as well as to have actions taken on them. For example, you can bulk approve, bulk cancel invitations, or simply just bulk edit them. When taking any of these actions, you will have the option of adding or removing labels. Challenge Topics In addition to labels, which are supported in the Candidates, Assessments, and Challenges views, you can also apply topics to the challenges that you define. Topics work the same as labels, in that you can apply them to any record without having to predefine them. Their purpose is to provide a separate grouping of tag data which focuses on quickly identifying what the challenge aims to assess. Use-case Examples Let's briefly go over some common usages for labels. These are examples, but feel free to use labels for whatever organizational purposes you may have. Candidates Organize By Cohorts If you are assessing candidates for a hiring process, you could use labels to organize candidates by how they were sourced, or perhaps by which jobs they are applying to. If you are assessing students in an educational setting, you could use labels to organize candidates by classroom. Cohort and classroom organization is discussed further in our education article. Organize by Process Stage For recruiting use-cases, labels are useful to organize candidates by which part of the process they are in. This is particularly useful if you are not using an ATS system, or simply have a multi-part assessment process which you want to track outside of the ATS, whereas in the ATS you may simply treat the assessment process as a single stage. Assessments & Challenges Organize By Target Cohort If you are managing a large amount of content to be used with different groups of candidates, we recommend that you organize the content by cohort. For example, job role, classroom, or seniority are all groupings that are commonly used.
https://docs.qualified.io/for-teams/process/labels/
2022-09-25T02:54:36
CC-MAIN-2022-40
1664030334332.96
[]
docs.qualified.io
Documentation for a newer release is available. View Latest Golang 1.18 The Go language/Golang has been updated to version 1.18 in Fedora 36. The new version includes improved support of the RISC-V processor architecture and added support for Aarch64 based Darwin (macOS) machines, among other bug fixes, enhancements and new features. All Go packages will require a rebuild against the new version. For full information about Go 1.18, see the upstream release notes.
https://docs.fedoraproject.org/jp/fedora/f36/release-notes/developers/Development_Go/
2022-09-25T01:16:00
CC-MAIN-2022-40
1664030334332.96
[]
docs.fedoraproject.org
Help Scout On This Page Help Scout is an email-based customer support software that assists small businesses and teams manage their customer relationships. Help Scout is similar to your email with a mailbox at the top of the hierarchy. All customer-related communication is tracked through conversations and threads in the mailbox, eliminating the need to manage ticket numbers and case numbers. In Help Scout, you can create multiple mailboxes for each shared email address. This allows your users across various departments, such as support, marketing, and customer success, to collaborate and manage different products or brands from a single account. Help Scout also provides your users visibility into the emails being responded to in real-time. You can replicate the data from your HelpScout account to a Destination database or data warehouse using Hevo Pipelines. Refer to the Data Model section for information on the objects that Hevo creates in your Destination. Prerequisites An active account in Help Scout. An active Help Scout user with access to at least one customer mailbox. A subscription to Help Scout’s Plus or Company plan if you want to read data from the Custom Fields object. Configuring Help Scout as a Source Perform the following steps to configure Help Scout as the Source in your Pipeline: Click PIPELINES in the Asset Palette. Click + CREATE in the Pipelines List View. In the Select Source Type page, select Help Scout. In the Configure your Help Scout account page, click + ADD HELP SCOUT ACCOUNT. Log in to your Help Scout account, and click Authorize, providing Hevo access to your Help Scout data. In the Configure your Help Scout Source page, specify the following: Pipeline Name: A unique name for the Pipeline, not exceeding 255 characters. Authorized Account (Non-editable): This field is pre-filled with the email address that you selected earlier when connecting to your Help Scout account. Historical Sync Duration: The duration for which the existing data in the Source must be ingested. Default value: 3 Months. Note: If you select All Available Data, Hevo fetches all the data created since January 01, 2011, for your the historical data for all the objects and loads it to the Destination. For the Conversation and Customer objects, the historical data is ingested based on the historical sync duration selected when creating the Pipeline. Default duration: 3 Months. For the objects, Mailbox, Tag, Team, User, and Workflow, Hevo ingests all historical data present in your account. Incremental Data: Once the historical load is complete, all new and updated records for the Conversation and Customer objects are synchronized with your Destination as per the Pipeline frequency. For Mailbox, Tag, Team, User, and Workflow, which are Full Load objects, Hevo fetches all the data but loads only the new and updated records to the Destination as per the Pipeline frequency. It achieves this by filtering the previously ingested data based on the position stored at the end of the last ingestion run. Schema and Primary Keys Hevo uses the following schema to upload the records in the Destination: Data Model The following is the list of tables (objects) that are created at the Destination when you run the Pipeline: Source Considerations Help Scout restricts the number of API requests and access to certain API endpoints based on your Help Scout pricing plan. If Hevo exceeds the number of calls allowed by your plan, data ingestion is deferred until the limits are reset (approximately five minutes). 
Refer to the following table for the applicable rate limits:
https://docs.hevodata.com/sources/sns-analytics/helpscout/
2022-09-25T02:29:45
CC-MAIN-2022-40
1664030334332.96
[]
docs.hevodata.com
Transferred chat interactions with contextual information When your agents know who your customers are, what they’re looking for, and what they’ve already shared with another agent, they can give better service, faster. Contextual information informs agents facilitating more productive conversations while handling customer issues. 8x8 Contact Center introduces the ability to hand off the conversation with interaction details when transferring a live chat. Let’s say a contact center agent interacting with a customer via chat, has to transfer the customer to another department. Transfer the live chat interaction to another queue. Upon transfer, the agent receiving the transferred chat interaction can view the customer details gathered via the pre-chat form, the agent name transferring the interaction, the channel the interaction was initiated on, and the queue to which it is transferred. With all this information, the second agent quickly reviews the customer information as well the context of the conversation, processes the chat interaction more effectively. At the termination of the chat, the chat log includes the original transaction ID along with the chat transcript. The chat log created with the first agent indicates the chat was transferred. The transfer of information is supported throughout the customer journey, in a chain interaction where a chat interaction is transferred to and processed by multiple agents. Every agent receives the information until that point. As a supervisor, you can review the path of an interaction by accessing the transfer details via Monitoring. Who accepted the chat first, which queue did they transfer it to, who was the second agent to receive the interaction and so on. Note: The chat history is bound to the customer and the channel, so agents can see chat history from all previous interactions with the customer on the same channel. Agents cannot see what happened on another channel that the same customer might have used in the past. Use case At AcmeJets, agent Robin accepts a chat request from customer Mia, who is looking for information regarding a recent sales order. Agent Robin pulls up the order information and processes the request. Mia then has a billing related question. Robin must now transfer the live chat to their billing queue. She then informs Mia about the transfer and transfers the conversation to the billing queue. Agent John receives the chat request, takes a quick look at the customer information collected during pre-chat, accepts the chat to view the conversation details with agent Robin. Agent John in Billing accepts the chat and receives all the information about the customer along with the chat transcript with agent Robin. The Transaction tab displays the following information: - Transfer from: Indicates the agent who transferred the interaction - Channel Name: The chat channel that received the interaction - Queue: The chat queue that is currently offering the interaction - Customer: Customer name if this is an existing customer - Company: Company name the customer is affiliated with - Pre-chat information: Information collected via pre-chat (such as language and customer name) - Transaction ID: The transaction ID of the chat interaction with this agent Agent John chats with Mia, processes the interaction, and ends the conversation. Upon termination of the chat, the chat log pops. It includes the current transaction ID as well as the previous transaction ID. It also indicates that the chat is transferred from agent Robin. 
The chat logged with agent Robin indicates that it is transferred to the billing queue. Monitoring transferred chat interactions As a contact center supervisor, you want to track how efficiently the chat interactions are being handled by agents. When agents transfer interactions, you want to understand the reason for transfer, was it transferred to the right department? Did the agent ask all the right questions before transferring? Did the agent accepting the transferred conversation receive all the necessary information to handle the chat? You now get answers to all your questions in your Monitoring tool. - Log in to Agent Console as a supervisor. - From the menu, go to Monitoring. - In the Playback tab, select Chat. You will see the list of chat interactions for a specified time range. - From the list, select a specific transaction to view the transaction details as well as the chat transcription for that leg. If the details indicate that it was a transferred chat, you can fetch the previous transaction ID, look for it in the list, and bring up the details (see below).
https://docs.8x8.com/8x8WebHelp/VCC/release-notes/Content/9-10-release/chat-transfer-context.htm
2022-09-25T01:58:23
CC-MAIN-2022-40
1664030334332.96
[array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) array(['../../Skins/Default/Stylesheets/Images/transparent.gif', 'Closed'], dtype=object) ]
docs.8x8.com
Cosmos DB General guidelines and best practices for working with Cosmos DB. Querying documents in CosmosDB Use FeedResponse to retrieve all hits The documents in a collection might be stored in different partitions. When querying documents in a collection, the response will only contain documents from a single partition at a time; to retrieve your hits across all partitions you may utilize the continuation token or the HasMoreResults property of the DocumentQuery. Avoid expensive queries CosmosDB uses indexes to find matches for queries; if there is no value for the indexed property, all instances have to be checked to figure out if there is a match for the query. This occurs in cases where we assert that a property is null, so queries like this should be avoided. Always try to assert on an existing value; if this is not possible, modifying the data model should be considered.
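The guidance above is phrased in terms of the older .NET DocumentQuery API; purely as an illustrative sketch, the same cross-partition query pattern with the azure-cosmos Python SDK (which pages through results for you) could look like this, with the endpoint, key, database, container, and query all being placeholders:
from azure.cosmos import CosmosClient

# Placeholders for your account endpoint, key, database and container names
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("mydb").get_container_client("items")

# Query on an indexed property (avoid "IS NULL"-style predicates that force a
# scan of every document); iterating the results pages across partitions.
items = container.query_items(
    query="SELECT * FROM c WHERE c.status = @status",
    parameters=[{"name": "@status", "value": "active"}],
    enable_cross_partition_query=True,
)
for item in items:
    print(item["id"])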
https://docs.altinn.studio/community/contributing/handbook/cosmosdb/
2022-09-25T01:58:55
CC-MAIN-2022-40
1664030334332.96
[]
docs.altinn.studio
# Malware: Sality

# Problem description

A Windows computer in your network is likely infected with the Sality malware. Sality is a very common type of malicious code that affects only Windows systems. It usually infects your PC when you open a file with malicious contents. It can be used to perform many kinds of bad actions, such as using your computer to send and receive spam emails, stealing your sensitive personal or financial data, or performing computing tasks such as mining cryptocurrency or cracking passwords.

You can use Windows Defender to fix your computer. You can also consider reinstalling your computer with a fresh Windows installation to make sure you get rid of the malware infection.
https://docs.badrap.io/types/malware-sality.html
2022-09-25T02:33:20
CC-MAIN-2022-40
1664030334332.96
[]
docs.badrap.io
Contributors

Contributors are the users added to a project. They could be from the organization owning the project, for higher-level roles, or from other organizations (or no organization at all). A project has one or more Project Owners, and also Developers. The project owner can add Annotation Managers, who manage the daily work of all Annotators.

The Contributor hierarchy is as follows:
- Project Owner: this role has access to all. As a project owner you can create projects, manage datasets, assign contributors, change roles, export data and more.
- Developer: as a developer, you can manage datasets, set recipes, create tasks and export data within a project.
- Annotation Manager: as an annotation manager you can create annotation or QA tasks, redistribute and reassign these tasks to annotators, as well as review their tasks.
- Annotator: annotators can only work on annotation and QA assignments assigned to them.

Adding New Contributors

To send out tasks and assignments, you'll need to add contributors to your project. The list of project contributors appears on the top left of the project dashboard. Use it to add, edit or remove contributors. Once added, new contributors receive an email notification that includes a link to the selected project. To access the project, users who are new to the platform must first sign up.

Add new contributors by:
1. Click the "Add new" button.
2. Type in their email address.
3. Select a role.
4. Click "Enter."

Edit the roles of existing contributors by:
1. Click the box by their email.
2. Select the desired role from the drop-down list.
3. To delete one or more contributors, click the red "minus" icon to the left of their email address.
4. Click "DONE" to exit the editing mode.
https://docs.dataloop.ai/docs/contributor-roles
2022-09-25T01:47:56
CC-MAIN-2022-40
1664030334332.96
[array(['https://cdn.document360.io/53f32fe9-1937-4652-8526-90c1bc78d3f8/Images/Documentation/20e32b1-projecthierarchy%281%29.jpg', '20e32b1-projecthierarchy.jpg'], dtype=object) array(['https://cdn.document360.io/53f32fe9-1937-4652-8526-90c1bc78d3f8/Images/Documentation/final_60d2e0a889e21200cd4321a1_82036.gif', None], dtype=object) ]
docs.dataloop.ai
CLI is the interface between an application and the Micro Teradata Director Program (MTDP). CLI does the following: - Builds parcels that MTDP packages for sending to the database using the Micro Operating System Interface (MOSI), and - Provides the application with a pointer to each of the parcels returned from the database. MTDP is the interface between CLI and MOSI. MOSI is the interface to the database.
https://docs.teradata.com/r/Teradata-Call-Level-Interface-Version-2-Reference-for-Workstation-Attached-Systems/October-2021/Introduction-to-CLI/Interface-Overview/Logical-Structure
2022-09-25T01:11:53
CC-MAIN-2022-40
1664030334332.96
[]
docs.teradata.com
Trigger a Subworkflow This action allows you to trigger one or more workflows for sub-data within your original workflow. Common examples for subworkflows include: - Order > Line Items (multiple) - Order > Customer - Order > Fulfillments (multiple) - Product > Variants (multiple) In the subworkflow action, select the data you want to create a subworkflow for. Order example: After you select the data you want to send to the subworkflow you can edit it: In your subworkflow, you can now add conditions and actions based on the data that is passed. Be sure to include any condition if you want to exclude certain items from running in the subworkflow. Additional Notes & Considerations - To test a Subworkflow, select an item to test with on the main workflow page. Once selected, go back to the subworkflow. The subworkflow edit screen will now allow you to select a relevant test item. For example, select an order and on a line item subworkflow, specific line items will be available for testing. - You can view subworkflow logs by going to the subworkflow and clicking the "Logs" tab at the top. From there you can view success messages, errors, and what's in queue. - Subworkflows can generate a lot of actions which can cause delays if you are not careful. Be sure that you have appropriate conditions to only run subworkflows on items you need it to run on. You may run into issues if you try to process too many subworkflow items, such as an order with 50+ line items, or a product with 50+ variants.
https://arigato.docs.bonify.io/article/44-trigger-a-subworkflow
2022-09-25T01:05:43
CC-MAIN-2022-40
1664030334332.96
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefa332c7d3a3dea3d21e3/img-15304-1590621982-720838557.png', 'Screenshot_2020-04-30_13.17.27.png'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefa330428632fb90067bb/img-15304-1590621983-1118210349.png', 'Screenshot_2020-04-30_13.21.01.png'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefa342c7d3a3dea3d21e4/img-15304-1590621984-1415997956.png', 'Screenshot_2020-04-30_13.21.35.png'], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/5ecd69412c7d3a3dea3d0b36/images/5ecefa340428632fb90067bc/img-15304-1590621985-191983138.png', 'Screenshot_2020-04-30_13.23.28.png'], dtype=object) ]
arigato.docs.bonify.io
This is a bugfix release; for a full list of features, see the Arnold 6.0.1.0 release notes.

Bug Fixes
- #9229 AiProceduralViewport doesn't honor procedural_searchpath
- #9216 [Alembic] Normals not read from polymeshes in some archives
- #9227 [Alembic] Visibility overridden for ginstances of Alembic procedural
- #9201 [GPU] A polymesh with step_size>0 and volume_padding>0 crashes
- #9293 [GPU] Multi-GPU partially hangs on scenes with textures
- #9185 Potential crash in node initialization of ginstances with parallel initialization
- #9230 Unable to install licensing components on Debian-based Linux
https://docs.arnoldrenderer.com/pages/diffpages.action?originalId=111838347&pageId=111838348
2022-09-25T01:45:07
CC-MAIN-2022-40
1664030334332.96
[]
docs.arnoldrenderer.com
How to add multiple IPv4 service addresses. To configure additional IPv4 service Edit Interfaces pop-up window closes. - Under Addresses, expand to view the newly added IPv4 service address.
https://docs.bluecatnetworks.com/r/Address-Manager-Administration-Guide/Configuring-additional-IP-service-addresses/9.2.0
2022-09-25T01:54:38
CC-MAIN-2022-40
1664030334332.96
[]
docs.bluecatnetworks.com
Simple Interaction GUI

Let's create a simple interaction GUI for our Simple Machine. We'll use it to read the current data and to reset the counters.

The Widget

The very first thing we do is create the GUI itself. You do this by creating a new BP-Class of type FGInteractWidget. This works like every other regular UMG widget, so be sure you're familiar with how they work. In our case we'll add two TextBlocks, and a button with a TextBlock in it which has the label "Reset".

The FGInteractWidget has the M Interact Object variable, which represents the object that the player interacted with (aka our simple machine). You can cast it to the simple machine so we can access the counters.

Bind the content of the two new TextBlocks to individual functions. One of them should return an appended text of "Count: " plus the itemCount of the machine. The other should return typeCount and "different Types". Place them wherever you want.

Now we have the display of our text, but we still need to add the reset functionality. For that, bind an event to the on click event of the new button and set both counters of the machine to 0.

There's still a problem: if you use the machine, the user can't physically interact with the widget. The reason is that we don't capture any input from the user. To fix that, you'll need to change some of the class default values on the widget.
- M Use Keyboard: Activate this so keystrokes get used to trigger events in our widget (Esc to exit the widget).
- M Use Mouse: Activate this so mouse input gets used, making the cursor visible and allowing click events to happen.
- M Capture Input: Activate this so the input events don't get used further in the game (like walking around).

Make machine interactable

Now we need to attach our UI to the simple machine. There's very little to do: we just need to enable M Is Useable and set M Interact Widget Class to our newly created widget. Done! Now you can interact with your machine by looking at it and hitting E.

Use the SF Window

If you want, you can make your widget look more like the other SF widgets by using the "WindowDark" widget provided by the modding starter content. Use it like every other widget in your new widgets and add the content you want in the widget slot. Then you can set the title by defining the title variable's default value. If you want to add the functionality to close the whole interact widget by clicking the "X" button, not just the window, you need to bind the OnEscapePressed event of the interact widget (or custom logic) to the OnClose event on the WindowDark.
https://docs.ficsit.app/satisfactory-modding/v3.1.1/Development/BeginnersGuide/SimpleMod/machines/SimpleInteraction.html
2022-09-25T01:49:50
CC-MAIN-2022-40
1664030334332.96
[]
docs.ficsit.app
Graphical User Interfaces with Tk

tkinter.ttk — Tk themed widgets
- Using Ttk
- Ttk Widgets
- Widget
- Combobox
- Spinbox
- Notebook
- Progressbar
- Separator
- Sizegrip
- Treeview
- Ttk Styling

tkinter.tix — Extension widgets for Tk

tkinter.scrolledtext — Scrolled Text Widget

IDLE
- Menus
- Editing and navigation
- Startup and code execution
- Help and preferences

Other Graphical User Interface Packages
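Since this page is only a table of contents, here is a tiny illustrative sketch (not taken from the page itself) of the kind of themed widget that the tkinter.ttk entries above cover:

import tkinter as tk
from tkinter import ttk

root = tk.Tk()
# ttk widgets follow the platform theme, unlike the classic tk widgets.
ttk.Label(root, text="Hello from ttk").pack(padx=10, pady=5)
ttk.Button(root, text="Quit", command=root.destroy).pack(pady=5)
root.mainloop()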
https://docs.python.org/3.8/library/tk.html
2022-09-25T01:54:13
CC-MAIN-2022-40
1664030334332.96
[]
docs.python.org
sched — Event scheduler

Source code: Lib/sched.py

The sched module defines a class which implements a general purpose event scheduler:

class sched.scheduler(timefunc=time.monotonic, delayfunc=time.sleep)
The scheduler class defines a generic interface to scheduling events. It needs two functions to actually deal with the "outside world" — timefunc should be callable without arguments, and return a number (the "time", in any units whatsoever). The delayfunc function should be callable with one argument, compatible with the output of timefunc, and should delay that many time units. delayfunc will also be called with the argument 0 after each event is run to allow other threads an opportunity to run in multi-threaded applications.

Changed in version 3.3: timefunc and delayfunc parameters are optional.

Example: see the sketch at the end of this section.

Scheduler Objects

scheduler instances have the following methods and attributes:

scheduler.enterabs(time, priority, action, argument=(), kwargs={})
Schedule a new event. The time argument should be a numeric type compatible with the return value of the timefunc function passed to the constructor. Events scheduled for the same time will be executed in the order of their priority. A lower number represents a higher priority.
Executing the event means executing action(*argument, **kwargs). argument is a sequence holding the positional arguments for action. kwargs is a dictionary holding the keyword arguments for action.
Return value is an event which may be used for later cancellation of the event (see cancel()).
Changed in version 3.3: argument parameter is optional.
Changed in version 3.3: the kwargs parameter was added.

scheduler.enter(delay, priority, action, argument=(), kwargs={})
Schedule an event for delay more time units. Other than the relative time, the other arguments, the effect and the return value are the same as those for enterabs().
Changed in version 3.3: argument parameter is optional.
Changed in version 3.3: the kwargs parameter was added.

scheduler.cancel(event)
Remove the event from the queue. If event is not an event currently in the queue, this method will raise a ValueError.

scheduler.run(blocking=True)
Run all scheduled events.
Changed in version 3.3: blocking parameter was added.

scheduler.queue
Read-only attribute returning a list of upcoming events in the order they will be run. Each event is shown as a named tuple with the following fields: time, priority, action, argument, kwargs.
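The example on the original page was lost in extraction; the following is a short usage sketch consistent with the API described above (the printed timestamps will vary at runtime):

import sched, time

s = sched.scheduler(time.monotonic, time.sleep)

def print_time(a='default'):
    print("From print_time", time.monotonic(), a)

def print_some_times():
    print(time.monotonic())
    # enter(delay, priority, action, ...): on equal times, the lower
    # priority number runs first.
    s.enter(10, 1, print_time)
    s.enter(5, 2, print_time, argument=('positional',))
    s.enter(5, 1, print_time, kwargs={'a': 'keyword'})
    s.run()
    print(time.monotonic())

print_some_times()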
https://docs.python.org/pt-br/3/library/sched.html
2022-09-25T02:44:57
CC-MAIN-2022-40
1664030334332.96
[]
docs.python.org
Managing content sets Content sets overview A content set is a group of sensors, saved questions, packages, dashboards, categories, filter groups, and plugins to which a permission applies. Tanium provides several predefined content sets through the Default Content pack and through Tanium modules and shared services. You can create a content set to contain custom content or to accommodate changes in the role-based access control (RBAC) configuration of your Tanium deployment. For example, you can create a content set for sensors and packages related to Tanium Client maintenance, and then configure roles that allow a wide group of users read access to the content but write access to a smaller group of users. You can assign content to only one content set. A role can specify permissions for multiple content sets. Configure custom roles to define platform content permissions for content that is used across all modules and module permissions for module-specific content. Tanium also provides pre-defined module roles for module-specific content. The following figure shows the relationship between contents sets and content, permissions, and roles. For details about roles, see Managing roles. To see and use the Content Sets page, and to View content set details - From the Main menu, go to Administration > Permissions > Content Sets. - (Optional) In the Filter items field, enter a search string to find specific content sets based on Name or Description values. The Used By column indicates which Tanium modules or shared services use the content that is in a content set. If the column displays no value for a content set, that means its content is used across the Tanium Core Platform and is not module-specific. - Click the Name of the content set for which you want to review content and permissions. - Expand the content type that you want to review. The top grid lists all the objects of that type in the content set. The bottom grid displays the Roles , Users , and User Groups with permissions that are associated with the content. - When you finish reviewing, click Exit to return to the Content Sets summary page. Create a content set - From the Main menu, go to Administration > Permissions > Content Sets and click New Content Set. - Enter a Content Set Name and optional Description, and then click Save. - Perform the following tasks to assign content to the content set: Move content between content sets Move content between content sets as necessary to accommodate changes to the RBAC configuration of your Tanium deployment. For example, if a sensor collects sensitive information from endpoints, you might want to move that sensor to a content set that only highly privileged user roles can access. Before moving content, be sure that you understand how the move affects workflows. For example, if a user configures a scheduled action, and you later move the associated package to a content set for which that user does not have permission, the Tanium Server will not deploy the action. Keep predefined content that is included in Tanium modules and content packs in the original predefined content sets. As much as possible, create copies of Tanium-provided content and move the copies to other content sets when necessary. Contact Tanium Support before proceeding if moving original Tanium-provided content becomes necessary. To move content between content sets, you require the - The Reserved content set, which includes fundamental sensors that the Tanium Core Platform uses. 
- Certain Tanium solution-based content sets.

Perform the following steps to move content:
- From the Main menu, go to Administration > Permissions > Content Sets.
- Click the Name of the content set that contains the content you want to move.
- Expand the content type and select the content that you want to move.
- Click Move to Content Set, select the target content set, and click Confirm.

Export or import content sets

The following procedures describe how to export and import content sets. Test content sets and roles in your lab environment before importing their configuration into your production environment.

Export content sets

Export content sets as a file in one of the following formats:
- CSV: When you open the file in an application that supports CSV format, it lists the content sets with the same attributes (columns) as the Content Sets page displays.
- JSON: If you are assigned a role with the Export Content permission, you can export content set configurations as a JSON file to import them into another Tanium Server. The Administrator reserved role has that permission. The content set section of the file includes the content set names but not the content set assignments.

Perform the following steps to export content sets:
- From the Main menu, go to Administration > Permissions > Content Sets.
- (Optional, CSV exports only) To add or remove attributes (columns) for the CSV file, click Customize Columns in the grid and select the attributes.
- Select rows in the grid to export only specific content sets. If you want to export all content sets, skip this step.
- Click Export.
- (Optional) Edit the default export File Name. The file suffix (.csv or .json) changes automatically based on the Format selection.
- Select an Export Data option: All content sets in the grid or just the Selected content sets.
- Select the file Format: List of Content Sets - CSV, or Content Set Definitions - JSON (Administrator reserved role only).
- Click Export.

The Tanium Server exports the file to the downloads folder on the system that you used to access the Tanium Console.

Import content sets

Delete a content set

You must empty a content set configuration before you can delete it. To empty a content set, move its content to another set or delete the content. To move content, see Move content between content sets.
- From the Main menu, go to Administration > Permissions > Content Sets and click the content set Name.
- Click Delete Content Set.
https://docs.tanium.com/interact/platform_user/console_content_sets.html
2022-09-25T01:53:45
CC-MAIN-2022-40
1664030334332.96
[array(['images/content_sets_configuration_2.1.png', None], dtype=object)]
docs.tanium.com
Summary views provide tabular data about the use of resources in the monitored environment. Where You Find the Summary View From the left menu, click Views panel, click Create. Click Summary from the right panel.. From the Name and Configuration Tab Data Tab The data definition process includes adding properties, metrics, policies, or data that adapters provide to a view. These are the items by which vRealize Operations Cloud collects, calculates, and presents the information for the view. How to Add Data to a View If you selected more than one subject, click on the subject for which you want to add data. Double-click either a metric or a property from the tree in the left panel to add it to the view. For each subject that you select, the data available to add might be different. The Data, Transformation, and Configuration details are displayed. You can see a live preview of the view type when you select a subject and associated data, and then click Select preview source. Time Settings Tab. Filter Tab The filter option allows you to add additional criteria when the view displays too much information. For example, a view shows information about the health of virtual machines. From the Filter tab, you add a risk metric less than 50%. The view displays the health of all virtual machines with risk less than 50%. To add a filter to a view, from an existing or new view dialog box, click the Filter tab. Fill in the details for each row and click Add. Summary Tab the Summary tab in the right pane. Click the plus sign to add a summary row. For the Summary view, the summary column shows aggregated information by the items provided on the Data tab. Previous, Next, Create, and Cancel Options At the end of each tab, you can go to the previous or next tab. You can also cancel the creation of the view. After you have added all the details, click Create to create the view.
https://docs.vmware.com/en/vRealize-Operations/Cloud/com.vmware.vcom.config.doc/GUID-39FE0B5C-E1E1-45E9-8964-4A4BDED0B74C.html
2022-09-25T01:52:50
CC-MAIN-2022-40
1664030334332.96
[]
docs.vmware.com
Help Center How do I open a Zivver message? If you received a Zivver message and you do not have a Zivver account, the message is secured. Depending on the verification method selected by the sender of the message, you can open the message by following one of these three methods: - Open the message with a SMS-code Click the Not my number button or reach out to the sender of the message when the shown number is incorrect. - Open the message with an access code Reach out to the sender of the message when you do not know the access code. - Open the message with a verification email
https://docs.zivver.com/en/faq/open-zivver-message.html
2022-09-25T02:08:27
CC-MAIN-2022-40
1664030334332.96
[]
docs.zivver.com
Release Notes

Not all plugins are maintained by Veertu Inc developers, so you might not see them listed here.

Current Versions

Jenkins Plugin 2.3.0 - Dec 1st, 2020
- New Feature: Disable appending timestamp to Cache Builder/tags

TeamCity Plugin version 1.7.1 - July 7, 2020
- Bug Fix: Long-running threads were being created
- Bug Fix: UI slowness the more Instances/Agents you created
- Bug Fix: HTTPS without certificate authentication enabled doesn't work

Previous Versions
http://docs-1.12.0-and-2.3.1.s3-website.us-west-1.amazonaws.com/docs/release-notes/
2021-06-12T22:59:38
CC-MAIN-2021-25
1623487586465.3
[]
docs-1.12.0-and-2.3.1.s3-website.us-west-1.amazonaws.com
A compute environment allows you to configure the software, packages, libraries, and drivers that you need. Domino comes with a default environment called the Domino Analytics Distribution, which includes Python, R, Jupyter, RStudio, and hundreds of data science related packages and libraries. We'll choose the default environment, which includes a recent version of R.

Click on the Compute Environment dropdown menu to choose the environment.
1.1. Choose the Domino Analytics Distribution.
https://docs.dominodatalab.com/en/4.4/get_started_r/2-configure_project.html
2021-06-12T23:03:39
CC-MAIN-2021-25
1623487586465.3
[array(['../_images/collaborator_panel.png', '../_images/collaborator_panel.png'], dtype=object)]
docs.dominodatalab.com
cts:geospatial-region-query( $geospatial-region-reference as cts:reference*, $operation as xs:string, $regions as cts:region*, [$options as xs:string*], [$weight as xs:double?] ) as cts:geospatial-region-query Construct a query to match regions in documents that satisfy a specified relationship relative to other regions. For example, regions in documents that intersect with regions specified in the search criteria. This function matches regions in documents in the database satisfying the relationship R1 op R2, where R1 is a region in a database document, op is the operator provided in the operation parameter, and R2 is any of the regions provided in the regions parameter. The R1 regions under considerations are those in the indexes provided in the geospatial-region-reference parameter. The database configuration must include a geospatial path region index corresponding to each R1 region. For details, see Geospatial Region Queries and Indexes in the Search Developer's Guide. The operations are defined by the Dimensionally Extended nine-Intersection Model (DE-9IM) of spatial relations. They have the following semantics: - "contains" - R1 contains R2 if every point of R2 is also a point of R1, and their interiors intersect. - "covered-by" - R1 is covered-by R2 if every point of R1 is also a point of R2. - "covers" - R1 covers R2 if every point of R2 is also a point of R1. - "disjoint" - R1 is disjoint from R2 if they have no points in common. - "intersects" - R1 intersects R2 if the two regions have at least one point in common. - "overlaps" - R1 overlaps R2 if the two regions partially intersect -- that is, they have some but not all points in common -- and the intersection of R1 and R2 has the same dimension as R1 and R2. - "within" - R1 is within R2 if every point of R1 is also a point of R2, and their interiors intersect. - "equals" - R1 equals R2 if every point of R1 is a point of R2, and every point of R2 is a point of R1. That is, the regions are topologically equal. - "touches" - R1 touches R2 if they have a boundary point in common but no interior points in common. - "crosses" - R1 crosses R2 if their interiors intersect and the dimension of the intersection is less than that of at least one of the regions. Note: the operation covers differs from contains only in that covers does not distinguish between points in the boundary and the interior of geometries. In general, covers should be used in preference to contains. Similarly, covered-by should generally be used in preference to within. If either the geospatial-region-reference or regions parameter is an empty list, the query will not match any documents. The query uses the coordinate system and precision of the geospatial region index reference supplied in the geospatial-region-reference parameter. If multiple index references are specified and they have conflicting coordinate systems, an XDMP-INCONSCOORD error is thrown. cts:geospatial-region-query( cts:geospatial-region-path-reference("//item/region"), "contains", cts:box(10, 20, 30, 40)) Stack Overflow: Get the most useful answers to questions from the MarkLogic community, or ask your own question.
https://docs.marklogic.com/9.0/cts:geospatial-region-query
2021-06-13T00:05:49
CC-MAIN-2021-25
1623487586465.3
[]
docs.marklogic.com
The Trail Renderer component renders a trail of polygons behind a moving GameObject. Trail Renderers must be laid out over a sequence of frames; they cannot appear instantaneously. The Trail Renderer uses the same algorithm for trail rendering as the Line Renderer.
https://docs.unity3d.com/2019.4/Documentation/Manual/class-TrailRenderer.html
2021-06-13T00:46:05
CC-MAIN-2021-25
1623487586465.3
[]
docs.unity3d.com
After successfully deploying a node on Ankr, you can get the endpoint from the Application details section as seen below:

The Endpoint URL will have the following format: http://<your_app_id>.ankr.com

Example (as shown in the image above):

// request
curl -H "Content-Type: application/json" <endpoint URL>

// response
{"accounts":[{"balance":{"timestamp":"1604296800.106706008","balance":696858661816},"account":"0.0.1001","expiry_timestamp":null,"auto_renew_period":null,"key":null,"deleted":false}],"links":{"next":null}}

In the next section, you will find a detailed description of the provided Hedera API endpoints, which can also be consulted in the Hedera Official Documentation.
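As an illustration of calling the endpoint from code, here is a small Python sketch. The /api/v1/accounts path is an assumption based on the account-list response shown above (it matches the standard Hedera mirror node REST path) and is not stated explicitly in this document; the app ID is a placeholder.

import requests

# Placeholder endpoint -- substitute your own Ankr app ID.
endpoint = "http://<your_app_id>.ankr.com"

# Assumed Hedera mirror node REST path (not given explicitly above).
resp = requests.get(f"{endpoint}/api/v1/accounts",
                    headers={"Content-Type": "application/json"})
resp.raise_for_status()
for account in resp.json().get("accounts", []):
    print(account["account"], account["balance"]["balance"])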
https://docs.ankr.com/enteprise-solutions/hedera-hashgraph/ankr-json-rpc-endpoint
2021-06-13T00:28:01
CC-MAIN-2021-25
1623487586465.3
[]
docs.ankr.com
How to: Create Named Formulas

This example demonstrates how to define names for formulas. To do this, call the DefinedNameCollection.Add method with a name to be associated with a formula and the formula string passed as parameters. Use the Worksheet.DefinedNames or Workbook.DefinedNames property to access and modify the collection of defined names of a particular worksheet or the entire workbook, depending on which scope you want to specify for a name.

NOTE: A complete sample project is available.

Worksheet worksheet1 = workbook.Worksheets["Sheet1"];
Worksheet worksheet2 = workbook.Worksheets["Sheet2"];
// Create a name for a formula that sums up the values of all cells included in the "A1:C3" range of the "Sheet1" worksheet.
// The scope of this name will be limited to the "Sheet1" worksheet.
worksheet1.DefinedNames.Add("Range_Sum", "=SUM(Sheet1!$A$1:$C$3)");
// Create a name for a formula that doubles the value resulting from the "Range_Sum" named formula and
// make this name available within the entire workbook.
workbook.DefinedNames.Add("Range_DoubleSum", "=2*Sheet1!Range_Sum");
// Create formulas that use other formulas with the specified names.
worksheet2.Cells["C2"].Formula = "=Sheet1!Range_Sum";
worksheet2.Cells["C3"].Formula = "=Range_DoubleSum";
worksheet2.Cells["C4"].Formula = "=Range_DoubleSum + 100";

The image below shows how to use named formulas in worksheet cells (the workbook is opened in Microsoft® Excel®).
https://docs.devexpress.com/OfficeFileAPI/14708/spreadsheet-document-api/examples/formulas/how-to-create-named-formulas?v=19.1
2021-06-12T22:46:23
CC-MAIN-2021-25
1623487586465.3
[array(['/OfficeFileAPI/images/spreadsheet_namedformulas18709.png?v=19.1', 'Spreadsheet_NamedFormulas'], dtype=object) ]
docs.devexpress.com
cts:element-attribute-value-query(
  $element-name as xs:QName*,
  $attribute-name as xs:QName*,
  $text as xs:string*,
  [$options as xs:string*],
  [$weight as xs:double?]
) as cts:element-attribute-value-query

Returns a query matching elements by name with attributes by name with text content equal to the given text. When multiple element and/or attribute QNames are specified, all possible element/attribute QName combinations are used to select the matching values.

cts:search(//module,
  cts:element-attribute-value-query(
    xs:QName("function"),
    xs:QName("type"),
    "MarkLogic Corporation"))
=> .. relevance-ordered sequence of 'module' element ancestors (or self) of 'function' elements that have an attribute 'type' whose value equals 'MarkLogic Corporation'.

cts:search(//module,
  cts:and-query((
    cts:element-attribute-value-query(
      xs:QName("function"),
      xs:QName("type"),
      "MarkLogic Corporation", (), 0.5),
    cts:element-word-query(
      xs:QName("title"),
      "faster"))))
=> .. relevance-ordered sequence of 'module' element ancestors (or self) of both: (a) 'function' elements with attribute 'type' whose value equals the string 'MarkLogic Corporation', ignoring embedded punctuation, AND (b) 'title' elements whose text content contains the word 'faster', with the results from (a) given weight 0.5, and the results from (b) given default weight 1.0.
https://docs.marklogic.com/9.0/cts:element-attribute-value-query
2021-06-12T23:09:08
CC-MAIN-2021-25
1623487586465.3
[]
docs.marklogic.com
Build System Support

What is it?

Python packaging has come a long way. The traditional setuptools way of packaging Python modules uses a setup() function within the setup.py script. Commands such as python setup.py bdist or python setup.py bdist_wheel generate a distribution bundle, and python setup.py install installs the distribution. This interface makes it difficult to choose other packaging tools without an overhaul. Because setup.py scripts allowed for arbitrary execution, it proved difficult to provide a reliable user experience across environments and history.

PEP 517 therefore came to the rescue and specified a new standard for packaging and distributing Python modules. Under PEP 517, a pyproject.toml file is used to specify what program to use for generating the distribution.

Start with a package that you want to distribute. You will need your source scripts, a pyproject.toml file and a setup.cfg file:

~/meowpkg/
    pyproject.toml
    setup.cfg
    meowpkg/__init__.py

The pyproject.toml file is required to specify the build system (i.e. what is being used to package your scripts and install from source). To use it with setuptools, the content would be:

[build-system]
requires = ["setuptools", "wheel"]
build-backend = "setuptools.build_meta"

Use setuptools' declarative config to specify the package information:

[metadata]
name = meowpkg
version = 0.0.1
description = a package that meows

[options]
packages = find:

Now generate the distribution. To build the package, use PyPA build:

$ pip install -q build
$ python -m build

And now it's done! The .whl file and .tar.gz can then be distributed and installed:

dist/
    meowpkg-0.0.1.whl
    meowpkg-0.0.1.tar.gz

$ pip install dist/meowpkg-0.0.1.whl

or:

$ pip install dist/meowpkg-0.0.1.tar.gz
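A build can also be driven programmatically through the PEP 517 hooks. The following is a small, hedged sketch using the ProjectBuilder API from the PyPA build package; the project path is a placeholder, and the build requirements must already be importable in the current environment (or you can use build's isolation helpers) -- check the build documentation for the exact interface in your version.

from build import ProjectBuilder  # pip install build

# Placeholder path to the project that contains pyproject.toml / setup.cfg.
builder = ProjectBuilder("/path/to/meowpkg")

# Each call invokes the PEP 517 hooks exposed by setuptools.build_meta.
wheel_path = builder.build("wheel", "dist/")
sdist_path = builder.build("sdist", "dist/")
print(wheel_path, sdist_path)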
https://setuptools.readthedocs.io/en/latest/build_meta.html
2021-06-12T23:52:13
CC-MAIN-2021-25
1623487586465.3
[]
setuptools.readthedocs.io
Deployment - 3 minutes to read This document describes which assemblies are required by applications that use the functionality of the XtraSpreadsheet Suite. Some of the assemblies are essential, while others that provide additional functionality can be optionally deployed, depending on your requirements. If you use other DevExpress components in your application, their use and deployment should comply with the corresponding EULA documents. For more information on licensing and the redistribution policy of DevExpress, refer to Redistribution and Deployment. Required Libraries Below are the essential libraries that are required by applications that use the XtraSpreadsheet Suite. These libraries are considered redistributable under the DevExpress EULA, intended for distribution by you to the end-users of the software applications that you create. Additional Libraries The following libraries provide additional functionality for applications that use the functionality of the XtraSpreadsheet Suite. Non-Redistributable Libraries Distributing any DevExpress design-time libraries ending with “.Design” (for instance, DevExpress.XtraEditors.v18.2.Design.dll), is strictly prohibited. Please consult the EULA for additional information on which libraries, tools and executables are considered redistributable.
https://docs.devexpress.com/WindowsForms/12070/controls-and-libraries/spreadsheet/product-information/deployment?v=18.2
2021-06-13T00:24:23
CC-MAIN-2021-25
1623487586465.3
[]
docs.devexpress.com
Parties store data in local systems of record (Mongo, Oracle, SAP, etc). Components involved in the baseline process are given CRUD access to this and conduct a series of operations to serialize records (including any associated business logic), send those records to counterparties, receive the records, sign them, generate proofs, and store these proofs to a Merkle Tree on the Mainnet. Connectors for various systems can be found here. The first step in baselining is setting up the counterparties that will be involved in a specific Workflow or set of Workflows. This is called the Workgroup. One initiating party will set this up by either: Adding an entry to an existing OrgRegistry smart contract on the Mainnet; Selecting existing entries on a universal OrgRegistry; Creating a new OrgRegistry and adding entries to it. It is possible over time for a single instance of an orgRegistry contract on the Mainnet to become a defacto "phone book" for all baselining counterparties. This would provide a convenient place to look up others and to quickly start private Workflows with them. For this to become a reality, such an orgRegistry would need to include appropriate and effective ways to verify that the entry for any given company is the authentic and correct entry for baselining with that entity. This is an opportunity for engineers and companies to add functionality to the Baseline Protocol. Next, establish point-to-point connectivity with the counterparties in your Workgroup by: Pull their endpoint from the OrgRegistry Send an invitation to connect to the counterparties and receive authorization credentials Now the counterparties are connected securely. A walk-through of this process is here. A Workgroup may run one or more Workflows. Each Workflow consists of one or more Worksteps. Before creating a Workflow, you must first create the business rules involved in it. The simplest Workflow enforces consistency between records in two or more Counterparties' respective databases. More elaborate Workflows may contain rules that govern the state changes from one Workstep to the next. These can be written in zero knowledge circuits, and in a future release, one will be able to send business logic to counterparties without constructing special zk circuits (but allowing the core zk "consistency" circuit to check both code and data). To set up this business logic, use the Baseline Protocol Privacy Package here. Once the business logic is rendered into circuits, deploy the Workflow as follows: First deploy a Node that has the baseline protocol RPC interface implemented. The Nethermind Ethereum Client is the first to implement this code. Alternatively, you can deploy the commit-mgr Ethereum client extension plus a client type of your choice (i.e. Besu, Infura, etc.) Next, use the IBaselineRPC call in the Client to deploy the Shield and Verifier contracts on-chain. This can be found here. Now that the Workgroup and Workflow have been established, counterparties can send each other serialized records, confirm consistency between those records, and enforce business rules on the state changes from Workstep to Workstep. An example of this is in the BRI-1 Reference implementation here. And a walkthrough of an "Alice and Bob" simple case is here and here.
https://docs.baseline-protocol.org/baseline-protocol/baseline-process
2021-06-12T22:26:56
CC-MAIN-2021-25
1623487586465.3
[]
docs.baseline-protocol.org