content: string (length 0 to 557k)
url: string (length 16 to 1.78k)
timestamp: timestamp[ms]
dump: string (length 9 to 15)
segment: string (length 13 to 17)
image_urls: string (length 2 to 55.5k)
netloc: string (length 7 to 77)
{"_id":"591f17b021d2ff0f00cf5a55",73dd709b3f88f0e00dcae1f","githubsync":"","},"project":"547cd7662eaee50800ed1089","__v":0,"parentDoc":null,"updates":[],"next":{"pages":[],"description":""},"createdAt":"2016-06-08T11:28:39.972Z","link_external":false,"link_url":"","sync_unique":"","hidden":false,"api":{"results":{"codes":[]},"settings":"","auth":"required","params":[],"url":""},"isReference":false,"order":4,"body":"After you integrated DevMate Kit, you can add the first version of your application:\n1. Open the Applications tab of your DevMate account and click on your application in the list that appears.\n2. Navigate to the Release Management tab by clicking a corresponding item in the DevMate's left-side pane. \n3. Click the Add New Application Version button. The Add New Version wizard will help you to create a new version of your app. \n\nThere are the following fields available in the Advanced mode:\n[block:parameters]\n{\n \"data\": {\n \"0-0\": \"Status\",\n \"0-1\": \"Draft / Testing / Live\",\n \"0-2\": \"The status that is related to a new version. It is always set to Draft when a new version is added for the first time. \\nPlease Edit newly added Release to change the app's status.\",\n \"h-2\": \"Comments\",\n \"h-1\": \"Choice\",\n \"h-0\": \"Name\",\n \"1-0\": \"Bundle version\",\n \"1-1\": \"CFBundleVersion key from your app\",\n \"2-0\": \"Short Bundle Version\",\n \"2-1\": \"CFBundleShortVersionString key from your app\",\n \"1-2\": \"The key from Xcode and info.plist. It is important because the Updates Framework refers to this key. Requires a \\\"letter\\\" to consider version as a Beta.\",\n \"2-2\": \"The key from Xcode and info.plist. It is important as the Updates Framework refers to this key. Requires a \\\"letter\\\" to consider version as a Beta.\",\n \"3-0\": \"Version Codename\",\n \"3-1\": \"Any code name you want to add to your app\",\n \"3-2\": \"Internal setting that is visible only in Release Management.\",\n \"4-0\": \"Release date\",\n \"4-1\": \"Exact date choice\",\n \"4-2\": \"Defined date of the app release.\",\n \"5-0\": \"OS Limitation\",\n \"5-1\": \"Control lowest OS X version the app is available to\",\n \"5-2\": \"Not required. Can be set to make an app support only specific OS versions, starting from the specified one (minimum required OS X version).\",\n \"6-0\": \"Update Method\",\n \"6-1\": \"Either a regular In-app update or External update\",\n \"6-2\": \"DevMate supports not only regular updates via Update Framework but also External Updates via external link.\",\n \"7-0\": \"Release Files\",\n \"7-1\": \"ZIP, DMG and .dSYM.ZIP\",\n \"7-2\": \"All three files are needed to distribute them to customers and use dSYM for the crashes de-symbolication.\",\n \"8-0\": \"Release Notes\",\n \"8-1\": \"Release Notes of your app\",\n \"8-2\": \"An overview of changes and enhancements made to the application in the current update. Displayed to users of your app.\",\n \"9-0\": \"Add Localization Notes\",\n \"9-1\": \"Localized Release Notes\",\n \"9-2\": \"A way to add different release notes for different localizations of the app.\"\n },\n \"cols\": 3,\n \"rows\": 10\n}\n[/block]\n\n[block:callout]\n{\n \"type\": \"warning\",\n \"title\": \"Why keeping a correct version is important\",\n \"body\": \"While adding a new version, please pay attention to both **Bundle Version** and **Short Bundle Version** strings. 
If they don't coincide with the strings in the app's info.plist, the app may not see the update available in the Release Management.\"\n}\n[/block]\n\n[block:callout]\n{\n \"type\": \"info\",\n \"title\": \"Managing Beta versions\",\n \"body\": \"While creating a beta version of the app, please add letters to emphasise it. Otherwise, the version will be considered regular one. When the letter is added, DevMate shows \\\"ß-letter\\\" next to the app version and its status. It is also shown in the Beta Update Feed while checking for the updates.\\n\\nNote: Make sure you've configured the Updater to check for the Beta Updates. To find out more, read [Updates]() article.\"\n}\n[/block]","excerpt":"","slug":"add-new-application-version","type":"basic","title":"Add a New App Version"}
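The Bundle Version and Short Bundle Version fields above correspond to the CFBundleVersion and CFBundleShortVersionString keys in the app's Info.plist. As a minimal, illustrative sketch (not part of DevMate, and using a hypothetical bundle path), you could read both keys with Python's standard plistlib before uploading a release, to confirm they match what you enter in Release Management:

import plistlib

# Hypothetical path to the built app bundle's Info.plist
plist_path = "build/MyApp.app/Contents/Info.plist"

with open(plist_path, "rb") as f:
    info = plistlib.load(f)

bundle_version = info["CFBundleVersion"]            # e.g. "1432", with a trailing letter for a beta
short_version = info["CFBundleShortVersionString"]  # e.g. "1.4.3"

print("Bundle Version:       ", bundle_version)
print("Short Bundle Version: ", short_version)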
https://docs.devmate.com/docs/add-new-application-version
2019-08-17T11:22:51
CC-MAIN-2019-35
1566027312128.3
[]
docs.devmate.com
Editing Account Permissions Accounts of each type have permissions to perform actions in Partner Central according to their role. In addition, resellers, reseller’s operators, customers, and root operators have permissions to manage other accounts. If your account has the appropriate permissions, you can modify permissions of other accounts. To view and edit permissions of customers and customer’s operators: On the Accounts tab, go to Customer Accounts or Operator Accounts to view customer’s operators. Click the name of the account you want to edit and go to the Permissions tab. Use the search box to find the accounts. To grant or revoke permissions, click Edit and select user roles. When you edit resellers, reseller’s operators, or root operators, you can modify their permissions on other accounts, not only on their own account. Such accounts are called objects. Resellers, reseller’s operators, and root operators have access to their top-level object (account) and subordinate objects (accounts). The permissions that you can grant cannot exceed your own permissions. To view and edit permissions of resellers, reseller’s operators, and root operators: On the Accounts tab, go to Resellers or Operator Accounts to view reseller’s operators and root operators. Click the name of the account you want to edit and go to the Permissions tab. The Permissions tab shows the list of objects, which contains: - The top-level object. This is the selected account (if you view a reseller) or its parent account (if you view an operator). If you view a root operator account, it is the root account (Plesk). - Subordinate accounts for which the default permissions were modified. To modify permissions on a subordinate account that were not modified yet, click Add Permissions. Select the subordinate account (object) in the Object box. Select the permissions you want to grant to the selected account (reseller or operator) on the selected object (top-level or subordinate account). To cancel all modifications to the default permissions: On the Accounts tab, go to Resellers or Operator Accounts to view reseller’s operators and root operators. Click the name of the account you want to edit and go to the Permissions tab. In the list of accounts, select the subordinate object and click Remove. Permissions of the selected account on the top-level account cannot be removed.
https://docs.plesk.com/en-US/obsidian/partner-central-user-guide/user-accounts/editing-accounts-and-permissions/editing-account-permissions.78229/
2019-08-17T11:02:57
CC-MAIN-2019-35
1566027312128.3
[]
docs.plesk.com
The Alerts and Events section shows notifications, alerts, and an audit trail of cluster configuration changes. Alerts: For a full reference of possible alerts, see the Alert code reference. Configuration Events: This system answer displays recent events that changed the configuration of the system. This list can contain the same types of information available on the Admin System Health > Overview page. This answer displays the Time, the User that performed the action, and a Summary of the action. Notification Events: This answer displays notifications of data loads. It displays the Time, the User that performed the action, and a Summary of the action. Notifications are kept for 90 days before being discarded.
https://docs.thoughtspot.com/5.2/admin/system-monitor/alerts-events.html
2019-08-17T11:52:31
CC-MAIN-2019-35
1566027312128.3
[array(['/5.2/images/contro_center_configuration_events.png', 'Partial view of the **System Health** center: Events and Alerts'], dtype=object) ]
docs.thoughtspot.com
How do you define an identifiable abandoned cart? - 1 - A customer visits your site and carts one or more items. - 2 - The customer starts the checkout process, making it as far as entering an email address. Alternatively, an email address is passed using our JavaScript API. - 3 - Your abandonment buffer elapses and the customer has not completed the order.
http://docs.rejoiner.com/article/77-how-do-you-define-an-abandoned-cart
2017-07-20T16:45:02
CC-MAIN-2017-30
1500549423269.5
[]
docs.rejoiner.com
Authentication

NSoT supports two methods of authentication, and both are implemented in the client: auth_token and auth_header. The client default is auth_token, but auth_header is more flexible for “zero touch” use. If sticking with the defaults, you’ll need to retrieve your key from /profile in the web interface. Refer to the Configuration Reference for setting these in your pynsotrc.

Python Client

Assuming your configuration is correct, the CLI interface doesn’t need anything special to make authentication work. The following only applies to retrieving a client instance in Python.

from pynsot.client import AuthTokenClient, EmailHeaderClient, get_api_client

# This is the preferred method, returning the appropriate client according
# to your dotfile if no arguments are supplied.
#
# Alternatively you can override options by passing url, auth_method, and
# other kwargs. See `help(get_api_client)` for more details.
c = get_api_client()

# OR using the client objects directly
email = 'jathan@localhost'
secret_key = 'qONJrNpTX0_9v7H_LN1JlA0u4gdTs4rRMQklmQF9WF4='
url = ''

c = AuthTokenClient(url, email=email, secret_key=secret_key)

# Email Header Client
domain = 'localhost'
auth_header = 'X-NSoT-Email'
c = EmailHeaderClient(
    url,
    email=email,
    default_domain=domain,
    auth_header=auth_header,
)
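Once a client instance is obtained, requests are made through its resource attributes. The snippet below is only a hedged sketch: it assumes the slumber-style resource interface that the pynsot client exposes, and the exact response shape (bare list vs. paginated envelope) depends on the NSoT version.

from pynsot.client import get_api_client

# Resolves url, auth_method, and credentials from your pynsotrc dotfile
c = get_api_client()

# Assumed slumber-style resource access: issues GET <url>/sites/
sites = c.sites.get()

# Response shape varies by NSoT version, so just inspect it here
print(sites)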
http://pynsot.readthedocs.io/en/latest/auth.html
2017-07-20T16:22:25
CC-MAIN-2017-30
1500549423269.5
[]
pynsot.readthedocs.io
Updating a Device Registration The information that you store about a device may change in time. That is why it is recommended to periodically update the device registration. Before you do that, you may want to read the device's object to identify which parameters have changed values. After you have identified them, pass them to Telerik Platform using a PUT request as shown below. Note that you must URL-encode the hardware ID ( encodedDeviceId) before sending it. The registration object is detailed in Table of Registration Object Fields. Use the following RESTful call to update a device registration: Request: PUT<encodedDeviceId> Headers: Content-Type: application/json Body: { "PlatformVersion": "4.0.4", "Parameters.MyIntValue": 2 } Response: Status: 200 OK Content-Type: application/json var deviceInfo = { "PlatformVersion": "4.0.4", "Parameters.MyIntValue": 2 }; var encodedDeviceId = encodeURIComponent(device_id); $.ajax({ type: "PUT", url: '' + encodedDeviceId, contentType: "application/json", data: JSON.stringify(deviceInfo), success: function(data){ alert(JSON.stringify(data)); }, error: function(error){ alert(JSON.stringify(error)); } });
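For comparison, here is a hedged Python sketch of the same device-registration update using the requests library. The base URL is elided in the page above, so BASE_URL below is only a placeholder, and no authentication headers are shown because none appear in the snippet.

import json
from urllib.parse import quote

import requests

BASE_URL = "https://<your-backend-services-endpoint>/push/devices/"  # placeholder, not the real endpoint
device_id = "example-hardware-id"                                    # illustrative value
encoded_device_id = quote(device_id, safe="")                        # URL-encode the hardware ID

device_info = {
    "PlatformVersion": "4.0.4",
    "Parameters.MyIntValue": 2,
}

response = requests.put(
    BASE_URL + encoded_device_id,
    headers={"Content-Type": "application/json"},
    data=json.dumps(device_info),
)
print(response.status_code, response.text)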
http://docs.telerik.com/platform/backend-services/rest/push-notifications/push-update-registration
2017-07-20T16:29:46
CC-MAIN-2017-30
1500549423269.5
[]
docs.telerik.com
General Linux and other guide/troubleshooting basics

Some conventions used in these docs:
- Wherever you see text that is formatted like this, it is a code snippet. You should copy and paste those code snippets instead of attempting to type them out; this will save you debugging time for finding your typos.
- Double check that your copy-paste has copied correctly. Sometimes a paste may drop a character or two and that will cause an error in the command that you are trying to execute. Sometimes, depending on what step you are doing, you may not see the issue. So, do make a point of double checking the paste before pressing return.
- You will see a $ at the beginning of many of the lines of code. This indicates that it is to be entered and executed at the terminal prompt. Do not type in the dollar sign $.
- Wherever there are <bracketed_components> in the code, these are meant for you to insert your own information. Most of the time, it doesn’t matter what you choose as long as you stay consistent throughout this guide. That means if you choose myedison as your <edisonhostname>, you must use myedison every time you see <edisonhostname>. Do not include the < > brackets in your commands when you enter them. So for the example above, if the code snippet says ssh root@<edisonhostname>.local, you would enter ssh root@myedison.local

Before you get started

Some familiarity with using the Terminal app (Mac computers) or PuTTY (Windows computers) will go a long way, but is not required for getting started. Terminal (or PuTTY) is basically a portal into your rig, allowing us to use our computer’s display and keyboard to communicate with the little [Edison or Pi] computer in your rig. The active terminal line will show your current location within the computer’s file structure, where commands will be executed. The line will end with a $ and then have a prompt for you to enter your command. There are many commands that are useful, but some of the commands you’ll get comfortable with are:
- cd means “change directory” - you can cd <directorynamewithnobrackets> to change into a directory; cd .. will take you backward one directory; and cd on its own will take you back to your home directory. If you try to cd into a file, your computer will tell you that’s not going to happen.
- ls means “list” and is also your friend - it will tell you what is inside a directory. If you don’t see what you expect, you likely want to cd .. to back up a level until you can orient yourself. If you aren’t comfortable with what cd and ls do or how to use them, take a look at some of the Linux Shell / Terminal commands on the Troubleshooting page and the reference links on the Technical Resources page.
- cat means “concatenation” - it will show you the contents of a file if you cat <filename>. Very useful when trying to see what you have in preferences or other oref0 files.
- vi and nano are both editing commands. Using those will bring you into files for the purposes of editing their contents. It is like cat except you will be able to edit.
- Within the vi editor, you will need to enter the letter i to begin INSERT mode (and a little INSERT word will be shown at the bottom of the screen once you do that). While in INSERT mode, you will be able to make edits. To exit INSERT mode, you will press esc. To save your changes and quit, you need to exit INSERT mode and then type :wq.
- Within the nano editor, you are automatically in editing mode. You can make your edits, and then to exit and save, you’ll use control-x, then y (to save the edits), and then return to save the edits to the same filename you started with.
- Up and Down arrow keys can scroll you back/forward through the previous commands you’ve entered in the terminal session. Very useful if you don’t want to memorize some of the longer commands. Control-r will let you search for previous commands.

One other helpful thing to do before starting any software work is to log your terminal session. This will allow you to go back and see what you did at a later date. This will also be immensely helpful if you request help from other OpenAPS contributors as you will be able to provide an entire history of the commands you used. To enable this, just run script <filename> at the beginning of your session. It will inform you that Script started, file is <filename>. When you are done, simply exit and it will announce Script done, file is <filename>. At that point, you can review the file as necessary.

ls <myopenaps> will show the following files and subdirectories contained within the directory: - autotune - cgm - cgm.ini - detect-sensitivity.ini - determine-basal.ini - enact - get-profile.ini - iob.ini - meal.ini - mmtune_old.json - monitor - ns-glucose.ini - ns.ini - openaps.ini - oref0.ini - oref0-runagain.sh - pebble.ini - preferences.json - pump.ini - pump-session.json - raw-cgm - settings - tz.ini - units.ini - upload - xdrip.ini

ls settings will show the contents of the settings subdirectory; the files which collect longer-term loop data. - autosens.json - autotune.json - basal_profile.json - bg_targets.json - bg_targets_raw.json - carb_ratios.json - insulin_sensitivities.json - insulin_sensitivities_raw.json - model.json - profile.json - pumphistory-24h.json - pumphistory-24h-zoned.json - pumpprofile.json - settings.json - temptargets.json

ls monitor will show the contents of the monitor subdirectory; current data going on right now in your loop. - battery.json - carbhistory.json - clock.json - clock-zoned.json - edison-battery.json - glucose.json - iob.json - meal.json - meal.json.new - mmtune.json - pumphistory.json - pumphistory-zoned.json - reservoir.json - status.json - temp_basal.json

ls enact will show the contents of the enact subdirectory; loop’s suggested and enacted temp basals and changes. - enacted.json - suggested.json
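Since the loop’s state lives in the JSON files listed above, a quick way to peek at one of them without opening an editor is a few lines of Python. This is only an illustrative sketch: it assumes you run it from inside your openaps directory and that monitor/glucose.json holds a list of CGM entries with sgv and dateString fields, which may differ on your rig.

import json

# Assumes the current directory is your openaps directory (e.g. ~/myopenaps)
with open("monitor/glucose.json") as f:
    entries = json.load(f)

if entries:
    latest = entries[0]  # the most recent entry is typically first
    # Field names (sgv, dateString) are assumptions about the CGM record format
    print("Latest BG:", latest.get("sgv"), "at", latest.get("dateString"))
else:
    print("No glucose entries found")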
http://openaps.readthedocs.io/en/latest/docs/Troubleshooting/General_linux_troubleshooting.html
2017-07-20T16:30:00
CC-MAIN-2017-30
1500549423269.5
[]
openaps.readthedocs.io
Reference - SecurityGroupDefinition. API support: CSOM, SSOM. Can be deployed under: Site. Notes: Security group provisioning is enabled via the SecurityGroupDefinition object. Provisioning checks whether the object exists by looking it up by its Name property, and then creates a new object if it does not exist. You can deploy either a single object or a set of objects using the AddSecurityGroup() extension method, as in the following examples. In some cases we need to refer to built-in SharePoint security groups, such as associated members, owners and visitors. That can be done with the IsAssociatedMemberGroup, IsAssociatedOwnerGroup and IsAssociatedVisitorsGroup properties. Once you define such a group, provisioning does not do anything with it but uses these flags, passing them into SecurityGroupLinkDefinition while linking a security group with a SharePoint web, list, item, folder or other securable object. Check SecurityGroupLinkDefinition for more samples on how to use the IsAssociatedMemberGroup, IsAssociatedOwnerGroup and IsAssociatedVisitorsGroup properties. Examples

var auditors = new SecurityGroupDefinition
{
    Name = "External Auditors",
    Description = "External auditors group."
};

var reviewers = new SecurityGroupDefinition
{
    Name = "External Reviewers",
    Description = "External reviewers group."
};

var model = SPMeta2Model.NewSiteModel(site =>
{
    site
        .AddSecurityGroup(auditors)
        .AddSecurityGroup(reviewers);
});

DeployModel(model);

var model = SPMeta2Model.NewSiteModel(site =>
{
    site
        .AddSecurityGroup(DocSecurityGroups.ClientManagers)
        .AddSecurityGroup(DocSecurityGroups.ClientSupport)
        .AddSecurityGroup(DocSecurityGroups.Interns)
        .AddSecurityGroup(DocSecurityGroups.OrderApprovers);
});

DeployModel(model);
http://docs.subpointsolutions.com/spmeta2/reference/sp-foundation-definitions/securitygroupdefinition?
2017-07-20T16:38:17
CC-MAIN-2017-30
1500549423269.5
[]
docs.subpointsolutions.com
Change your ring tone, notifiers, reminders or alerts: In any sound profile, you can change your ring tone. In the Ring Tone, Notifier Tone or Reminder Tone field, select the tone you want to use, then press the Menu key > Save.
http://docs.blackberry.com/en/smartphone_users/deliverables/37644/Change_your_ring_tone_61_1440842_11.jsp
2015-02-01T07:12:04
CC-MAIN-2015-06
1422115855897.0
[]
docs.blackberry.com
The Server: This part shows how to build a light demo server based on XFire (it is not yet so Groovy). - Create a service contract (a Java interface) - Create an implementation of this service - Register your service - Start the server - You're done! The Client: It's pretty easy to make the remote calls too - Import the SoapClient class - Create a proxy object to represent the remote server - Call the remote method via the proxy - You are done
http://docs.codehaus.org/pages/viewpage.action?pageId=49346
2015-02-01T07:27:11
CC-MAIN-2015-06
1422115855897.0
[]
docs.codehaus.org
Attributes, too, follow these two principles, but since we have different constructs, we lay out these principles a bit differently. In a script it is possible, though not easy for Java programmers, to not define a variable before using it; such a variable then goes into the binding. Any defined variable is local. Please note: the binding exists only for scripts. What is this "def" I heard of? The assignment of a number, such as 2, to a String-typed variable will fail. A variable typed with "def" allows this. A Closure is a block; in terms of the principles above, a closure is a block. A variable defined in a block is visible in that block and in all blocks that are defined in that block. For example, two closures may each define a local variable named "parameter", but since these closures are not nested, this is allowed. Note: unlike early versions of Groovy and unlike PHP, a variable … The keyword "static": "static" is a modifier for attributes. It defines the "static scope"; that means all variables not defined with "static" are not part of that static scope and as such are not visible there. There is no special magic to this in Groovy, so for an explanation of "static" use any Java book.
http://docs.codehaus.org/pages/viewpage.action?pageId=58033
2015-02-01T07:28:13
CC-MAIN-2015-06
1422115855897.0
[]
docs.codehaus.org
. 3) "Group Limit" option allows to set after how many events under the same rule log message should be sent as a single aggregated message instead of separate e-mails per each log message. 4) "Details Limit" option allows to set amount of characters for grouped messages. 5) New possibility to configure notification separately for Radius: Fraud Protection by tags. Screenshot: Watch Rule pop-up window settings For more detailed information about Events Log functionality, visit our User Guide.: A-Z import mode (that is used to close active or future rates) could be applied for one code. Screenshot: Import tab settings New "Taxes" table is available for usage in invoice templates. Therefore, taxes could be displayed separately in invoices. This table contains the next variables:".
https://docs.jerasoft.net/pages/diffpagesbyversion.action?pageId=9439864&selectedPageVersions=75&selectedPageVersions=74
2021-06-12T20:42:25
CC-MAIN-2021-25
1623487586390.4
[]
docs.jerasoft.net
Hypernetes: Bringing Security and Multi-tenancy to Kubernetes Today’s guest post is written by Harry Zhang and Pengfei Ni, engineers at HyperHQ, describing a new hypervisor-based container called HyperContainer. While many developers and security professionals are comfortable with Linux containers as an effective boundary, many users need a stronger degree of isolation, particularly for those running in a multi-tenant environment. Sadly, today, those users are forced to run their containers inside virtual machines, even one VM per container. Unfortunately, this results in the loss of many of the benefits of a cloud-native deployment: slow startup time of VMs; a memory tax for every container; low utilization resulting in wasted resources. In this post, we will introduce HyperContainer, a hypervisor-based container, and see how it naturally fits into the Kubernetes design, and enables users to serve their customers directly with virtualized containers, instead of wrapping them inside full-blown VMs. HyperContainer HyperContainer is a hypervisor-based container, which allows you to launch Docker images with standard hypervisors (KVM, Xen, etc.). As an open-source project, HyperContainer consists of an OCI compatible runtime implementation, named runV, and a management daemon named hyperd. The idea behind HyperContainer is quite straightforward: to combine the best of both virtualization and container. We can consider containers as two parts (as Kubernetes does). The first part is the container runtime, where HyperContainer uses virtualization to achieve execution isolation and resource limitation instead of namespaces and cgroups. The second part is the application data, where HyperContainer leverages Docker images. So in HyperContainer, virtualization technology makes it possible to build a fully isolated sandbox with an independent guest kernel (so things like top and /proc all work), but from the developer’s view, it’s portable and behaves like a standard container. HyperContainer as Pod The interesting part of HyperContainer is not only that it is secure enough for multi-tenant environments (such as a public cloud), but also how well it fits into the Kubernetes philosophy. One of the most important concepts in Kubernetes is Pods. The design of Pods is a lesson learned (Borg paper section 8.1) from real world workloads, where in many cases people want an atomic scheduling unit composed of multiple containers (please check this example for further information). In the context of Linux containers, a Pod wraps and encapsulates several containers into a logical group. But in HyperContainer, the hypervisor serves as a natural boundary, and Pods are introduced as first-class objects: HyperContainer wraps a Pod of light-weight application containers and exposes the container interface at Pod level. Inside the Pod, a minimalist Linux kernel called HyperKernel is booted. This HyperKernel is built with a tiny Init service called HyperStart. It acts as the PID 1 process, creates the Pod, sets up the mount namespace, and launches apps from the loaded images. This model works nicely with Kubernetes. The integration of HyperContainer with Kubernetes, as we indicated in the title, is what makes up the Hypernetes project.
Hypernetes One of the best parts of Kubernetes is that it is designed to support multiple container runtimes, meaning users are not locked in to a single vendor. We are very pleased to announce that we have already begun working with the Kubernetes team to integrate HyperContainer into Kubernetes upstream. This integration involves: - container runtime optimizing and refactoring - new client-server mode runtime interface - containerd integration to support runV The OCI standard and kubelet’s multiple runtime architecture make this integration much easier even though HyperContainer is not based on the Linux container technology stack. On the other hand, in order to run HyperContainers in a multi-tenant environment, we also created a new network plugin and modified an existing volume plugin. Since Hypernetes runs Pods as their own VMs, it can make use of your existing IaaS layer technologies for multi-tenant network and persistent volumes. The current Hypernetes implementation uses standard OpenStack components. Below we go into further details about how all of the above are implemented. Identity and Authentication In Hypernetes we chose Keystone to manage different tenants and perform identification and authentication for tenants during any administrative operation. Since Keystone comes from the OpenStack ecosystem, it works seamlessly with the network and storage plugins we used in Hypernetes. Multi-tenant Network Model For a multi-tenant container cluster, each tenant needs to have strong network isolation from each other tenant. In Hypernetes, each tenant has its own Network. Instead of configuring a new network using OpenStack, which is complex, with Hypernetes, you just create a Network object like below.

apiVersion: v1
kind: Network
metadata:
  name: net1
spec:
  tenantID: 065f210a2ca9442aad898ab129426350
  subnets:
    subnet1:
      cidr: 192.168.0.0/24
      gateway: 192.168.0.1

Note that the tenantID is supplied by Keystone. This yaml will automatically create a new Neutron network with a default router and a subnet 192.168.0.0/24. A Network controller will be responsible for the life-cycle management of any Network instance created by the user. This Network can be assigned to one or more Namespaces, and any Pods belonging to the same Network can reach each other directly through IP address.

apiVersion: v1
kind: Namespace
metadata:
  name: ns1
spec:
  network: net1

If a Namespace does not have a Network spec, it will use the default Kubernetes network model instead, including the default kube-proxy. So if a user creates a Pod in a Namespace with an associated Network, Hypernetes will follow the Kubernetes Network Plugin Model to set up a Neutron network for this Pod. Hypernetes uses a standalone gRPC handler named kubestack to translate the Kubernetes Pod request into the Neutron network API. Moreover, kubestack is also responsible for handling another important networking feature: a multi-tenant Service proxy. In a multi-tenant environment, the default iptables-based kube-proxy cannot reach the individual Pods, because they are isolated into different networks. Instead, Hypernetes uses a built-in HAproxy in every HyperContainer as the portal. This HAproxy will proxy all the Service instances in the namespace of that Pod. Kube-proxy will be responsible for updating these backend servers by following the standard OnServiceUpdate and OnEndpointsUpdate processes, so that users will not notice any difference.
A downside of this method is that HAproxy has to listen to some specific ports which may conflicts with user’s containers.That’s why we are planning to use LVS to replace this proxy in the next release. With the help of the Neutron based network plugin, the Hypernetes Service is able to provide an OpenStack load balancer, just like how the “external” load balancer does on GCE. When user creates a Service with external IPs, an OpenStack load balancer will be created and endpoints will be automatically updated through the kubestack workflow above. Persistent Storage When considering storage, we are actually building a tenant-aware persistent volume in Kubernetes. The reason we decided not to use existing Cinder volume plugin of Kubernetes is that its model does not work in the virtualization case. Specifically: The Cinder volume plugin requires OpenStack as the Kubernetes provider. The OpenStack provider will find on which VM the target Pod is running on Cinder volume plugin will mount a Cinder volume to a path inside the host VM of Kubernetes. The kubelet will bind mount this path as a volume into containers of target Pod. But in Hypernetes, things become much simpler. Thanks to the physical boundary of Pods, HyperContainer can mount Cinder volumes directly as block devices into Pods, just like a normal VM. This mechanism eliminates extra time to query Nova to find out the VM of target Pod in the existing Cinder volume workflow listed above. The current implementation of the Cinder plugin in Hypernetes is based on Ceph RBD backend, and it works the same as all other Kubernetes volume plugins, one just needs to remember to create the Cinder volume (referenced by volumeID below) beforehand. apiVersion: v1 kind: Pod metadata: name: nginx labels: app: nginx spec: containers: - name: nginx image: nginx ports: - containerPort: 80 volumeMounts: - name: nginx-persistent-storage mountPath: /var/lib/nginx volumes: - name: nginx-persistent-storage cinder: volumeID: 651b2a7b-683e-47e1-bdd6-e3c62e8f91c0 fsType: ext4 So when the user provides a Pod yaml with a Cinder volume, Hypernetes will check if kubelet is using the Hyper container runtime. If so, the Cinder volume can be mounted directly to the Pod without any extra path mapping. Then the volume metadata will be passed to the Kubelet RunPod process as part of HyperContainer spec. Done! Thanks to the plugin model of Kubernetes network and volume, we can easily build our own solutions above for HyperContainer though it is essentially different from the traditional Linux container. We also plan to propose these solutions to Kubernetes upstream by following the CNI model and volume plugin standard after the runtime integration is completed. We believe all of these open source projects are important components of the container ecosystem, and their growth depends greatly on the open source spirit and technical vision of the Kubernetes team. Conclusion This post introduces some of the technical details about HyperContainer and the Hypernetes project. We hope that people will be interested in this new category of secure container and its integration with Kubernetes. If you are looking to try out Hypernetes and HyperContainer, we have just announced the public beta of our new secure container cloud service (Hyper_), which is built on these technologies. But even if you are running on-premise, we believe that Hypernetes and HyperContainer will let you run Kubernetes in a more secure way. ~Harry Zhang and Pengfei Ni, engineers at HyperHQ
https://v1-19.docs.kubernetes.io/blog/2016/05/hypernetes-security-and-multi-tenancy-in-kubernetes/
2021-06-12T20:33:09
CC-MAIN-2021-25
1623487586390.4
[]
v1-19.docs.kubernetes.io
First Look: InterSystems IRIS Native API for Java This First Look explains how to access InterSystems IRIS® data platform globals from a Java application using the InterSystems IRIS Native functionality. See also InterSystems First Looks. Introduction to Globals: Globals provide … Why is IRIS Native Important? The IRIS Native feature takes advantage of the JDBC connection to expose core ObjectScript functionality in Java applications. Importantly, since IRIS Native uses the same connection as JDBC, InterSystems IRIS data is exposed to your Java application as both relational tables through JDBC, and as globals through IRIS Native. InterSystems IRIS provides a unique set of capabilities to use the same physical connection and transaction context to manipulate data using multiple paradigms: native, relational, and object-oriented. Exploring IRIS Native The following brief demo shows you how to work with IRIS Native in a Java application. (Want to try an online video-based demo of InterSystems IRIS Java development and interoperability features? Check out the Java QuickStart!) Connect your IDE to your InterSystems IRIS instance using the information in InterSystems IRIS Connection Information and Java IDEs in the same document. You will also need to add the InterSystems IRIS JDBC driver, intersystems-jdbc-3.0.0.jar, to your local CLASSPATH. You can download this file, or, if you have installed InterSystems IRIS on your local machine or another you have access to, you can find it in install-dir\dev\java\lib\JDK18, where install-dir is the InterSystems IRIS installation directory. Connecting to InterSystems IRIS The InterSystems IRIS connection string syntax is: jdbc:IRIS://host_IP:superserverPort/namespace,username,password where the variables represent the connection settings described for your instance in InterSystems IRIS Basics: Connecting an IDE. (This is the same information you used to connect your IDE to the instance.) Set namespace to the predefined namespace USER, as shown in the code that follows, or to another namespace you have created in your installed instance (as long as you update the code). If you are connecting to an instance on the local Windows machine (using either the hostname localhost or the IP address 127.0.0.1), the connection automatically uses a special, high-performance local connection called a shared memory connection, which offers even better performance for IRIS Native. For more information, see First Look: JDBC and InterSystems IRIS. Using IRIS Native At this point, you are ready to experiment with IRIS Native. In your connected IDE (see Before You Begin), create a new Java project named IRISNative and paste in the following code. Make sure to edit the superserverPort, username, namespace, and password variables to reflect the correct values for your instance.
import java.sql.DriverManager; import com.intersystems.jdbc.IRISConnection; import com.intersystems.jdbc.IRIS; import com.intersystems.jdbc.IRISIterator; public class IRISNative { protected static int superserverPort = 00000; // YOUR PORT HERE protected static String namespace = "USER"; protected static String username = "_SYSTEM"; protected static String password = "SYS"; public static void main(String[] args) { try { // open connection to InterSystems IRIS instance using connection string IRISConnection conn = (IRISConnection) DriverManager.getConnection ("jdbc:IRIS://localhost:"+superserverPort+"/"+namespace,username,password); // create IRIS Native object IRIS iris = IRIS.createIRIS(conn); System.out.println("[1. Setting and getting a global]"); // setting and getting a global // ObjectScript equivalent: set ^testglobal("1") = 8888 iris.set(8888,"^testglobal","1"); // ObjectScript equivalent: set globalValue = $get(^testglobal("1")) Integer globalValue = iris.getInteger("^testglobal","1"); System.out.println("The value of ^testglobal(1) is " + globalValue); System.out.println(); System.out.println("[2. Iterating over a global]"); // modify global to iterate over // ObjectScript equivalent: set ^testglobal("1") = 8888 // ObjectScript equivalent: set ^testglobal("2") = 9999 iris.set(8888,"^testglobal","1"); iris.set(9999,"^testglobal","2"); // iterate over all nodes forwards IRISIterator subscriptIter = iris.getIRISIterator("^testglobal"); System.out.println("walk forwards"); while (subscriptIter.hasNext()) { String subscript = subscriptIter.next(); System.out.println("subscript="+subscript+", value="+subscriptIter.getValue()); } System.out.println(); System.out.println("[3. Calling a class method]"); // calling a class method // ObjectScript equivalent: set returnValue = ##class(%Library.Utility).Date(5) String returnValue = iris.classMethodString("%Library.Utility","Date",5); System.out.println(returnValue); System.out.println(); // close connection and IRIS object iris.close(); conn.close(); } catch (Exception ex) { System.out.println(ex.getMessage()); } } } Java application using IRIS Native. If the example executes successfully, you should see Opens in a new window: First Look: JDBC and InterSystems IRIS Using the Native API for Java
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=AFL_DBNATIVE
2021-06-12T20:21:08
CC-MAIN-2021-25
1623487586390.4
[]
docs.intersystems.com
Crate roulette A Rust implementation of roulette wheel selection using the Alias Method. This can be used to simulate a loaded die and similar situations. Initialization takes O(n) time; choosing a random element takes O(1) time. This is far faster than naive algorithms (the most common of which is commonly known as 'roulette wheel selection'). For an in-depth explanation of the algorithm, see. This code was translated from. Example

extern crate rand;
extern crate roulette;

use roulette::Roulette;

fn main() {
    let mut rng = rand::thread_rng();
    let roulette = Roulette::new(vec![('a', 1.0), ('b', 1.0), ('c', 0.5), ('d', 0.0)]);
    for _ in 0..10 {
        let rand = roulette.sample(&mut rng);
        println!("{}", rand);
    }
}
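For intuition about what the crate does under the hood, here is a rough Python sketch of the alias method (Vose's variant). It is not the roulette crate's implementation, just an illustration of why initialization is O(n) and sampling is O(1).

import random

def build_alias(weights):
    """Build probability and alias tables from a list of non-negative weights."""
    n = len(weights)
    total = sum(weights)
    prob = [w * n / total for w in weights]  # scale so the average entry is 1.0
    alias = [0] * n
    small = [i for i, p in enumerate(prob) if p < 1.0]
    large = [i for i, p in enumerate(prob) if p >= 1.0]
    while small and large:
        s, l = small.pop(), large.pop()
        alias[s] = l                      # column s is topped up by outcome l
        prob[l] -= 1.0 - prob[s]          # the large entry gives away its surplus
        (small if prob[l] < 1.0 else large).append(l)
    while large:                          # leftovers are exactly 1.0 up to rounding
        prob[large.pop()] = 1.0
    while small:
        prob[small.pop()] = 1.0
    return prob, alias

def sample(prob, alias, rng=random):
    """O(1): pick a column uniformly, then choose the column or its alias."""
    i = rng.randrange(len(prob))
    return i if rng.random() < prob[i] else alias[i]

prob, alias = build_alias([1.0, 1.0, 0.5, 0.0])  # mirrors the weights in the Rust example
print([sample(prob, alias) for _ in range(10)])  # indices 0..2; index 3 never appears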
https://docs.rs/roulette/0.3.0/roulette/
2021-06-12T20:13:41
CC-MAIN-2021-25
1623487586390.4
[]
docs.rs
Kubespray runs on most popular Linux distributions: - Ubuntu 16.04, 18.04, 20.04 - CentOS/RHEL/Oracle Linux 7, 8 - Debian Buster, Jessie, Stretch, Wheezy - Fedora 31, 32 - Fedora CoreOS - openSUSE Leap 15 - Flatcar Container Linux by Kinvolk, and it provides continuous integration tests. To choose a tool which best fits your use case, read this comparison to kubeadm and kops. Creating a cluster (1/5) Meet the underlay requirements Provision servers with the following requirements: - Ansible v2.9 and python-netaddr are installed on the machine that will run Ansible commands - Jinja 2.11 (2/5) Compose an inventory file After you provision your servers, create an inventory file for Ansible. You can do this manually or via a dynamic inventory script. For more information, see "Building your own inventory". (3/5) Plan your cluster deployment
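A quick way to confirm that the Ansible control machine meets the version requirements listed above is a short Python check. This is illustrative only, not part of Kubespray; it assumes the tools were installed as pip packages so their versions are visible to importlib.metadata.

from importlib.metadata import PackageNotFoundError, version

# Minimum versions taken from the requirements above; None means "just installed"
requirements = {"ansible": "2.9", "netaddr": None, "jinja2": "2.11"}

for package, minimum in requirements.items():
    try:
        installed = version(package)
    except PackageNotFoundError:
        print(f"{package}: NOT INSTALLED")
        continue
    note = f" (need >= {minimum})" if minimum else ""
    print(f"{package}: {installed}{note}")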
https://v1-20.docs.kubernetes.io/docs/setup/production-environment/tools/kubespray/
2021-06-12T20:47:19
CC-MAIN-2021-25
1623487586390.4
[]
v1-20.docs.kubernetes.io
Querying the Database This chapter discusses how to query data on InterSystems IRIS® data platform. It includes information on the following topics: Defining and Executing Named Queries Queries Invoking User-defined Functions Querying Serial Object Properties Querying Collection Properties Queries Invoking Free-text Search Pseudo-Field Variables: %ID, %TABLENAME, %CLASSNAME Terminating a Running Query Queries and Enterprise Cache Protocol (ECP) Types of Queries A query is a statement which performs data retrieval and generates a result set. A query can consist of any of the following: A simple SELECT statement that accesses the data in a specified table or view. A SELECT statement with JOIN syntax that accesses the data from several tables or views. A UNION statement that combines the results of multiple SELECT statements. A subquery that uses a SELECT statement to supply a single data item to an enclosing SELECT query. In Embedded SQL, a SELECT statement that uses an SQL cursor to access multiple rows of data using a FETCH statement. Using a SELECT Statement A SELECT statement selects one or more rows of data from one or more tables or views. A simple SELECT is shown in the following example: SELECT Name,DOB FROM Sample.Person WHERE Name %STARTSWITH 'A' ORDER BY DOB In this example, Name and DOB are columns (data fields) in the Sample.Person table. The order that clauses must be specified in a SELECT statement is: SELECT DISTINCT TOP ... select-items INTO ... FROM ... WHERE ... GROUP BY ... HAVING ... ORDER BY. This is the command syntax order. All of these clauses are optional, except SELECT select-items. (The optional FROM clause is required to perform any operations on stored data, and therefore is almost always required in a query.) Refer to the SELECT statement syntax for details on the required order for specifying SELECT clauses. SELECT Clause Order of Execution The operation of a SELECT statement can be understood by noting its semantic processing order (which is not the same as the SELECT syntax order). The clauses of a SELECT are processed in the following order: FROM clause — specifies a table, a view, multiple tables or views using JOIN syntax, or a subquery. WHERE clause — restricts what data is selected using various criteria. GROUP BY clause — organizes the selected data into subsets with matching values; only one record is returned for each value. HAVING clause — restricts what data is selected from groups using various criteria. select-item — selects a data fields from the specified table or view. A select-item can also be an expression which may or may not reference a specific data field. DISTINCT clause — applied to the SELECT result set, it limits the rows returned to those that contain a distinct (non-duplicate) value. ORDER BY clause — applied to the SELECT result set, it sorts the rows returned in collation order by the specified field(s). This semantic order shows that a table alias (which is defined in the FROM clause) can be recognized by all clauses, but a column alias (which is defined in the SELECT select-items) can only be recognized by the ORDER BY clause. To use a column alias in other SELECT clauses you can use a subquery, as shown in the following example: SELECT Interns FROM (SELECT Name AS Interns FROM Sample.Employee WHERE Age<21) WHERE Interns %STARTSWITH 'A' In this example, Name and Age are columns (data fields) in the Sample.Person table, and Interns is a column alias for Name. 
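As a hedged illustration of running the subquery example above from client code (not InterSystems-specific guidance), the same statement can be executed over ODBC with pyodbc; the DSN name below is a placeholder for whatever data source you have configured for your instance.

import pyodbc

# "IRIS_DSN" is a placeholder ODBC data source name
conn = pyodbc.connect("DSN=IRIS_DSN", autocommit=True)
cursor = conn.cursor()

# The subquery exposes the column alias "Interns" to the outer WHERE clause
cursor.execute(
    "SELECT Interns FROM "
    "(SELECT Name AS Interns FROM Sample.Employee WHERE Age < 21) "
    "WHERE Interns %STARTSWITH 'A'"
)
for (name,) in cursor.fetchall():
    print(name)

conn.close()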
Selecting Fields When you issue a SELECT, InterSystems SQL attempts to match each specified select-item field name to a property defined in the class corresponding to the specified table. Each class property has both a property name and a SqlFieldName. If you defined the table using SQL, the field name specified in the CREATE TABLE command is the SqlFieldName, and InterSystems IRIS generated the property name from the SqlFieldName. Field names, class property names, and SqlFieldName names have different naming conventions: Field names in a SELECT statement are not case-sensitive. SqlFieldName names and property names are case-sensitive. Field names in a SELECT statement and SqlFieldName names can contain certain non-alphanumeric characters following identifier naming conventions. Property names can only contain alphanumeric characters. When generating a property name, InterSystems IRIS strips out non-alphanumeric characters. InterSystems IRIS may have to append a character to create a unique property name. The translation between these three names for a field determine several aspects of query behavior. You can specify a select-item field name using any combination of letter case and InterSystems SQL will identify the appropriate corresponding property. The data column header name in the result set display is the SqlFieldName, not the field name specified in the select-item. This is why the letter case of the data column header may differ from the select-item field name. You can specify a column alias for a select-item field. A column alias can be in any mix of letter case, and can contain non-alphanumeric characters, following identifier naming conventions. A column alias can be referenced using any combination of letter case (for example, in the ORDER BY clause) and InterSystems SQL resolves to the letter case specified in the select-item field. InterSystems IRIS always attempts to match to the list of column aliases before attempting to match to the list of properties corresponding to defined fields. If you have defined a column alias, the data column header name in the result set display is the column alias in the specified letter case, not the SqlFieldName. When a SELECT query completes successfully, InterSystems SQL generates a result set class for that query. The result set class contains a property corresponding to each selected field. If a SELECT query contains duplicate field names, the system generates unique property names for each instance of the field in the query by appending a character. For this reason, you cannot include more than 36 instances of the same field in a query. The generated result set class for a query also contains properties for column aliases. To avoid the performance cost of letter case resolution, you should use the same letter case when referencing a column alias as the letter case used when specifying the column alias in the SELECT statement. In addition to user-specified column aliases, InterSystems SQL also automatically generates up to three aliases for each field name, aliases which correspond to common letter case variants of the field name. These generated aliases are invisible to the user. They are provided for performance reasons, because accessing a property through an alias is faster than resolving letter case through letter case translation. For example, if SELECT specifies FAMILYNAME and the corresponding property is familyname, InterSystems SQL resolves letter case using a generated alias (FAMILYNAME AS familyname). 
However, if SELECT specifies fAmILyNaMe and the corresponding property is familyname, InterSystems SQL must resolves letter case using the slower letter case translation process. A select-item item can also be an expression, an aggregate function, a subquery, a user-defined function, as asterisk, or some other value. For further details on select-item items other than field names, refer to The select-item section of the SELECT command reference page. The JOIN Operation A JOIN provides a way to link data in one table with data in another table and are frequently used in defining reports and queries. Within SQL, a JOIN is an operation that combines data from two tables to produce a third, subject to a restrictive condition. Every row of the resulting table must satisfy the restrictive condition. InterSystems SQL supports five types of joins (some with multiple syntactic forms): CROSS JOIN, INNER JOIN, LEFT OUTER JOIN, RIGHT OUTER JOIN, and FULL OUTER JOIN. Outer joins support the ON clause with a full range of conditional expression predicates and logical operators. There is partial support for NATURAL outer joins and outer joins with a USING clause. For definitions of these join types and further details, refer to the JOIN page in the InterSystems SQL Reference. If a query contains a join, all of the field references within that query must have an appended table alias. Because InterSystems IRIS does not include the table alias in the data column header name, you may wish to provide column aliases for select-item fields to clarify which table is the source of the data. The following example uses a join operation to match the “fake” (randomly-assigned) zip codes in Sample.Person with the real zip codes and city names in Sample.USZipCode. A WHERE clause is provided because USZipCode does not include all possible 5-digit zip codes: SELECT P.Home_City,P.Home_Zip AS FakeZip,Z.ZipCode,Z.City AS ZipCity,Z.State FROM Sample.Person AS P LEFT OUTER JOIN Sample.USZipCode AS Z ON P.Home_Zip=Z.ZipCode WHERE Z.ZipCode IS NOT NULL ORDER BY P.Home_City Queries Selecting Large Numbers of Fields A query cannot select more than 1,000 select-item fields. A query selecting more than 150 select-item fields may have the following performance consideration. InterSystems IRIS automatically generates result set column aliases. These generated aliases are provided for field names without user-defined aliases to enable rapid resolution of letter case variations. Letter case resolution using an alias is significantly faster than letter case resolution by letter case translation. However, the number of generated result set column aliases is limited to 500. Because commonly InterSystems IRIS generates three of these aliases (for the three most common letter case variations) for each field, the system generates aliases for roughly the first 150 specified fields in the query. Therefore, a query referencing less than 150 fields commonly has better result set performance than a query referencing significantly more fields. This performance issue can be avoided by specifying an exact column alias for each field select-item in a very large query (for example, SELECT FamilyName AS FamilyName) and then making sure that you use the same letter case when referencing the result set item by column alias. Defining and Executing Named Queries You can define and execute a named query as follows: Define the query using CREATE QUERY. This query is defined as a stored procedure, and can be executed using CALL. 
Define a class query (a query defined in a class definition). A class query is projected as a stored procedure. It can be executed using CALL. A class query can also be prepared using the %SQL.Statement Opens in a new window %PrepareClassQuery() method, and then executed using the %Execute() method. See “Using Dynamic SQL”. CREATE QUERY and CALL You can define a query using CREATE QUERY, and then execute it by name using CALL. In the following example, the first is an SQL program that defines the query AgeQuery, the second is Dynamic SQL that executes the query: CREATE QUERY Sample.AgeQuery(IN topnum INT DEFAULT 10,IN minage INT 20) PROCEDURE BEGIN SELECT TOP :topnum Name,Age FROM Sample.Person WHERE Age > :minage ORDER BY Age ; END SET mycall = "CALL Sample.AgeQuery(11,65)" SET tStatement = ##class(%SQL.Statement).%New() SET qStatus = tStatement.%Prepare(mycall) IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT} SET rset = tStatement.%Execute() DO rset.%Display() DROP QUERY Sample.AgeQuery Class Queries You can define a query in a class. The class may be a %Persistent class, but does not have to be. This class query can reference data defined in the same class, or in another class in the same namespace. The tables, fields, and other data entities referred to in a class query must exist when the class that contains the query is compiled. A class query is not compiled when the class that contains it is compiled. Instead, compilation of a class query occurs upon the first execution of the SQL code (runtime). This occurs when the query is prepared in Dynamic SQL using the %PrepareClassQuery() method. First execution defines an executable cached query. The following class definition example defines a class query: Class Sample.QClass Extends %Persistent [DdlAllowed] { Query MyQ(Myval As %String) As %SQLQuery (CONTAINID=1,ROWSPEC="Name,Home_State") [SqlProc] { SELECT Name,Home_State FROM Sample.Person WHERE Home_State = :Myval ORDER BY Name } } The following example executes the MyQ query defined in the Sample.QClass in the previous example: SET Myval="NY" SET stmt=##class(%SQL.Statement).%New() SET status = stmt.%PrepareClassQuery("Sample.QClass","MyQ") IF status'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(status) QUIT} SET rset = stmt.%Execute(Myval) DO rset.%Display() WRITE !,"End of data" The following Dynamic SQL example uses %SQL.Statement Opens in a new window to execute the ByName query defined in the Sample.Person class, passing a string to limit the names returned to those that start with that string value: SET statemt=##class(%SQL.Statement).%New() SET cqStatus=statemt.%PrepareClassQuery("Sample.Person","ByName") IF cqStatus'=1 {WRITE "%PrepareClassQuery failed:" DO $System.Status.DisplayError(cqStatus) QUIT} SET rs=statemt.%Execute("L") DO rs.%Display() For further details, refer to “Defining and Using Class Queries” in Defining and Using Classes. For information on query names automatically assigned to executed queries, refer to the Cached Queries chapter of InterSystems SQL Optimization Guide. Queries Invoking User-defined Functions InterSystems SQL allows you to invoke class methods within SQL queries. This provides a powerful mechanism for extending the syntax of SQL. To create a user-defined function, define a class method within a persistent InterSystems IRIS class. The method must have a literal (non-object) return value. 
This has to be a class method because there will not be an object instance within an SQL query on which to invoke an instance method. It also has to be defined as being an SQL stored procedure. For example, we can define a Cube() method within the class MyApp.Person: Class MyApp.Person Extends %Persistent [DdlAllowed] { /// Find the Cube of a number ClassMethod Cube(val As %Integer) As %Integer [SqlProc] { RETURN val * val * val } } You can create SQL functions with the CREATE FUNCTION, CREATE METHOD or CREATE PROCEDURE statements. To call an SQL function, specify the name of the SQL procedure. A SQL function may be invoked in SQL code anywhere where a scalar expression may be specified. The function name may be qualified with its schema name, or unqualified. Unqualified function names take either a user-supplied schema search path or the default schema name. A function name may be a delimited identifier. An SQL function must have a parameter list, enclosed in parentheses. The parameter list may be empty, but the parentheses are mandatory. All specified parameters act as input parameters. Output parameters are not supported. An SQL function must return a value. For example, the following SQL query invokes a user-defined SQL function as a method, just as if it was a built-in SQL function: SELECT %ID, Age, MyApp.Person_Cube(Age) FROM MyApp.Person For each value of Age, this query will invoke the Cube() method and place its return value within the results. SQL functions may be nested. If the specified function is not found, InterSystems IRIS issues an SQLCODE -359 error. If the specified function name is ambiguous, InterSystems IRIS issues an SQLCODE -358 error. Querying Serial Object Properties A serial object property that is projected as a child table to SQL from a class using default storage (%Storage.Persistent) is also projected as a single column in the table projected by the class. The value of this column is the serialized value of the serial object properties. This single column property is projected as an SQL %List field. For example, the column Home in Sample.Person is defined as Property Home As Sample.Address;. It is projected to Class Sample.Address Extends (%SerialObject), which contains the properties Street, City, State, and PostalCode. See Embedded Object (%SerialObject) in the “Defining Tables” chapter for details on defining a serial object. The following example returns values from individual serial object columns: SELECT TOP 4 Name,Home_Street,Home_City,Home_State,Home_PostalCode FROM Sample.Person The following example returns the values for all of the serial object columns (in order) as a single %List format string, with the value for each column as an element of the %List: SELECT TOP 4 Name,$LISTTOSTRING(Home,'^') FROM Sample.Person By default, this Home column is hidden and is not projected as a column of Sample.Person. Querying Collections Collections may be referenced from the SQL WHERE clause, as follows: WHERE FOR SOME %ELEMENT(collectionRef) [AS label] (predicate) The FOR SOME %ELEMENT clause can be used for list collections and arrays that specify STORAGEDEFAULT="list". The predicate may contain one reference to the pseudo-columns %KEY, %VALUE, or both. A few examples should help to clarify how the FOR SOME %ELEMENT clause may be used. The following returns the name and the list of FavoriteColors for each person whose FavoriteColors include 'Red'. 
SELECT Name,FavoriteColors FROM Sample.Person
WHERE FOR SOME %ELEMENT(FavoriteColors) (%Value = 'Red')

Any SQL predicate may appear after the %Value (or %Key), so for example the following is also legal syntax:

SELECT Name,FavoriteColors FROM Sample.Person
WHERE FOR SOME %ELEMENT(Sample.Person.FavoriteColors) (%Value IN ('Red', 'Blue', 'Green'))

A list collection is considered a special case of an array collection that has sequential numeric keys 1, 2, and so on. Array collections may have arbitrary non-null keys:

FOR SOME %ELEMENT(children) (%Key = 'betty' AND %Value > 5)

In addition to the built-in list and array collection types, generalized collections may be created by providing a BuildValueArray() class method for any property. The BuildValueArray() class method transforms the value of a property into a local array, where each subscript of the array is a %KEY and the value is the corresponding %VALUE.

In addition to simple selections on the %KEY or %VALUE, it is also possible to logically connect two collections, as in the following example:

FOR SOME %ELEMENT(flavors) AS f
(f.%VALUE IN ('Chocolate', 'Vanilla') AND
FOR SOME %ELEMENT(toppings) AS t
(t.%VALUE = 'Butterscotch' AND f.%KEY = t.%KEY))

This example has two collections, flavors and toppings, that are positionally related through their key. The query qualifies a row that has chocolate or vanilla specified as an element of flavors, and that also has butterscotch listed as the corresponding topping, where the correspondence is established through the %KEY.

You can change this default system-wide using the CollectionProjection option of the $SYSTEM.SQL.Util.SetOption() method. SET status=$SYSTEM.SQL.Util.SetOption("CollectionProjection",1,.oldval) to project a collection as a column if the collection is projected as a child table; the default is 0. Changes made to this system-wide setting take effect for each class when that class is compiled or recompiled. You can use $SYSTEM.SQL.Util.GetOption("CollectionProjection") to return the current setting.

For information on indexing a collection, refer to Indexing Collections in the “Defining and Building Indices” chapter of the InterSystems SQL Optimization Guide.

Usage Notes and Restrictions

FOR SOME %ELEMENT may only appear in the WHERE clause. %KEY and/or %VALUE may only appear in a FOR predicate. Any particular %KEY or %VALUE may be referenced only once. %KEY and %VALUE may not appear in an outer join. %KEY and %VALUE may not appear in a value expression (only in a predicate).

Queries Invoking Free-text Search

InterSystems IRIS supports what is called “free-text search,” which includes support for: wildcards, stemming, multiple-word searches (also called n-grams), automatic classification, and dictionary management. This feature enables SQL to support full text indexing, and also enables SQL to index and reference individual elements of a collection without projecting the collection property as a child table. While the underlying mechanisms that support collection indexing and full text indexing are closely related, text retrieval has many special properties, and therefore special classes and SQL features have been provided for text retrieval. For further details refer to Using InterSystems SQL Search.

Pseudo-Field Variables

InterSystems SQL queries support the following pseudo-field values: %ID — returns the RowID field value, regardless of the actual name of the RowID field.
%TABLENAME — returns the qualified name of an existing table that is specified in the FROM clause. The qualified table name is returned in the letter case used when defining the table, not the letter case specified in the FROM clause. If the FROM clause specifies an unqualified table name, %TABLENAME returns the qualified table name (schema.table), with the schema name supplied from either a user-supplied schema search path or the system-wide default schema name. For example, if the FROM clause specified mytable, the %TABLENAME variable might return SQLUser.MyTable.

%CLASSNAME — returns the qualified class name (package.class) corresponding to an existing table specified in the FROM clause. For example, if the FROM clause specified SQLUser.mytable, the %CLASSNAME variable might return User.MyTable.

Note: The %CLASSNAME pseudo-field value should not be confused with the %ClassName() instance method. They return different values.

Pseudo-field variables can only be returned for a table that contains data. If multiple tables are specified in the FROM clause you must use table aliases, as shown in the following Embedded SQL example:

&sql(SELECT P.Name,P.%ID,P.%TABLENAME,E.%TABLENAME
INTO :name,:rid,:ptname,:etname
FROM Sample.Person AS P,Sample.Employee AS E)
IF SQLCODE<0 {WRITE "SQLCODE error ",SQLCODE," ",%msg QUIT}
ELSEIF SQLCODE=100 {WRITE "Query returns no results" QUIT}
WRITE ptname,"Person table Name is: ",name,!
WRITE ptname,"Person table RowId is: ",rid,!
WRITE "P alias TableName is: ",ptname,!
WRITE "E alias TableName is: ",etname,!

The %TABLENAME and %CLASSNAME columns are assigned the default column name Literal_n, where n is the select-item position of the pseudo-field variable in the SELECT statement.

Terminating a Running Query

If a query is running for an excessive amount of time, you may wish to interrupt (terminate) the query execution. A query running from the Management Portal SQL interface can be terminated using the Cancel button. A query of any type can be terminated using the ObjectScript SQLInterrupt() method, as described below. You can terminate a query that has been invoked via any of the supported interfaces, including Dynamic SQL, Embedded SQL, Cached Query, SQL Server/xDBC, and CSP.

Interrupting an executing query requires the %Admin_Operate:USE privilege. Use the following method:

$$SQLInterrupt^%apiSQL(pid,IntReason,timeout,interface)

The SQLInterrupt() method returns a %Status value: success returns a status of 1; failure returns an object expression that begins with 0, followed by encoded error information. SQLInterrupt() for an Embedded SQL query running on process 12345 is shown in the following Terminal example:

USER>SET stat=$$SQLInterrupt^%apiSQL(12345,,,0)

The interrupted query generates an SQLCODE -456 with the %msg "SQL query execution interrupted by user".

Query Metadata

You can use Dynamic SQL to return metadata about the query, such as the number of columns specified in the query, the name (or alias) of a column specified in the query, and the data type of a column specified in the query.
The following ObjectScript Dynamic SQL example returns the column name and an integer code for the column's ODBC data type for all of the columns in Sample.Person:

SET myquery="SELECT * FROM Sample.Person"
SET rset = ##class(%SQL.Statement).%New()
SET qStatus = rset.%Prepare(myquery)
IF qStatus'=1 {WRITE "%Prepare failed:" DO $System.Status.DisplayError(qStatus) QUIT}
SET x=rset.%Metadata.columns.Count()
WHILE x>0 {
SET column=rset.%Metadata.columns.GetAt(x)
WRITE !,x," ",column.colName," ",column.ODBCType
SET x=x-1 }
WRITE !,"end of columns"

In this example, columns are listed in reverse column order. Note that the FavoriteColors column, which contains list structured data, returns a data type of 12 (VARCHAR) because ODBC represents an InterSystems IRIS list data type value as a string of comma-separated values. For further details, refer to the Dynamic SQL chapter of this manual, and the %SQL.Statement class in the InterSystems Class Reference.

Fast Select

InterSystems IRIS supports Fast Select, an internal optimization for rapid query execution over ODBC and JDBC. This optimization maps InterSystems globals to Java objects. It passes the contents of a global node (a data record) as a Java object. Upon receiving these Java objects it extracts the desired column values from them and generates a result set.

InterSystems IRIS automatically applies this optimization wherever possible. This optimization is automatic and invisible to the user; when a query is Prepared, InterSystems IRIS flags the query either for execution using the Fast Select mechanism or for execution using the standard query mechanism. Fast Select is applied to %PARALLEL queries and queries against a sharded table if the query only references fields, constants, or expressions that reference fields and/or constants.

Fast Select must be supported on both the server and the client. To enable or disable Fast Select in the client, use Properties in the definition of the class instance as follows:

Properties p = new Properties();
p.setProperty("FeatureOption","3"); // 1 is fast Select, 2 is fast Insert, 3 is both

Because of the difference in performance, it is important for the user to know what circumstances restrict the application of Fast Select.

Table Restrictions: the following types of tables cannot be queried using Fast Select:
- A table whose master/data map has multiple nodes
- A table that has multiple fields mapped to the same data location (this is only possible using %Storage.SQL)

Field Restrictions: if the following columns are included in the select-item list, the query cannot be executed using Fast Select. These types of columns can be defined in the table, but the query cannot select them:
- A stream field (data type %Stream.GlobalCharacter or %Stream.GlobalBinary)
- A field that is computed when queried (COMPUTECODE Calculated or Transient)
- A field that is a list collection (has LogicalToOdbc conversion)
- A field that performs LogicalToOdbc conversion and is not of data type %Date, %Time, or %PosixTime
- A field that has overridden LogicalToOdbc conversion code
- A field that performs LogicalToStorage conversion
- A field whose map data entry uses retrieval code
- A field whose map data entry has a delimiter (not %List storage)
- A field that is mapped to a piece of nested storage

Index Restriction: Fast Select is not used if the select-item list consists of only the %ID field and/or fields that are all mapped to the same index.
If a query is executed using Fast Select, this fact is flagged in the SELECT audit event in the Audit Database, provided that %System/%SQL/XDBCStatement is enabled. For further details on system-wide SQL event auditing, refer to Auditing Dynamic SQL.

Queries and Enterprise Cache Protocol (ECP)

InterSystems IRIS implementations that use Enterprise Cache Protocol (ECP), such as distributed cache clusters, can synchronize query results. ECP is a distributed data caching architecture that manages the distribution of data and locks among a heterogeneous network of server systems. If ECP synchronization is active, each time a SELECT statement is executed InterSystems IRIS forces all pending ECP requests to the data server. On completion, this guarantees that the client cache is in sync. This synchronization occurs in the Open logic of the query (in the OPEN cursor execution if this is a cursor query).

To activate ECP synchronization system-wide, use the $SYSTEM.SQL.Util.SetOption() method, as follows: SET status=$SYSTEM.SQL.Util.SetOption("ECPSync",1,.oldval); the default is 0. To determine the current setting, call $SYSTEM.SQL.CurrentSettings().

For further details, refer to the “Horizontally Scaling Systems for User Volume with InterSystems Distributed Caching” chapter of the Scalability Guide.
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=GSQL_QUERIES
2021-06-12T20:00:42
CC-MAIN-2021-25
1623487586390.4
[]
docs.intersystems.com
publishing-api: Writing data migrations

Are you writing a migration to change Publishing API data? We have a lot of migrations in the Publishing API that change data; our long-term plan is to reduce these so that such changes can be made via the API itself and not need migrations. Until we are at that point, please use the following guidelines when creating migrations. A hedged example migration is sketched after these guidelines.

Run the migration on your machine first

At the very least this will ensure that the code runs and that your data looks correct, but it also allows you to get an idea of how long the migration will take to finish. You should then include that information in the PR to help whoever is doing the deployment. We also encourage you to run and check the results of the migration on integration, since the data there will be closer to production.

Always include the schema.rb file

Even if the change is just the timestamp, it's important to include this file in your commit; otherwise, when trying to use the app in testing, an error saying there are pending migrations will appear.

Target only content related to the application you are affecting

Mistakes in migrations can and do happen. By at least targeting the application you are affecting (with .where(publishing_app: )) you can avoid unnecessary fallout.

Don't assume the records you are altering exist

Migrations get run in test environments, on old databases, or when a developer first clones the project. You should always make sure that your migrations run fine without the data you are expecting being present.

Do you need to represent this data downstream?

Representing the data downstream won't work on local machines, so you will need to have a check in the migration making sure it is running in the right environment (Rails.env.production?). Alternatively, and ideally, you can use the represent_downstream: class of rake tasks in Jenkins to achieve the same result. If you do decide to represent downstream in the migration itself, you must disable transactions for this migration (by running the disable_ddl_transaction! method), as otherwise you will be representing downstream before the data is committed to the database.

If you are disabling the transaction, ensure the migration is idempotent

If you have disable_ddl_transaction! in your migration for some reason, you should make sure that the migration will do the right thing if it gets run again. This could happen if it fails the first time; since it will not be running in a transaction, the data won't be rolled back.
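The sketch below is a minimal, hypothetical migration that illustrates the guidelines above (scoping by publishing_app, tolerating missing records, and keeping a disable_ddl_transaction! migration idempotent). The class name, the "example-app" value, and the title-trimming fix are all invented for illustration; they are not taken from the Publishing API code base.

# Minimal sketch only -- the app name and the data fix are hypothetical.
class ExampleDataFix < ActiveRecord::Migration[6.1]
  # Only needed if you represent downstream inside the migration;
  # without a transaction, the migration must be safe to re-run.
  disable_ddl_transaction!

  def up
    # Target only the affected application to limit fallout.
    editions = Edition.where(publishing_app: "example-app")

    # Don't assume the records exist: this loop simply does nothing on an
    # empty database (tests, fresh clones, old data).
    editions.find_each do |edition|
      # Idempotent change: skip records that are already correct, so a
      # re-run after a partial failure does no harm.
      next if edition.title == edition.title.strip

      edition.update!(title: edition.title.strip)
    end
  end

  def down
    # Data migrations of this kind are usually not reversible.
  end
end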
https://docs.publishing.service.gov.uk/apps/publishing-api/data-migration.html
2021-06-12T19:34:39
CC-MAIN-2021-25
1623487586390.4
[]
docs.publishing.service.gov.uk
This documentation does not apply to the most recent version of Splunk. Click here for the latest version. app.conf The following are the spec and example files for app.conf. app.conf.spec Version 8.0.3 This file maintains the state of a given app in the Splunk platform. It can also be used to customize certain aspects of an app. An app.conf file can exist within each app on the Splunk platform. You must restart the Splunk platform to reload manual changes to app.conf. To learn more about configuration files (including precedence) please see the documentation located at Settings for how an app appears in the Launcher in the Splunk platform and online on Splunkbase. [id] group = <group-name> name = <app-name> version = <version-number> [launcher] global setting remote_tab = <boolean> * Set whether the Launcher interface connects to apps.splunk.com (Splunkbase). * This setting only applies to the Launcher app. Do not set it in any other app. * Default: true per-application settings version = <string> * Version numbers are a number followed by a sequence of dots and numbers. * The best practice for version numbers for releases is to use three digits formatted as Major.Minor.Revision. * Pre-release versions can append a single-word suffix like "beta" or "preview". * Use lower case and no spaces when, email address). Your app can include an icon which appears next to your app in Launcher and on Splunkbase. You can also include a screenshot, which shows up on Splunkbase when the user views information about your app before downloading it. If you include an icon file, the file name must end with "Icon" before the file extension and the "I" must be capitalized. For example, "mynewIcon.png". Screenshots are optional to include. There is no setting in app.conf for screenshot or icon images. Splunk Web places files you upload with your app into the <app_directory>/appserver/static directory. These images do not appear in your app. Move or place icon images in the <app_directory>/static directory. Move or place screenshot images in the <app_directory>/default/static directory. Launcher and Splunkbase automatically detect the images in those locations. = <string> * Omit this setting for apps that are for internal use only and = <boolean> * Determines whether Splunk Enterprise checks Splunkbase for updates to this app. * Default: true show_upgrade_notification = <boolean> * Determines whether Splunk Enterprise shows an upgrade notification in Splunk Web for this app. * Default: false Set install settings for this app [install] state = disabled | enabled * Set whether app is disabled or enabled in the Splunk platform. * If an app is disabled, its configurations are ignored. * Default: enabled state_change_requires_restart = <boolean> * Set whether changing an app's state ALWAYS requires a restart of Splunk Enterprise. * State changes include enabling or disabling an app. * When set to true, changing an app's state always requires a restart. * When set to false, modifying an app's state might or might not require a restart depending on what the app contains. This setting cannot be used to avoid all restart requirements. * Default: false is_configured = <boolean> * Stores indication of whether the application's custom setup has been performed * Default: = <boolean> * Set whether an app allows itself to be disabled. * Default: true install_source_checksum = <string> * Records a checksum of the tarball from which a given app was installed. * Splunk Enterprise automatically populates this value upon install. 
* Do not set this value explicitly within your app! install_source_local_checksum = <string> * Records a checksum of the tarball from which a given app's local configuration * was installed. Splunk Enterprise automatically populates this value upon * install. Do not set this value explicitly within your app! python.version = {default|python|python2|python3} * When 'installit.py' exists, selects which Python version to use. * Set to either "default" or "python" to use the system-wide default Python version. * Optional. * Default: Not set; uses the system-wide Python version. [shclustering] deployer_lookups_push_mode = preserve_lookups | always_preserve | always_overwrite * Determines the deployer_lookups_push_mode for the 'splunk apply shcluster-bundle' command. * If set to "preserve_lookups", the 'splunk apply shcluster-bundle' command honors the '-preserve-lookups' option as it appears on the command line. If '-preserve-lookups' is flagged as "true", then lookup tables for this app are preserved. Otherwise, lookup tables are overwritten. * If set to "always_preserve", the 'splunk apply shcluster-bundle' command ignores the '-preserve-lookups' option as it appears on the command line and lookup tables for this app are always preserved. * If set to "always_overwrite", the 'splunk apply shcluster-bundle' command ignores the '-preserve-lookups' option as it appears on the command line and lookup tables for this app are always overwritten. * Default: preserve_lookups deployer_push_mode = full | merge_to_default | local_only | default_only * How the deployer pushes the configuration bundle to search head cluster members. * If set to "full": Bundles all of the app's contents located in default/, local/, users/<app>/, and other app subdirs. It then pushes the bundle to the members. When applying the bundle on a member, the non-local and non-user configurations from the deployer's app folder are copied to the member's app folder, overwriting existing contents. Local and user configurations are merged with the corresponding folders on the member, such that member configuration takes precedence. This option should not be used for built-in apps, as overwriting the member's built-in apps can result in adverse behavior. * If set to "merge_to_default": Merges the local and default folders into the default folder and pushes the merged app to the members. When applying the bundle on a member, the default configuration on the member is overwritten. User configurations are copied and merged with the user folder on the member, such that the existing configuration on the member takes precedence. In versions 7.2 and prior, this was the only behavior. * If set to "local_only": This option bundles the app's local directory (and its metadata) and pushes it to the cluster. When applying the bundle to a member, the local configuration from the deployer is merged with the local configuration on the member, such that the member's existing configuration takes precedence. Use this option to push the local configuration of built-in apps, such as search. If used to push an app that relies on non-local content (such as default/ or bin/), these contents must already exist on the member. * If set to "default_only": Bundles all of the configuration files except for local and users/<app>/. When applying the bundle on a member, the contents in the member's default folder are overwritten. * Default: whether this app appears in the global app dropdown. is_manageable = <boolean> * Support for this setting has been removed. 
It no longer has any effect. label = <string> * Defines the name of the app shown in Splunk Web includes [<app-name>:<app-version>] * If specified, app-specific documentation link includes [<docs_section_override>] * This setting:// get interpreted as Quickdraw location strings and translated to internal documentation references. setup_view = <string> * Optional] verify_script = <string> * Optional setting. * Command line to invoke to verify credentials used for this app. * For scripts, the command line must python.version = {default|python|python2|python3} * This property is used only when verify_script begins with the canonical path to the Python interpreter, in other words, $SPLUNK_HOME/bin/python. If any other path is used, this property is ignored. * For Python scripts only, selects which Python version to use. * Set to either "default" or "python" to use the system-wide default Python version. * Optional. * Default: Not set; uses the system-wide Python version. [credential:<realm>:<username>] password = <string> * Password that corresponds to the given username for the given realm. * Realm is optional. * The password can be in clear text, but when saved from splunkd the password is always encrypted. diag app extensions, 6.4+ only [diag] extension_script = <filename> * Setting this variable declares that this app puts additional information into the troubleshooting & support oriented output of the 'splunk diag' command. * Must be a python script. * Must be a simple filename, with no directory separators. * The script must exist in the 'bin' subdirectory in the app. * Full discussion of the interface is located on the Splunk developer portal. See * Default: not set (no app-specific data collection will occur). data_limit = <positive integer>[b|kb|MB|GB] * Defines a soft ceiling for the amount of uncompressed data that can be added to the diag by the app extension. * Large diags damage the main functionality of the tool by creating data blobs too large to copy around or upload. * Use this setting to ensure that your extension script does not accidentally produce far too much data. * After data produced by this app extension reaches the limit, diag does not add any further files on behalf of the extension. * After diag has finished adding a file which goes over this limit, all further files are not be added. * Must be a positive number followed by a size suffix. * Valid suffixes: b: bytes, kb: kilobytes, mb: megabytes, gb: gigabytes * Suffixes are case insensitive. * Default: 100MB Other diag settings default_gather_lookups = <filename> [, <filename> ...] * Set this variable to declare that the app contains lookups that diag must always gather by default. * Essentially, if there are lookups which are useful for troubleshooting an app, and will never contain sensitive (user) data, add the lookups to this list so that they appear in generated diags for use when troubleshooting the app from customer diags. * Any files in lookup directories that are not listed here are not gathered by default. You can override this behavior with the diag flag --include-lookups. * This setting is new in Splunk Enterprise/Light version 6.5. Older versions gather all lookups by default. * This does not override the size-ceiling on files in etc. Large lookups are still excluded unless the etc-filesize-limit is raised or disabled. * This only controls = ... 
* Default: not set

app.conf.example

# Version 8.0

Last modified on 27 March, 2020

This documentation applies to the following versions of Splunk® Enterprise: 8.0.3
https://docs.splunk.com/Documentation/Splunk/8.0.3/admin/Appconf
2021-06-12T20:06:22
CC-MAIN-2021-25
1623487586390.4
[array(['/skins/OxfordComma/images/acrobat-logo.png', 'Acrobat logo'], dtype=object) ]
docs.splunk.com
float The value that has been set by the user. A horizontal slider the user can drag to change a value between a min and a max. // Draws a horizontal slider control that goes from 0 to 10. using UnityEngine; using System.Collections; public class ExampleClass : MonoBehaviour { public float hSliderValue = 0.0F; void OnGUI() { hSliderValue = GUI.HorizontalSlider(new Rect(25, 25, 100, 30), hSliderValue, 0.0F, 10.0F); } }
https://docs.unity3d.com/es/2019.1/ScriptReference/GUI.HorizontalSlider.html
2021-06-12T21:43:13
CC-MAIN-2021-25
1623487586390.4
[]
docs.unity3d.com
string The object's data in JSON format. Generate a JSON representation of an object. This is similar to JsonUtility.ToJson, but it supports any engine object. The fields that the output will contain are the same as are accessible via the SerializedObject API, or as found in the YAML-serialized form of the object.
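As a minimal usage sketch (the class name and the context-menu hook are illustrative, not from the page), the editor-only script below serializes the component it is attached to and logs the pretty-printed JSON:

// Editor-only sketch: serialize an engine object with EditorJsonUtility.
using UnityEngine;
#if UNITY_EDITOR
using UnityEditor;
#endif

public class JsonDumpExample : MonoBehaviour
{
#if UNITY_EDITOR
    [ContextMenu("Dump JSON")]
    void DumpJson()
    {
        // Second argument requests pretty-printed output.
        string json = EditorJsonUtility.ToJson(this, true);
        Debug.Log(json);
    }
#endif
}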
https://docs.unity3d.com/ru/2020.2/ScriptReference/EditorJsonUtility.ToJson.html
2021-06-12T22:07:27
CC-MAIN-2021-25
1623487586390.4
[]
docs.unity3d.com
Storage Structure¶ Partition Schemes¶ Source: diskutil(8) Filesystems¶ Source: diskutil(8) APFS¶ APFS is the new FileSystem that was announced at WWDC ‘16. It will be available on all Mac and iOS devices in 2017. It features awesome new and improved features such as: - Clones - Snapshots - Space Sharing - Encryption - Crash Protection - Sparse Files - Fast Directory Sizing - Atomic Safe-Save Rich Trouton did a very interesting talk at MacAdUk. Grab it here. Source: APFS Guide CoreStorage¶ Source: diskutil(8)
https://macadminsdoc.readthedocs.io/en/master/General/Files_and_Storage/Storage_Structure.html
2021-06-12T20:38:34
CC-MAIN-2021-25
1623487586390.4
[]
macadminsdoc.readthedocs.io
These configuration variables are in the [desktop] section of the /etc/hue/conf/hue.ini configuration file. Specify the Hue HTTP Address. Use the following options to change the IP address and port of the existing Web Server for Hue (by default, CherryPy). # Webserver listens on this address and port http_host=0.0.0.0 http_port=8000 The default setting is port 8000 on all configured IP addresses. Specify the Secret Key. To ensure that your session cookies are secure, enter a series of random characters (30 to 60 characters is recommended) as shown in the example below: secret_key=jFE93j;2[290-eiw.KEiwN2s3['d;/.q[eIW^y#e=+Iei*@Mn<qW5o Configure authentication. By default, the first user who logs in to Hue can choose any username and password and gets the administrator privileges. This user can create other user and administrator accounts. User information is stored in the Django database in the Django backend. (Optional) Configure Hue for SSL. Configure Hue to use your private key. Add the following to the /etc/hue/conf/hue.ini file: ssl_certificate=$PATH_To_CERTIFICATE ssl_private_key=$PATH_To_KEY ssl_cipher_list="DEFAULT:!aNULL:!eNULL:!LOW:!EXPORT:!SSLv2" (default)
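The secret_key shown above is just a long random string. One way to generate a value within the recommended 30 to 60 character range is the short Python snippet below; the alphabet and length are arbitrary illustrative choices, not Hue requirements:

# Generate a random secret_key candidate for hue.ini.
import secrets
import string

# 50 characters drawn from letters, digits, and some punctuation.
alphabet = string.ascii_letters + string.digits + "!@#$%^&*(-_=+)"
print("".join(secrets.choice(alphabet) for _ in range(50)))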
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.4.3/bk_installing_manually_book/content/configure_web_server.html
2021-06-12T20:57:06
CC-MAIN-2021-25
1623487586390.4
[]
docs.cloudera.com
Translator documentation

Translator is a cloud-based machine translation service you can use to translate text with a simple REST API call. The service uses modern neural machine translation technology and also offers statistical machine translation technology. Custom Translator is an extension of Translator, which allows you to build neural translation systems. The customized translation system can be used to translate text with Translator or Microsoft Speech Services.

https://docs.microsoft.com/en-us/azure/cognitive-services/translator/
2021-06-12T20:46:24
CC-MAIN-2021-25
1623487586390.4
[]
docs.microsoft.com
There aren’t any open sourced analytics adapters for Prebid Server, but there is an internal interface that host companies can use to integrate their own modules. Below is an outline of how it’s done for both versions of the server. Analytics adapters are subject to a number of specific technical rules. Please become familiar with the module rules that apply globally and to analytics adapters in particular. Analytics modules are enabled through Viper configuration. You’ll need to define any properties in config/config.go which are required for your module. Implement your module Your new module belongs in the analytics/{moduleName} package. It should implement the PBSAnalyticsModule interface from analytics/core.go Connect your Config to the Implementation The NewPBSAnalytics() function inside analytics/config/config.go instantiates Analytics modules using the app config. You’ll need to update this to recognize your new module. A simple filesystem analytics module is provided as an example. This module will log dummy messages to a file. It can be configured with: analytics: file: filename: "path/to/file.log Prebid Server will then write sample log messages to the file you provided. Define config params Analytics modules are enabled through the Configuration. Implement your module Your new module org.prebid.server.analytics.{module}AnalyticsReporter needs to implement the org.prebid.server.analytics.AnalyticsReporter interface. Add your implementation to Spring Context In order to make Prebid Server aware of the new analytics module it needs to be added to the Spring Context in org.prebid.server.spring.config.AnalyticsConfiguration as a bean. The log module is provided as an example. This module will write dummy messages to a log. It can be configured with: analytics: log: enabled: true Prebid Server will then write sample log messages to the log.
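As a rough illustration of the "Implement your module" step for the Go server, the sketch below shows the general shape of a module under analytics/{moduleName}. The import path, the LogAuctionObject method name, and the analytics.AuctionObject type are assumptions based on common versions of the code base; check the actual PBSAnalyticsModule interface in analytics/core.go for your checkout, which also declares additional Log* methods a real module must implement.

// Package mymodule is a sketch of a custom analytics module.
// NOTE: method and type names below are assumptions; verify them against
// analytics/core.go in your version of Prebid Server.
package mymodule

import (
	"log"

	"github.com/prebid/prebid-server/analytics"
)

// Module holds whatever configuration your adapter needs (endpoint, buffering, etc.).
type Module struct {
	Endpoint string
}

// NewModule would be wired up from NewPBSAnalytics() in analytics/config/config.go
// when your Viper config section is present.
func NewModule(endpoint string) *Module {
	return &Module{Endpoint: endpoint}
}

// LogAuctionObject receives one auction's data; a real module would batch the
// records and ship them to Endpoint instead of writing them to the log.
func (m *Module) LogAuctionObject(ao *analytics.AuctionObject) {
	log.Printf("received auction object: %+v", ao)
}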
https://docs.prebid.org/prebid-server/developers/pbs-build-an-analytics-adapter.html
2021-06-12T21:08:39
CC-MAIN-2021-25
1623487586390.4
[]
docs.prebid.org
qifqif: enrich your .qif files with categories¶

Welcome to the documentation for qifqif, the CLI tool that alleviates the chore of pairing your financial transactions with categories/accounts. Be sure to read the Getting started guide, which walks you through the install and covers basic operation. Qifqif is dead simple, but check the CLI usage and Advanced tips pages to learn some more advanced usage. And if you still need help, don’t hesitate to send me a mail (<kraymer+qifqif AT gmail DOT com>) or file a bug in the issue tracker.
https://qifqif.readthedocs.io/en/latest/
2021-06-12T21:32:10
CC-MAIN-2021-25
1623487586390.4
[]
qifqif.readthedocs.io
Getting Started¶ This section describes how to install the CernVM-FS client. The CernVM-FS client is supported on x86, x86_64, and ARM architectures running Linux and macOS \(\geq 10.14\) as well as on Windows Services for Linux (WSL2). There is experimental support for Power and RISC-V architectures. Docker Container¶ The CernVM-FS service container can expose the /cvmfs directory tree to the host. Import the container with docker pull cvmfs/service or with curl | docker load Run the container as a system service with docker run -d --rm \ -e CVMFS_CLIENT_PROFILE=single \ -e CVMFS_REPOSITORIES=sft.cern.ch,... \ --cap-add SYS_ADMIN \ --device /dev/fuse \ --volume /cvmfs:/cvmfs:shared \ cvmfs/service:2.8.0-1 Use docker stop to unmount the /cvmfs tree. Note that if you run multiple nodes (a cluster), you should use -e CVMFS_HTTP_PROXY to set a proper site proxy as described further down. Mac OS X¶ On Mac OS X, CernVM-FS is based on macFUSE. Note that as of macOS 11 Big Sur, kernel extensions need to be enabled to install macFUSE. Verify that fuse is available with kextstat | grep -i fuse Download the CernVM-FS client package in the terminal in order to avoid signature warnings curl -o ~/Downloads/cvmfs-2.8.0.pkg Install the CernVM-FS package by opening the .pkg file and reboot. Future releases will provide a signed and notarized package. Windows / WSL2¶ Follow the Windows instructions to install the Windows Subsytem for Linux (WSL2). Install any of the Linux distributions and follow the instructions for the distribution in this guide. Whenever you open the Linux distribution, run sudo cvmfs_config wsl2_start to start the CernVM-FS service. Due to the lack of autofs on macOS, mount the individual repositories manually like sudo mkdir -p /cvmfs/cvmfs-config.cern.ch sudo mount -t cvmfs cvmfs-config.cern.ch /cvmfs/cvmfs-config.cern.ch For optimal configuration settings, mount the config repository before any other repositories. For an individual workstation or laptop, set CVMFS_CLIENT_PROFILE=single If you setup a cluster of cvmfs nodes,. If there are no HTTP proxies yet at your site, see Setting up a Local Squid Proxy for instructions on how to set them up.
https://cvmfs.readthedocs.io/en/stable/cpt-quickstart.html
2021-06-12T20:40:59
CC-MAIN-2021-25
1623487586390.4
[]
cvmfs.readthedocs.io
Pool-Aware Scheduler Support¶

Problem Description¶

After the scheduler selects a backend on which to place a new share, the backend may have to make a second decision about where to place the share within that backend. This logic is driver-specific and hard for admins to deal with. The capabilities that the backend reports back to the scheduler may not apply universally. A single backend may support both SATA and SSD-based storage, but perhaps not at the same time. Backends need a way to express exactly what they support and how much space is consumed out of each type of storage. Therefore, it is important to extend manila so that it is aware of storage pools within each backend and can use them as the finest granularity for resource placement.

Proposed change¶

A pool-aware scheduler will address the need for supporting multiple pools from one storage backend.

Terminology¶

- Pool: A logical concept to describe a set of storage resources that can be used to serve core manila requests, e.g. shares/snapshots. This notion is almost identical to a manila Share Backend, for it has similar attributes (capacity, capability). The difference is that a Pool may not exist on its own; it must reside in a Share Backend. One Share Backend can have multiple Pools, but Pools do not have sub-Pools (meaning even if they have them, sub-Pools do not get exposed to manila, yet). Each Pool has a unique name in the Share Backend namespace, which means a Share Backend cannot have two pools using the same name.

Design¶

The workflow in this change is simple:
- Share Backends report to the scheduler how many pools they have, what those pools look like, and what they are capable of;
- When a request comes in, the scheduler picks the pool that best fits the need and passes the request to the backend where the target pool resides;
- The share driver gets the message and lets the target pool serve the request as the scheduler instructed.

To support placing resources (share/snapshot) onto a pool, these changes will be made to specific components of manila:
- Share Backends reporting capacity/capabilities at pool level;
- Scheduler filtering/weighing based on pool capacity/capability and placing shares/snapshots onto a pool of a certain backend;
- Recording which backend and pool a resource is located on.

Data model impact¶

No DB schema change is involved; however, the host field of the Shares table will now include pool information, and no DB migration is needed. Original host field of Shares: HostX@BackendY. With this change: HostX@BackendY#pool0. (A small illustrative snippet of this format appears at the end of this section.)

REST API impact¶

Notifications impact¶

The Host attribute of shares now includes pool information in it; a consumer of notifications can be extended to extract pool information if needed.

Other end user impact¶

Performance Impact¶

Developer impact¶
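To make the host-field change in the Data model impact section concrete, here is a small illustrative Python helper (not manila code) that splits the old and new formats into their parts:

# Illustrative only -- shows the HostX@BackendY vs HostX@BackendY#pool0 formats.
def split_share_host(host):
    """Split 'host@backend#pool' into its parts; the pool may be absent."""
    host_backend, _, pool = host.partition("#")
    hostname, _, backend = host_backend.partition("@")
    return hostname, backend, pool or None

print(split_share_host("HostX@BackendY"))        # ('HostX', 'BackendY', None)
print(split_share_host("HostX@BackendY#pool0"))  # ('HostX', 'BackendY', 'pool0')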
https://docs.openstack.org/manila/latest/contributor/pool-aware-manila-scheduler.html
2021-06-12T20:56:27
CC-MAIN-2021-25
1623487586390.4
[]
docs.openstack.org
Crate memchr[−][src] This library provides heavily optimized routines for string search primitives. Overview This section gives a brief high level overview of what this crate offers. - The top-level module provides routines for searching for 1, 2 or 3 bytes in the forward or reverse direction. When searching for more than one byte, positions are considered a match if the byte at that position matches any of the bytes. - The memmemsub-module provides forward and reverse substring search routines. In all such cases, routines operate on &[u8] without regard to encoding. This is exactly what you want when searching either UTF-8 or arbitrary bytes. Example: using memchr This example shows how to use memchr to find the first occurrence of z in a haystack: use memchr::memchr; let haystack = b"foo bar baz quuz"; assert_eq!(Some(10), memchr(b'z', haystack)); Example: matching one of three possible bytes This examples shows how to use memrchr3 to find occurrences of a, b or c, starting at the end of the haystack. use memchr::memchr3_iter; let haystack = b"xyzaxyzbxyzc"; let mut it = memchr3_iter(b'a', b'b', b'c', haystack).rev(); assert_eq!(Some(11), it.next()); assert_eq!(Some(7), it.next()); assert_eq!(Some(3), it.next()); assert_eq!(None, it.next()); Example: iterating over substring matches This example shows how to use the memmem sub-module to find occurrences of a substring in a haystack. use memchr::memmem; let haystack = b"foo bar foo baz foo"; let mut it = memmem::find_iter(haystack, "foo"); assert_eq!(Some(0), it.next()); assert_eq!(Some(8), it.next()); assert_eq!(Some(16), it.next()); assert_eq!(None, it.next()); Example: repeating a search for the same needle It may be possible for the overhead of constructing a substring searcher to be measurable in some workloads. In cases where the same needle is used to search many haystacks, it is possible to do construction once and thus to avoid it for subsequent searches. This can be done with a memmem::Finder: use memchr::memmem; let finder = memmem::Finder::new("foo"); assert_eq!(Some(4), finder.find(b"baz foo quux")); assert_eq!(None, finder.find(b"quux baz bar")); Why use this crate? At first glance, the APIs provided by this crate might seem weird. Why provide a dedicated routine like memchr for something that could be implemented clearly and trivially in one line: fn memchr(needle: u8, haystack: &[u8]) -> Option<usize> { haystack.iter().position(|&b| b == needle) } Or similarly, why does this crate provide substring search routines when Rust’s core library already provides them? fn search(haystack: &str, needle: &str) -> Option<usize> { haystack.find(needle) } The primary reason for both of them to exist is performance. When it comes to performance, at a high level at least, there are two primary ways to look at it: - Throughput: For this, think about it as, “given some very large haystack and a byte that never occurs in that haystack, how long does it take to search through it and determine that it, in fact, does not occur?” - Latency: For this, think about it as, “given a tiny haystack—just a few bytes—how long does it take to determine if a byte is in it?” The memchr routine in this crate has slightly worse latency than the solution presented above, however, its throughput can easily be over an order of magnitude faster. This is a good general purpose trade off to make. You rarely lose, but often gain big. NOTE: The name memchr comes from the corresponding routine in libc. 
A key advantage of using this library is that its performance is not tied to its quality of implementation in the libc you happen to be using, which can vary greatly from platform to platform. But what about substring search? This one is a bit more complicated. The primary reason for its existence is still indeed performance, but it’s also useful because Rust’s core library doesn’t actually expose any substring search routine on arbitrary bytes. The only substring search routine that exists works exclusively on valid UTF-8. So if you have valid UTF-8, is there a reason to use this over the standard library substring search routine? Yes. This routine is faster on almost every metric, including latency. The natural question then, is why isn’t this implementation in the standard library, even if only for searching on UTF-8? The reason is that the implementation details for using SIMD in the standard library haven’t quite been worked out yet. NOTE: Currently, only x86_64 targets have highly accelerated implementations of substring search. For memchr, all targets have somewhat-accelerated implementations, while only x86_64 targets have highly accelerated implementations. This limitation is expected to be lifted once the standard library exposes a platform independent SIMD API. Crate features - std - When enabled (the default), this will permit this crate to use features specific to the standard library. Currently, the only thing used from the standard library is runtime SIMD CPU feature detection. This means that this feature must be enabled to get AVX accelerated routines. When stdis not enabled, this crate will still attempt to use SSE2 accelerated routines on x86_64. - libc - When enabled (not the default), this library will use your platform’s libc implementation of memchr(and memrchron Linux). This can be useful on non- x86_64targets where the fallback implementation in this crate is not as good as the one found in your libc. All other routines (e.g., memchr[23]and substring search) unconditionally use the implementation in this crate.
https://docs.rs/memchr/2.4.0/memchr/
2021-06-12T20:43:58
CC-MAIN-2021-25
1623487586390.4
[]
docs.rs
You are viewing documentation for Kubernetes version: v1.20 Kubernetes v1.20 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version. Set up an Extension API Server Setting up an extension API server to work with the aggregation layer allows the Kubernetes apiserver to be extended with additional APIs, which are not part of the core Kubernetes AP must configure the aggregation layer and enable the apiserver flags. Setup an extension api-server to work with the aggregation layer. - Make sure the APIService API is enabled (check --runtime-config). It should be on by default, unless it's been deliberately turned off in your cluster. - You may need to make an RBAC rule allowing you to add APIService objects, or get your cluster administrator to make one. (Since API extensions affect the entire cluster, it is not recommended to do testing/development/debug of an API extension in a live cluster.) - Create the Kubernetes namespace you want to run your extension api-service in. - Create/get a CA cert to be used to sign the server cert the extension api-server uses for HTTPS. - Create a server cert/key for the api-server to use for HTTPS. This cert should be signed by the above CA. It should also have a CN of the Kube DNS name. This is derived from the Kubernetes service and be of the form <service name>.<service name namespace>.svc - Create a Kubernetes secret with the server cert/key in your namespace. - Create a Kubernetes deployment for the extension api-server and make sure you are loading the secret as a volume. It should contain a reference to a working image of your extension api-server. The deployment should also be in your namespace. - Make sure that your extension-apiserver loads those certs from that volume and that they are used in the HTTPS handshake. - Create a Kubernetes service account in your namespace. - Create a Kubernetes cluster role for the operations you want to allow on your resources. - Create a Kubernetes cluster role binding from the service account in your namespace to the cluster role you created. - Create a Kubernetes cluster role binding from the service account in your namespace to the system:auth-delegatorcluster role to delegate auth decisions to the Kubernetes core API server. - Create a Kubernetes role binding from the service account in your namespace to the extension-apiserver-authentication-readerrole. This allows your extension api-server to access the extension-apiserver-authenticationconfigmap. - Create a Kubernetes apiservice. The CA cert above should be base64 encoded, stripped of new lines and used as the spec.caBundle in the apiservice. This should not be namespaced. If using the kube-aggregator API, only pass in the PEM encoded CA bundle because the base 64 encoding is done for you. - Use kubectl to get your resource. When run, kubectl should return "No resources found.". This message indicates that everything worked but you currently have no objects of that resource type created. What's next - Walk through the steps to configure the API aggregation layer and enable the apiserver flags. - For a high level overview, see Extending the Kubernetes API with the aggregation layer. - Learn how to Extend the Kubernetes API using Custom Resource Definitions.
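The "Create a Kubernetes apiservice" step in the list above can be illustrated with a manifest like the sketch below. The group, version, namespace, and service name are placeholders to replace with your own values, and caBundle stands for the base64-encoded CA certificate described in that step:

# Sketch of an APIService object; names and values are placeholders.
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1alpha1.myextension.example.com   # <version>.<group>
spec:
  group: myextension.example.com
  version: v1alpha1
  service:
    name: my-extension-apiserver           # Service fronting your api-server
    namespace: my-extension
    port: 443
  caBundle: <base64-encoded-CA-cert>
  groupPriorityMinimum: 1000
  versionPriority: 15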
https://v1-20.docs.kubernetes.io/docs/tasks/extend-kubernetes/setup-extension-api-server/
2021-06-12T19:50:11
CC-MAIN-2021-25
1623487586390.4
[]
v1-20.docs.kubernetes.io
Known Issues for AWS Glue Note the following known issues for AWS Glue. Preventing Cross-Job Data Access Consider the situation where you have two AWS Glue Spark jobs in a single AWS Account, each running in a separate AWS Glue Spark cluster. The jobs are using AWS Glue connections to access resources in the same virtual private cloud (VPC). In this situation, a job running in one cluster might be able to access the data from the job running in the other cluster. The following diagram illustrates an example of this situation. ![ AWS Glue job Job-1 in Cluster-1 and Job-2 in Cluster-2 are communicating with an Amazon Redshift instance in Subnet-1 within a VPC. Data is being transferred from Amazon S3 Bucket-1 and Bucket-2 to Amazon Redshift. ](images/escalation-of-privs.png) In the diagram, AWS Glue Job-1 is running in Cluster-1, and Job-2 is running in Cluster-2. Both jobs are working with the same instance of Amazon Redshift, which resides in Subnet-1 of a VPC. Subnet-1 could be a public or private subnet. Job-1 is transforming data from Amazon Simple Storage Service (Amazon S3) Bucket-1 and writing the data to Amazon Redshift. Job-2 is doing the same with data in Bucket-2. Job-1 uses the AWS Identity and Access Management (IAM) role Role-1 (not shown), which gives access to Bucket-1. Job-2 uses Role-2 (not shown), which gives access to Bucket-2. These jobs have network paths that enable them to communicate with each other's clusters and thus access each other's data. For example, Job-2 could access data in Bucket-1. In the diagram, this is shown as the path in red. To prevent this situation, we recommend that you attach different security configurations to Job-1 and Job-2. By attaching the security configurations, cross-job access to data is blocked by virtue of certificates that AWS Glue creates. The security configurations can be dummy configurations. That is, you can create the security configurations without enabling encryption of Amazon S3 data, Amazon CloudWatch data, or job bookmarks. All three encryption options can be disabled. For information about security configurations, see Encrypting Data Written by Crawlers, Jobs, and Development Endpoints. To attach a security configuration to a job Open the AWS Glue console at . On the Configure the job properties page for the job, expand the Security configuration, script libraries, and job parameters section. Select a security configuration in the list.
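The "dummy" security configurations described above can also be created programmatically before attaching them in the console. The following is a hedged boto3 sketch: the configuration names are placeholders, and the parameter shapes reflect the Glue API as I recall it, so verify them against the boto3 Glue reference for your SDK version.

# Hedged sketch: create two security configurations with all encryption
# modes disabled, one for Job-1 and one for Job-2.
import boto3

glue = boto3.client("glue")

for name in ("job1-security-config", "job2-security-config"):
    glue.create_security_configuration(
        Name=name,
        EncryptionConfiguration={
            "S3Encryption": [{"S3EncryptionMode": "DISABLED"}],
            "CloudWatchEncryption": {"CloudWatchEncryptionMode": "DISABLED"},
            "JobBookmarksEncryption": {"JobBookmarksEncryptionMode": "DISABLED"},
        },
    )

# Then attach each configuration to its job in the AWS Glue console as
# described in the steps above.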
https://docs.aws.amazon.com/glue/latest/dg/glue-known-issues.html
2021-06-12T20:13:38
CC-MAIN-2021-25
1623487586390.4
[]
docs.aws.amazon.com
.org.uk (United Kingdom) - Registration and renewal period One to ten years. - Restrictions - Privacy protection All information is hidden. - If you're transferring a .org.uk domain to Route 53, you don't need to get an authorization code. Instead, use the method provided by your current domain registrar to update the value of the IPS tag for the domain to GANDI, all uppercase. (An IPS tag is required by Nominet, the registry for .uk domain names.) If your registrar will not change the value of the IPS tag, contact Nominet . Note When you register a .org.uk domain, Route 53 automatically sets the IPS tag for the domain to GANDI. - DNSSEC Supported for domain registration. For more information, see Configuring DNSSEC for a domain. - Deadlines for renewing and restoring domains Renewal is possible: Between 180 days before and 30 days after the expiration date Late renewal with Route 53 is possible: Between 30 days and 90 days after expiration Domain is deleted from Route 53: 90 days after expiration Restoration with the registry is possible: No Domain is deleted from the registry: 92 days after expiration - Registrar The registrar for this TLD is our registrar associate, Gandi. - Deletion of domain registration The registry for .org.uk domains doesn't allow you to delete domain registrations. Instead, you must disable automatic renewal and wait for the domain to expire. For more information, see Deleting a domain name registration.
https://docs.aws.amazon.com/Route53/latest/DeveloperGuide/org.uk.html
2021-06-12T20:26:14
CC-MAIN-2021-25
1623487586390.4
[]
docs.aws.amazon.com
Utilities¶

Custom Commands¶

Custom commands give you a versatile way to have the bot respond to different terms. This could be used to advertise your Twitter, display your Discord invite, or have the bot ping someone!

Random Announcements¶

Ever wanted to put a random stream in a channel? Of course you did, and now you can!

Warning: The following commands have no confirmation and will execute when run.
https://docs.couch.bot/utility.html
2021-06-12T20:13:06
CC-MAIN-2021-25
1623487586390.4
[]
docs.couch.bot
Biconomy¶ Scalable Relayer Infrastructure for Blockchain Transactions.¶ Disclaimer: Projects themselves entirely manage the content in this guide. Moonbeam is a permissionless network. Any project can deploy its contracts to Moonbeam. Introduction¶ Biconomy is a scalable transaction relayer infrastructure, which can pay blockchain transaction's gas fee for your dApp user, while collecting fees from you on monthly basis, in form of some stable token. - Dapps require way too much onboarding and are too hard to even begin using. We need a solution to make onboarding easy for users. Non-crypto savvy new users will have to pass KYC, purchase ETH from an exchange, download a wallet, then connect their wallet before they can go any further, which can take days! No one waits for days to try out an application. - One of the major issues is need to hold Native currency for using dapps. Users can only pay in ETH, which they may not have at that moment. Or the user may not want to spend their ETH investment. - The necessity to pay a gas fee every time the user uses your application. Netflix does not charge you their AWS fees for every time you watch a video, so why should Dapps charge you gas fees for every interaction you do? -. What are Meta-Transactions?¶ Meta Transactions are transactions whose data is created and signed off-chain by a user and relayed to the network by another party who pays the gas fees. Since meta transactions are not native to the protocol, you would need to either use a 3rd party setup (e.g. Biconomy) or set up a service on your own. We enable this at scale by providing a non-custodial and gas efficient relayer infrastructure network. We support different approaches for implementing meta Transactions:¶ 1. Smart Contract Wallet¶ In this approach, for each user, an upgradable contract wallet is created, which acts as a proxy contract & relays all transactions to the destination smart contract. As user needs to keep all of their assets under supervision of this proxy contract, all blockchain transactions to be routed via this proxy contract. Biconomy supports Gnosis contract wallet integration. Checkout how you can integrate meta transactions via Gnosis smart contract wallet here. 2. Custom Implementation¶ If dApps support native meta transactions, then Biconomy's relayers would directly relay the transactions to the network without the need for any proxy contract. 3. EIP 2771 Standard Implementation¶. Using Biconomy's Mexa SDK you can integrate Biconomy into your DApp seamlessly. Using Biconomy's dashboard gives you a fine control on settings and configuration for enabling MetaTransactions. Integration¶ Integration using Mexa is a simple step process: - Register your DApp on Biconomy Dashboard, a dashboard for developers, and copy API Key generated for your DApp. - Add the contract address and contract ABI on which you wanna enable Meta Transaction to your Biconomy dashboard. - Integrate Mexa SDK in your DApp code using API Key you got from dashboard. Dashboard¶ Follow the steps here, to register an account and add a DApp to get the keys, and configure functions that will accept signed transactions. Using mexa¶ - To use Biconomy on Moonbeam you need to select Moonbeam network on your dashboard and follow similar steps given in our docs. - Now you can see the app on your dashboard. Configure it by adding your contract and functions. - Now Get inside the DApp client code directory, to configure meta transactions. Let's first install @biconomy/mexafrom npm. 
npm install @biconomy/mexa --save - Now you need to initialize Biconomy & web3. In place of <web3 provider>you can use window.ethereumif your DApp users are using MetaMask. import { Biconomy } from "@biconomy/mexa"; const biconomy = new Biconomy(<web3 provider>, {apiKey: <API Key>}); web3 = new Web3(biconomy); biconomy.onEvent(biconomy.READY, () => { // Initialize your dapp here like getting user accounts, initialising contracts and etc }).onEvent(biconomy.ERROR, (error, message) => { // Handle error while initializing mexa }); 🥳 Congrats, you've successfully integrated Biconomy into your Dapp and now your dapp supports meta Tx. You can checkout more in following links.¶ Learn More¶ Wanna make your users pay gas-fee in ERC20 tokens? We can help you with that. Read more about that on our docs. Wanna get in touch?¶ Discord: Biconomy's Discord Telegram: Biconomy's Telegram Twitter: Biconomy's Twitter
https://docs.moonbeam.network/dapps-list/metaTransactions/biconomy/
2021-06-12T20:20:40
CC-MAIN-2021-25
1623487586390.4
[array(['../../images/biconomy/logo-biconomy.png', '../images/biconomy/logo-biconomy.png'], dtype=object) array(['../../images/biconomy/native-meta-tx.png', '../images/biconomy/native-meta-tx.png'], dtype=object) array(['../../images/biconomy/trust-forwarder.png', '../images/biconomy/trust-forwarder.png'], dtype=object)]
docs.moonbeam.network
Consult with one of our doctors to become a legal Texas Medical Patient. Obtain your recommendations to access natural holistic treatment centers and delivery services. If You don’t qualify. You don’t pay. Scientific research has proven that a medical recommendation can provide several different benefits to patients suffering from a variety of illnesses ranging from cancer to epilepsy to multiple neurological conditions. It is a widely accepted form of holistic medicine and Texas 420 Doctors is your best option for receiving this treatment in Texas. Schedule your appointment by phone or using the form below. Speak with a Doctor via tele-health meeting. Receive your recommendations & medication. We provide a wide variety of options that can help to treat a wide variety of ailments. Here is what you can expect from us: Nicole walked me though the whole process while the doctor made me feel comfortable trying this natural medication. It has made a huge difference in my life! Texas 420 Doctors made the process of getting my recommendations so easy. Friendly doctors and knowledgeable staff! The process was so easy and I love that there are no hidden fees. Thanks TX 420 doctors! We are a fully licensed, HIPAA compliant medical clinic that can serve any Texas resident. Make an appointment and receive the best service in the state. We accept Debit Cards and all major Credit Cards. No other payment methods will be accepted.
https://texas420docs.com/
2021-06-12T19:50:41
CC-MAIN-2021-25
1623487586390.4
[]
texas420docs.com
Filter Features for Deeper Analysis¶ Spatial filters¶ Spatial filters are used to select features from one layer based on their location in relation to features from another layer. The overlapping, or intersecting, data will be filtered in the attribute table, and can be used for additional analysis. Click a feature on the map to select it. This will set the boundaries for the filter, and all of the returned data will be within this feature. Click the filter button to Use this feature in a spatial filter. The selected feature will change colors. Click a feature from the layer you want to filter, and click the Show Table button in the information window. This will open the attribute table for the entire layer. All of the features in this layer will display in the attribute table. Click the Spatial Filter button in the Table View. This filters the data to display only the features intersecting the original feature. Filter intersecting feature attributes You can expand your spatial filter by selecting additional features from your layer. The results will be displayed in your attributes table. In the first example, there were 15 results using the spatial filter. By selecting additional features, there are now 42 results that intersect the layer. A spatial filter can also be created using an individual point with a given radius, allowing you to see how many features from a second layer fall within that radius. Click a point on the map from the desired layer. This will be the base point. A blue circle will highlight the point. Click the filter button to Use this feature in a spatial filter. The selected feature will change colors. Enter the desired radius in meters when prompted. Click the Add Spatial Filter button. Click a feature from the layer you want to filter, and click the Show Table button in the information window. This will open the attribute table, which will include all layer features. Click the Spatial Filter button in the Table View. This filters the data to display only the features within the radius on the original point. This example shows how many Department of Health facilities are within a 4000 meter radius of central Lake Charles, LA. The spatial filter narrows the results down to 17 facilities out of 1458. You can edit the geometry of an existing spatial filter to adjust the size of the filter area. Select a spatial filter feature on the map, and click the Edit Geometry button. The selected feature will change colors and the Editing Geometry window will open. A blue dot will appear over the point on the feature to be moved. Click and drag the point to its new location. Repeat this process until all of the points have been moved to their new location. Select the Accept Feature button to finish your edits, and apply the new shape to your spatial filter. Delete a spatial filter¶ Once you are finished with your spatial filter, you may want to clear the results, and remove the filter from your map. - From the Table View of your filtered results, select the Spatial Filter button. This will clear the filter, and show all features within the layer. Close the Table View window. - Click on the feature you used in your spatial filter, and select the Delete Feature button. Confirm that you want to delete the feature. Combine filters¶ Combining a filter by attribute and a spatial filter allows you to dig even deeper into your data to provide better analysis. Once you have completed your spatial filter, you can use an Advanced Filter to drill down even further. 
- With an existing spatial filter on the map, open the table view of the layer you want to further filter. Your table will display all of the features in the layer. - Click the Advanced Filters button, and select the attribute you’d like to add to the spatial filter. Click the drop down menu to select the appropriate criteria. - Add your search term to the text box, and click the Apply Filters button. This will filter your layer to those features containing the attribute you want to apply to the spatial filter. - Click the Spatial Filter button to apply the spatial filter. Not only will all of your results fall completely within the area you selected for your spatial filter, but they will also meet your advanced filter criteria. Using the Department of Health layer from the previous example, we want to find out how many of the facilities within our 4000 meter radius are hospitals. We filtered all facility types (in the FacilityTO attribute) to those containing the word hospital. There were 254 results. Next, we applied the spatial filter. Our search helped us determine that out of 1458 features, four are hospitals within a 4000 meter radius of Lake Charles, LA. Filter features by timeline¶ Features will often have a time attribute detailing the specific time an event has occurred, or when a feature has changed. This information can be displayed in two ways. Continuous time focuses on the changes of a singular feature, such as the path of a tornado, or the spread of disease. Temporal data also tracks multiple features in single locations over time, such as store openings, lightning strikes, or cell phones pinging cell towers. Temporal data can be displayed in Exchange either as a whole (the entire layer at once), or it can be played back, with the features populating the map as the time bar progresses. Note: For this feature, the layer must have a date/time attribute. The time attribute is configured when the layer is uploaded. Please see the section on Configuring Time Attributes under Working with layers for more information. A layer with temporal data will have a toolbar with playback options at the bottom of the map. - Add a layer with the temporal data to the map. The playback options will display at the bottom of your map. - Click the Play button to begin the playback for the layer. The features will populate, and display the date/time along the timeline. - Select additional playback options. Playback options include: Play / Pause - Begins and stops the playback feature. You can click and drag the time slider to display features at a specific time, or click on the red lines along the timeline. The spacing of the lines indicates the times on the layer. Repeat - Loops the playback so it automatically begins once all of the temporal features have displayed. Step Back / Step Forward - Displays the previous feature again or skips forward to the next feature. - Select the Filter Features by Timeline button to display all of the features at once, essentially turning off the playback. Filter features by timeline turns off the timeline feature for a layer. Create a heat map¶ A heat map is a visual representation of your data, and allows you to see where your data is concentrated. - Select a point feature layer from your layers list. - Click the Show heatmap button to create a heat map layer. On the heat map, red indicates a high area of data concentration.
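Exchange applies these filters through its GUI, but the underlying point-radius test is easy to picture. The sketch below, using the Shapely library, is only a conceptual illustration with made-up coordinates and is not part of Exchange; real layers would also need a projected coordinate system so that the 4000 is actually metres.

```python
# Illustration of the point-radius spatial filter described above, using
# Shapely (pip install shapely). Exchange performs this through its GUI;
# the coordinates and the 4000-metre radius here are made up for the example.
from shapely.geometry import Point

# Base point (e.g. central Lake Charles in a metric projection) and radius.
base_point = Point(478000.0, 3345000.0)
search_area = base_point.buffer(4000)          # 4000 m circular filter area

# A handful of hypothetical facility locations from a second layer.
facilities = {
    "Hospital A": Point(478500.0, 3346000.0),
    "Clinic B": Point(490000.0, 3350000.0),
    "Hospital C": Point(476500.0, 3344200.0),
}

# Keep only the features that intersect the filter area.
within_radius = {
    name: geom for name, geom in facilities.items() if geom.intersects(search_area)
}
print(sorted(within_radius))   # ['Hospital A', 'Hospital C']
```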
https://docs.boundlessgeo.com/exchange/latest/using_exchange/guide/analysis/filtering.html
2021-06-12T21:05:59
CC-MAIN-2021-25
1623487586390.4
[]
docs.boundlessgeo.com
Diagnostics for InterSystems Business Intelligence: DeepSeeButtons is a tool used to generate diagnostic reports about your Business Intelligence environment. The HTML-formatted report provides information on the following aspects of your system: setup parameters; server details; a list of cubes and their properties; for each cube, a list of dimensions and their properties; for each cube, a list of other elements such as pivot variables, named sets, and listing fields; Business Intelligence logs; the content of the iris.cpf file; and the content of the messages.log file. To generate this report, launch the DeepSeeButtons tool from the terminal by making sure you are in the %SYS namespace and executing the following code: Do ^DeepSeeButtons Follow the subsequent prompts to generate the report. InterSystems recommends that you view the generated HTML in Chrome or Firefox.
https://docs.intersystems.com/irislatest/csp/docbook/DocBook.UI.Page.cls?KEY=D2IMP_BUTTONS
2021-06-12T21:20:57
CC-MAIN-2021-25
1623487586390.4
[]
docs.intersystems.com
JeraSoft Documentation Portal Docs for all releases This guide provides you with the basic information and best-practice guidelines you'll need to integrate the JeraSoft Billing software. There are two main technical ways of collecting call data from a switch system: via xDR files or via a RADIUS server. 1) xDR file import. Advantages: high reliability and stability; no call information is lost on either the server or the client side. Disadvantages: the data is received with a delay, because the files are copied at defined intervals. 2) RADIUS server. Advantages: data is received in real time (i.e., the call packet is sent right after the call ends), a call authorization procedure is available, and advanced routing features can be used. Disadvantages: when the network or server hardware is unstable, data may be lost (although this data can be restored later from xDR files). It is possible to combine both methods, but take into account the capabilities of your switching equipment, because not all equipment supports both methods at once. Some gateways support only one integration type (for example, Cisco gateways do not write xDR files, so integration is possible only through RADIUS). For more advanced functionality, use the left navigation bar to find instructions on integrating your JeraSoft VCS with different kinds of switching equipment for proper call billing and processing.
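To make the trade-off concrete, here is a minimal, generic sketch of the xDR-file approach: a script that periodically imports call-record files from a drop directory. The directory, file format, and five-minute interval are illustrative assumptions, not JeraSoft specifics.

```python
# Minimal, generic sketch of periodic xDR/CDR file import, illustrating why
# file-based collection trades latency for reliability. Directory layout,
# column names, and the 5-minute interval are illustrative assumptions.
import csv
import time
from pathlib import Path

INBOX = Path("/var/spool/xdr")          # hypothetical drop directory
PROCESSED = INBOX / "processed"

def import_xdr_file(path: Path) -> int:
    """Parse one xDR file and return the number of call records read."""
    with path.open(newline="") as fh:
        rows = list(csv.DictReader(fh))
    # ... hand `rows` to the billing pipeline here ...
    return len(rows)

def poll_once() -> None:
    PROCESSED.mkdir(parents=True, exist_ok=True)
    for xdr in sorted(INBOX.glob("*.csv")):
        count = import_xdr_file(xdr)
        xdr.rename(PROCESSED / xdr.name)   # keep the files so data can be re-imported
        print(f"imported {count} records from {xdr.name}")

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(300)   # data arrives with up to a 5-minute delay
```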
https://docs.jerasoft.net/display/IM
2021-06-12T19:40:04
CC-MAIN-2021-25
1623487586390.4
[]
docs.jerasoft.net
Reporting issues No software is perfect, and issues are to be expected. If you observe something that you think needs fixing, please provide an issue report that includes: From HomePort: - The HAR request that has failed or that contains the data required for rendering. - A screenshot of the error in a high enough resolution. From MasterMind: - The latest MasterMind logs that include the stack trace of the error, typically core.log and celery-celery.log. Warning Logs may contain sensitive data; please make sure that you either clean up the data or share it only with a trusted party. Last update: 2021-05-03
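Because the warning above asks you to clean sensitive data out of logs before sharing them, a small redaction pass can help. The sketch below is a generic illustration only; the patterns and file names are assumptions and it is not part of Waldur, so always review the output manually.

```python
# Generic sketch of scrubbing obviously sensitive values from a log file
# before attaching it to an issue report. The regexes and file names are
# illustrative assumptions; review the redacted output manually.
import re
from pathlib import Path

PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"), "<email>"),            # e-mail addresses
    (re.compile(r"(?i)(token|password|secret)=\S+"), r"\1=<redacted>"),
    (re.compile(r"\b\d{1,3}(?:\.\d{1,3}){3}\b"), "<ip>"),             # IPv4 addresses
]

def redact(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = pattern.sub(replacement, text)
    return text

if __name__ == "__main__":
    source = Path("core.log")
    Path("core.redacted.log").write_text(redact(source.read_text()))
```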
https://docs.waldur.com/about/reporting-issues/
2021-06-12T19:41:16
CC-MAIN-2021-25
1623487586390.4
[]
docs.waldur.com
2. Manager Configuration fails with Domain Authentication Error¶ While going through the installation process of the RaMP DCIM Manager Configuration Tool, you may come across a warning message, click [Yes] to proceed, click [Next] on the next screen, and then run into a domain authentication error. The issue is most likely due to an incorrect instance name; the standard SQL instance is usually .SQLExpress. Please correct it and then retry. When the installation has completed successfully, you should see the final confirmation screen.
https://docs.tuangru.com/faq/questions/Q2.html
2019-02-16T03:54:14
CC-MAIN-2019-09
1550247479838.37
[]
docs.tuangru.com
CentOS or RedHat 7.x is the recommended platform. - CentOS or RedHat 6.x is still available but not advised for production installation. See the dedicated guide. You must disable SELinux prior to the install. The server will need an Internet connection, as it will download external packages. Installation¶ This configures the dependencies and downloads the RPM packages. - Install EPEL: you will need EPEL for some dependencies. - Install the remi-safe repository (needed for PHP dependencies) with yum install. You can find more information about the installation of the remi-safe repository on the Remi's RPM repositories Repository Configuration page. - rh-mysql57-mysql-server. Store it securely and delete the file as soon as possible. Backups are under your responsibility, so you probably want to take a look at the Backup/Restore guide. Next steps¶ Once you have a fully running Tuleap, you can start using it: issue tracking, source code management, agile planning and more.
https://docs.tuleap.org/installation-guide/full-installation.html
2019-02-16T03:15:49
CC-MAIN-2019-09
1550247479838.37
[]
docs.tuleap.org
5 5/9/2011 - Created a sample log file for matlab (sampletracking_110505.txt). - Downloaded hourly pm2.5 data provided by DAQ from PCAPS page. - This data is in separate files for each day, so I concatenated them into one file with a bash script (see below) - Created a matlab script that reads sample log file and pm2.5 data and outputs a timeline of samples taken, and which have been run. Bash code to concatenate the daily pm2.5 files (minus headers in each file) # Run from directory containing all files # Last 24 lines in each file contain the data (the top is all header stuff) for i in *.txt; do tail -24 "$i" >> newfile.txt; done
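A rough Python equivalent of that bash loop, in case the concatenation needs to happen inside a script, is sketched below; it assumes the same layout (plain-text daily files with the hourly data in the last 24 lines).

```python
# Rough Python equivalent of the bash loop above: keep the last 24 data rows
# of every daily PM2.5 file and concatenate them into one file. Assumes plain
# text daily files whose final 24 lines hold the hourly data, like the bash version.
from pathlib import Path

daily_files = sorted(Path(".").glob("*.txt"))          # run from the data directory
with open("newfile.txt", "w") as out:
    for daily in daily_files:
        if daily.name == "newfile.txt":                # skip output from a previous run
            continue
        lines = daily.read_text().splitlines(keepends=True)
        out.writelines(lines[-24:])                    # last 24 lines = the hourly data
```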
https://earthscinotebook.readthedocs.io/en/latest/wasatchsnowdep/analysislog_1/
2019-02-16T04:22:51
CC-MAIN-2019-09
1550247479838.37
[]
earthscinotebook.readthedocs.io
More than 100k products to export?¶ In the past, we encountered use cases where partners would export 270k products and ran into memory-usage issues. Most of the PIM's massive operations, such as imports and exports, process the products iteratively in subsets of a configured size in order to minimize memory usage. As each product may have different properties, the export operation used to keep the transformed array in memory in order to add missing columns from one line to another. In version 1.5.0, we changed the internal behavior of the CsvProductWriter to use a file buffer that temporarily stores each transformed array and then aggregates the final result. We decoupled the CsvProductWriter from the buffer component so that the buffer can be used in other contexts. In conclusion, the main limitation of the product export is now hard drive space rather than available memory. Please note that the number of values per product will still have an impact on execution time and memory usage. Found a typo or a hole in the documentation and feel like contributing? Join us on GitHub!
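The CsvProductWriter change is PHP, but the buffering idea itself is simple. The Python sketch below is only an illustration of that two-pass approach under the stated assumptions (JSON-encoded rows in a temporary file, union of columns collected on the first pass); it is not Akeneo code.

```python
# Sketch (not Akeneo's PHP code) of the buffering idea described above:
# stream transformed product rows into a temporary file, collect the union of
# column names as you go, then write the final CSV in a second pass so memory
# use stays roughly constant regardless of the number of products.
import csv
import json
import tempfile
from typing import Dict, Iterable

def export_products(products: Iterable[Dict[str, str]], destination: str) -> None:
    columns = []  # ordered union of all column names seen so far
    with tempfile.TemporaryFile(mode="w+", newline="") as buffer:
        # First pass: buffer each transformed row and remember new columns.
        for product in products:
            for key in product:
                if key not in columns:
                    columns.append(key)
            buffer.write(json.dumps(product) + "\n")

        # Second pass: replay the buffer and emit a CSV with the full header.
        buffer.seek(0)
        with open(destination, "w", newline="") as out:
            writer = csv.DictWriter(out, fieldnames=columns, restval="")
            writer.writeheader()
            for line in buffer:
                writer.writerow(json.loads(line))

# Example: rows with different properties still end up with aligned columns.
export_products(
    [{"sku": "A1", "color": "red"}, {"sku": "B2", "size": "XL"}],
    "products.csv",
)
```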
https://docs.akeneo.com/latest/maintain_pim/scalability_guide/more_than_100k_products_to_export.html
2019-12-05T17:19:48
CC-MAIN-2019-51
1575540481281.1
[]
docs.akeneo.com
# What is a discussion and how can I use it to increase student interaction in their learning? > [!Alert] Please be aware that not all functionality covered in this and linked articles may be available to you. A Discussion is a great feature for enhancing learning by giving students and instructors a place to hold asynchronous, topic-structured conversations regarding a course. A Discussion contains topics that guide the conversation. When it is added to a course, a person with access to the course, its classes or course assignments may participate if they have the prerequisite permissions. Without the permissions, the discussion is not seen. There are three main “roles” with discussions, each engaging different permissions: 1. **Administrators** – Those who create discussions and their topics and associate them with courses. 1. **Moderators** – Those who monitor the discussion and can remove posts, responses, and comments as necessary. 1. **Participants** – Those who interact with the discussion by making posts, responses, and comments. By creating well-thought out discussions and topics and attaching them to relevant courses, you can engage your learners beyond the class enrollment or course assignment. It adds an element of social learning and provides your users with a great way of sharing ideas, asking questions, and growing knowledge beyond the courseware. ## Related Articles For more information on Discussions, please see: - [How do I create a discussion and attach it to a course?](create-discussion.md) - [How can I control posts on discussions?](add-moderators.md) - [How can I add a disclaimer to all my discussions?](add-disclaimer.md) - [How do my students and I participate in discussions?](participation.md) - [How can I be notified of activity on a discussion?](admin-follow.md)
https://docs.learnondemandsystems.com/tms/tms-administrators/discussions/what-is-discussion.md
2019-12-05T18:26:01
CC-MAIN-2019-51
1575540481281.1
[]
docs.learnondemandsystems.com
(PHP 4, PHP 5, PHP 7) getenv — Gets the value of an environment variable getenv ( string $varname ) : string Gets the value of an environment variable. You can see a list of all environment variables by using phpinfo(). Many of these variables are listed within » RFC 3875, specifically section 4.1, "Request Meta-Variables". varname The variable name. Returns the value of the environment variable varname, or FALSE if the environment variable varname does not exist. Example #1 getenv() example <?php // Example use of getenv() $ip = getenv('REMOTE_ADDR'); // Or simply use a Superglobal variable ($_SERVER or $_ENV) $ip = $_SERVER['REMOTE_ADDR']; ?> Contrary to what eng.mrkto.com said, getenv() isn't always case-insensitive. On Linux it is not: <?php var_dump(getenv('path')); // bool(false) var_dump(getenv('Path')); // bool(false) var_dump(getenv('PATH')); // string(13) "/usr/bin:/bin" This function is useful (compared to $_SERVER, $_ENV) because it searches for the $varname key in those arrays in a case-insensitive manner. For example, on Windows $_SERVER['Path'] is capitalized, as you can see, not 'PATH' as you might expect. So just use: <?php getenv('path') ?> From PHP 7.1, getenv() no longer requires its parameter. If the parameter is omitted, the current environment variables are returned as an associative array. As noted on httpoxy.org, getenv() can confuse you into believing that all variables come from a "safe" environment (not all of them do). In particular, $_SERVER['HTTP_PROXY'] (or its equivalent getenv('HTTP_PROXY')) can be manually set in the HTTP request header, so it should not be considered safe in a CGI environment. In short, try to avoid using getenv('HTTP_PROXY') without properly filtering it. Beware that when using this function with the PHP built-in server – i.e. php -S localhost:8000 – it will return boolean FALSE. As you know, getenv('DOCUMENT_ROOT') is useful. However, in a CLI environment (I tend to do a quick check to see whether something works or not), it doesn't work without a modified php.ini file. So I add "export DOCUMENT_ROOT=~" in my .bash_profile. When writing CLI applications, note that any environment variables that are set in your web server config will not be passed through. PHP will pass through the system environment variables that are permitted based on the safe_mode_allowed_env_vars directive in your php.ini. A quick check of getenv() after adding a new env variable: if you add a new env variable, make sure that not only Apache but also XAMPP is restarted. Otherwise getenv() will return false for the newly added env variable. It is worth noting that since getenv('MY_VARIABLE') will return false when the given variable is not set, there is no direct way to distinguish between a variable that is unset and one that is explicitly set to the value bool(false) when using getenv(). This makes it somewhat tricky to have boolean environment variables default to true if unset, which you can work around either by using "falsy" values such as 0 with the strict comparison operators or by using the superglobal arrays and isset(). SERVER_NAME is the name defined in the apache configuration. HTTP_HOST is the host header sent by the client when using the more recent versions of the http protocol.
The example on how to fall back produces a syntax error on PHP 5.2: -bash-3.2$ cat test.php <?php $ip = getenv('REMOTE_ADDR', true) ?: getenv('REMOTE_ADDR') ?> -bash-3.2$ /web/cgi-bin/php5 test.php Content-type: text/html <br /> <b>Parse error</b>: syntax error, unexpected ':' in <b>/home/content/25/11223125/test.php</b> on line <b>3</b><br /> On PHP 5.2, one must write $ip = getenv('REMOTE_ADDR', true) ? getenv('REMOTE_ADDR', true) : getenv('REMOTE_ADDR') The getenv() function does not work if your Server API is ISAPI (IIS). So try not to use getenv('REMOTE_ADDR'); use $_SERVER["REMOTE_ADDR"] instead.
http://docs.php.net/manual/it/function.getenv.php
2019-12-05T17:57:19
CC-MAIN-2019-51
1575540481281.1
[]
docs.php.net
Managing an Organization¶ Gandi products (domain names, Simple Hosting, Cloud Servers, etc.) are managed through a collaborative system of “organizations” that lets you share the management of your products through the use of teams. If you manage and purchase products for many clients you may also consider joining our Reseller program. Our Resellers vs Organizations page explains the differences between the two ways of working with your customers through Gandi. Pages in This Section
https://docs.gandi.net/en/managing_an_organization/index.html
2019-12-05T17:01:41
CC-MAIN-2019-51
1575540481281.1
[]
docs.gandi.net
New Tab Overview The Sametab Browser Extension modifies the layout of the browser new tab so that when there's a new announcement it's visible to everyone. 👉 Head over to our getting started guide to see exactly how to get everyone in your team to install the Sametab browser extension. 👉 Head over to our New Tab feature section to dive deep into each section of Sametab's New Tab. If you already know what the full feature set of the Sametab Browser Extension is, and you want to learn more about how to customize it, this is the right place for you. Tips! Only Workspace Owners or Workspace Admins can edit the global configuration of the New Tab. Make sure you have the right permission level. If you're unsure about what permission level you have, check out this guide. New Tab Branding In this section you'll see how to customize Sametab to truly make it yours. New Tab Logo The first thing that you want to customize is the general branding. When you sign up for Sametab we use our own Sametab logo. To truly make it yours, you should upload your own company logo. Let's get started. To edit the default Sametab logo, click "Edit" on the New Tab Logo section. We give some guidelines on how to make this look good on the New Tab. This logo (unlike your workspace logo) should be square. Ideally it should be transparent and cropped at the edges. The dimensions that we recommend are 300 wide x 150 tall. The preview on the left will give you a concrete idea of how your logo fits the recommended dimensions. Once you've uploaded your logo, make sure you check the preview by clicking the adjacent See Preview button. This will open a new static page with your logo in it. If it doesn't look good or it's not yet sufficient, we recommend you keep editing and repeat the process. Make sure you don't click Save until you're very satisfied with how it looks in the preview. When you click Save the edits will be deployed to everyone's browser new tab, so make sure you don't do that by mistake. After you hit the Save button, Sametab takes up to 1 minute to deploy the edits on all your team members' browser new tabs. If you're not seeing them right away, that's totally normal. We have to add this little extra delay in order to make sure the Browser Extension stays lean and fast. Accent color Now that you have a logo, you want to customize the accent color of your New Tab. When you pick an Accent Color we will automatically generate a rich color palette that we'll use to make sure the details on your New Tab match your company branding & identity. Once you've picked an accent color, make sure you check the preview by clicking the adjacent See Preview button. This will open a new static page with your accent color. Interact with the search and make sure everything is clearly visible. Keep in mind! After you hit the Save button, Sametab takes up to 1 minute to deploy the edits on all your team members' browser new tabs. Theming Select Default Theme When people install the Sametab Chrome Extension for the first time, they will by default see the custom company theme (it's the one we recommend). If you'd like to provide them with a different first experience, select one of the other two available themes. If you want to know more about what changes between the Custom, Google Chrome and Wallpaper themes, check out this in-depth guide. Bookmarks If you haven't read our guide about Bookmarks yet, go here first. From your workspace you can control both Pinned Bookmarks and Searchable Bookmarks.
How to add a Bookmark - Click on the Add Bookmark button - Then type a description (this will be visible in the search bar) - Click Add Add a Pinned Bookmark If you want to pin that bookmark to the right side of your New Tab, make sure you also enable the Pinned toggle. Tip! Pinned bookmarks are also searchable from the global search. That's why we recommend you type a description anyway. Remove a Bookmark from the pinbar but leave it as searchable - Option button - Click Edit - Disable the Pinned toggle - Click Save Remove a Bookmark from the pinbar and search - Option button - Click Remove - Confirm Adjust order You can adjust the order of the bookmarks on the pinbar. Freely drag and drop them. Just keep in mind there's no undo and changes are auto-saved. After you make some changes to the bookmark section, Sametab takes up to 1 minute to deploy the edits across all your team members' browser new tabs. If you're not seeing them right away – that's totally normal. Time zones If you haven't read our in-depth guide about time zones yet, go here first. Adding and removing a time zone is pretty straightforward. Go to the dedicated time zone section and create a new one by clicking the Add time zone button. You'll need to specify a display name and a time zone from a dropdown menu. Tip! You can search from the dropdown menu. Feel free to type in it. If you don't find a time zone that you need, you might be looking for the wrong city. Contact us and we'll help you as fast as we can. Edit a time zone - Option button - Click Edit - Click Save Remove a time zone - Option button - Click Remove - Confirm Adjust order You can adjust the order of the time zones. Freely drag and drop them. Just keep in mind there's no undo and changes are auto-saved. Quotes If you haven't read our in-depth guide about Quotes yet, go here first. To add a new quote, go to the dedicated quote section and click the Add quote button. You'll need to specify a display name and an (optional) Author. Not using Sametab yet? Get your free account here. 👈
https://docs.sametab.com/docs/admin/new-tab-configuration/
2019-12-05T18:29:07
CC-MAIN-2019-51
1575540481281.1
[array(['/docs/static/images/adminland/new-tab-logo.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/accent-color.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/default-theme.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/pinned-bookmarks.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/all-bookmarks.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/custom-timezones.png', 'write-new-sametab-announcement small-img'], dtype=object) array(['/docs/static/images/adminland/quotes.png', 'write-new-sametab-announcement small-img'], dtype=object)]
docs.sametab.com
Secure Shell is a standard for securely logging into a remote system and for executing commands on the remote system. It allows other connections, called tunnels, to be established and protected between the two involved systems. This standard exists in two primary versions, and only version two is used for the FreeBSD Project. The most common implementation of the standard is OpenSSH that is a part of the project's main distribution. Since its source is updated more often than FreeBSD releases, the latest version is also available in the ports tree. This, and other documents, can be downloaded from. For questions about FreeBSD, read the documentation before contacting <[email protected]>. For questions about this documentation, e-mail <[email protected]>.
https://docs.huihoo.com/freebsd/en_us/2006/dev-model/tool-ssh2.html
2019-12-05T17:08:53
CC-MAIN-2019-51
1575540481281.1
[]
docs.huihoo.com
# How do I access and change my user profile? If you want to change your username, contact information, or any other information in your user profile: 1. In the top right corner of any page, click **your name** to open your **User Profile**. 1. On your **User Profile** page, click **Edit**. 1. Revise any field in your user profile and then click **Save**.
https://docs.learnondemandsystems.com/tms/end-user-student-faqs/basics/change-user-profile.md
2019-12-05T18:23:21
CC-MAIN-2019-51
1575540481281.1
[]
docs.learnondemandsystems.com
Class extension - Method wrapping and Chain of Command Important Dynamics 365 for Finance and Operations is now being licensed as Dynamics 365 Finance and Dynamics 365 Supply Chain Management. For more information about these licensing changes, see Dynamics 365 Licensing Update. The functionality for class extension, or class augmentation, has been improved.. BusinessLogic1 object = new BusinessLogic1(); info(object The. Capabilities The following sections give more details about the capabilities of method wrapping and CoC. Wrapping public and protected methods Protected or public methods of classes, tables, data entities, or forms can be wrapped by using an extension class. The wrapper method must have the same signature as the base method. - When you augment form classes, only root-level methods can be wrapped. You can't wrap methods that are defined in nested classes. - Currently, only methods that are defined in regular classes can be wrapped. Methods that are defined in extension classes can't be wrapped by augmenting the extension classes. This capability is planned for a future update.(class. class A { public static void aStaticMethod(int parameter1) { // ... } } In this case, the wrapper method must resemble the following example. [ExtensionOf(classStr(A)] final class An_Extension { public static void aStaticMethod(int parameter1) { // ... next aStaticMethod(parameter1); } } Important The. Note If a method is replaceable, extenders don't have to unconditionally call next when wrapping the method by using chain of command. Although extenders can break the chain, the expectation is that they will only conditionally break it. The compiler doesn't enforce calls to next for methods with the attribute, Replaceable. Wrapping a base method in an extension of a derived class The following example shows how to wrap a base method in an extension of a derived class. For this example, the following class hierarchy is used.. [ExtensionOf(classStr(B))] final class B_Extension { public void salute(str message) { next salute(message); info("B extension"); } } Although the AnyClass2 { [Wrappable(false)] public void doSomething(str message) {...} [Wrappable(true)] final public void doSomethingElse(str message) {...} } Extensions of form-nested concepts such as data sources, data fields, and controls In order to implement CoC methods for form-nested concepts, such as data sources, data fields, and controls, an extension class is required for each nested concept. Form data sources In this example, FormToExtend is the form, DataSource1 is a valid existing data source in the form, and init and validateWrite are methods that can be wrapped in the data source. [ExtensionOf(formdatasourcestr(FormToExtend, DataSource1))] final class FormDataSource1_Extension { public void init() { next init(); //... //use element.FormToExtendVariable to access form's variables and datasources //element.FormToExtendMethod() to call form methods } public boolean validateWrite() { boolean ret; //... ret = next validateWrite(); //... return ret; } } Form data fields In this example, a data field is extended. FormToExtend is the form, DataSource1 is a data source in the form, Field1 is a field in the data source, and validate is one of many methods that can be wrapped in this nested concept. [ExtensionOf(formdatafieldstr(FormToExtend, DataSource1, Field1))] final class FormDataField1_Extension { public boolean validate() { boolean ret //... ret = next validate(); //... 
return ret; } } Controls In this example, FormToExtend is the form, Button1 is the button control in the form, and clicked is a method that can be wrapped on the button control. [ExtensionOf(formControlStr(FormToExtend, Button1))] final class FormButton1_Extension { public void clicked() { next clicked(); //... } } Requirements and considerations when you write CoC methods on extensions for form-nested concepts Like other CoC methods, these methods must always call next to invoke the next method in the chain, so that the chain can go all the way to the kernel or native implementation in the runtime behavior. The call to next is equivalent to a call to super() from the form itself to help guarantee that the base behavior in the runtime is always run as expected. Currently, the X++ editor in Microsoft Visual Studio doesn't support discovery of methods that can be wrapped. Therefore, you must refer to the system documentation for each nested concept to identify the correct method to wrap and its exact signature. You cannot add CoC to wrap methods that aren't defined in the original base behavior of the nested control type. For example, you can't add methodInButton1 CoC on an extension. However, from the control extension, you can make a call into this method if the method has been defined as public or protected. Here is an example where the Button1 control is defined in the FormToExtend form in such a way that it has the methodInButton1 method. [Form] public class FormToExtend extends FormRun { [Control("Button")] class Button1 { public void methodInButton1(str param1) { info("Hi from methodInButton1"); //... You do not have to recompile the module where the original form is defined to support CoC methods on nested concepts on that form from an extension. For example, if the FormToExtend form from the previous examples is in the ApplicationSuite module, you don't have to recompile ApplicationSuite to extend it with CoC for nested concepts on that form from a different module. Extensions of tables and data entities An extension class is required for each concept. Tables In this example, TableToExtend is the table and delete, canSubmitToWorkflow, and caption are methods that can be wrapped in the table. [ExtensionOf(tablestr(TableToExtend))] final class TableToExtend_Extension { public void delete() { next delete(); //... } public boolean canSubmitToWorkflow(str _workflowType) { boolean ret; //... ret = next canSubmitToWorkflow(_workflowType); //... return ret; } public str caption() { str ret; //... ret = next caption(); //... return ret; } } Data entities In this example, DataEntityToExtend is the data entity and validateDelete and validateWrite are methods that can be wrapped in the data entity. [ExtensionOf(tableStr(DataEntityToExtend))] final class DataEntityToExtend_Extension { public boolean validateDelete() { boolean ret; //... ret = next validateDelete(); //... return ret; } public boolean validateWrite() { boolean ret; //... ret = next validateWrite(); //... return ret; } } Restrictions on wrapper methods The following sections describe restrictions on the use of CoC and method wrapping.. Methods on types nested within forms can be wrapped in Platform update 16 and later The ability to wrap methods on types nested within forms (data sources and controls) by using class extensions was added in Platform update 16. This means that Chain of Command can be used to provide overrides for data source methods and form control methods. 
However, wrapping (extension) of purely X++ methods on those nested types (form controls and form data sources) is not yet supported like it is on other types (forms, tables, data entities). Currently, if a developer uses Chain of Command on purely X++ methods on types inside forms, then it compiles, but the extension methods are not invoked at runtime. This capability is planned for a future update. Unimplemented system methods on tables and data entities can be wrapped in Platform update 22 and later The ability to wrap methods in nested classes by using class extensions was added in Platform update 16. The concept of nested classes in X++ applies to forms for overriding data source methods and form control methods. Next calls can be put inside try/catch/finally in Platform update 21 and later In a CoC extension method, the next call must not be called conditionally. However, in Platform update 21 and later next calls can be placed inside a try/catch/finally to allow for standard handling of exceptions and resource cleanup. public void someMethod() { try { //... next updateBalances(); //... } catch(Exception::Error) { //... } } Extensions of extensions are not yet supported Currently, only methods that are defined in regular classes can be wrapped. Methods that are defined in extension classes can't be wrapped by augmenting the extension classes. This capability is planned for a future release. Tooling For the features that are described in this topic, the Microsoft Visual Studio X++ editor doesn't yet offer complete support for cross-references and Microsoft IntelliSense. Feedback
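Chain of Command is an X++ feature, but the wrapping discipline it enforces can be pictured in any language. The Python sketch below is only a language-agnostic analogy with invented class names: each extension wraps the method and must call the next link in the chain, mirroring the mandatory next call described above.

```python
# Language-agnostic analogy (in Python, not X++) of Chain of Command:
# extensions wrap a base method and must call the next link in the chain so
# the original behaviour always runs. Class names are invented for illustration.
class BusinessLogic:
    def do_something(self, message: str) -> None:
        print(f"base: {message}")

class ExtensionA:
    """Wraps do_something, adding behaviour before and after the next call."""
    def __init__(self, next_link):
        self._next = next_link
    def do_something(self, message: str) -> None:
        print("extension A: before")
        self._next.do_something(message)      # the mandatory 'next' call
        print("extension A: after")

class ExtensionB:
    def __init__(self, next_link):
        self._next = next_link
    def do_something(self, message: str) -> None:
        print("extension B: before")
        self._next.do_something(message)
        print("extension B: after")

# The runtime builds the chain: the last-registered extension wraps the others.
chain = ExtensionB(ExtensionA(BusinessLogic()))
chain.do_something("hello")
# extension B: before / extension A: before / base: hello / extension A: after / extension B: after
```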
https://docs.microsoft.com/en-us/dynamics365/fin-ops-core/dev-itpro/extensibility/method-wrapping-coc
2019-12-05T18:19:56
CC-MAIN-2019-51
1575540481281.1
[]
docs.microsoft.com
Mule Runtime 3.8.1 Release Notes August 19, 2016 Mule Runtime 3.8.1 includes the following enhancements: GA support for RAML 1.0 Improved support for COBOL Copybook with DataWeave Support for user defined data types in the database connector DataWeave support for bigger files Many other bug fixes and improvements The DataWeave Maven artifactId has changed in this release. COBOL Copybook support for DataWeave is limited. The following release notes describe the change and limitation. Supported Software Mule was tested on the following software: The unified Mule Runtime 3.8.1 and API Gateway is compatible with the following software: APIkit 3.8.x Anypoint Studio 6.0.x and 6.1.0 Enhancements and Changes DataWeave Support for Bigger Files: DataWeave now supports processing files up to 2 GB for XML, JSON and CSV by using temporary files to store the data instead of loading everything in memory. No change is required to DataWeave applications to use this feature. The new capability takes effect automatically. When a file reaches the 1.5MB threshold, data storage to temporary files begins. Change to DataWeave Maven ArtifactId: The artifactId has been changed from mule-plugin-weave_2.11 to mule-plugin-weave. Change to DataWeave auto conversion algorithm: The algorithm used to auto convert data types has been modified to make it more predictable. Some conversions might work differently. Specifically sizeOf of a numeric attribute will return 1 now. Note that sizeOf() doesn’t support numeric arguments, so it will be autoconverted. In these cases it is recommended to use explicit type coersion. Example: sizeOf(payload.age as :string). Changes to Flat File Format Reader Configuration for DataWeave When using an input of type flat file in DataWeave, the encoding parameter is no longer available as a reader property. Character encoding is now defined from standard DataWeave conventions (normally: getting it from the Mule message properties). Added a reader property missingValues to control how to represent fields without values. Known issues COBOL Copybook support for DataWeave does not support the following: Zoned decimals REDEFINEs. You can modify your Copybook to remove the REDEFINE and specify which segment you wish to use. COMP-1, COMP-2, COMP-5 VALUE clauses for non-alphanumeric fields DATE FORMAT RENAME SYNCHRONIZED Periods in PIC clauses or in VALUE clauses Tab characters in the Copybook Migration Guide If you’re migrating from 3.8.0 and using Mule with Maven and you are using DataWeave in your projects, you must modify the POM reference to DataWeave, as it has now changed name slightly. See DataWeave XML Reference for more details. Bundled Runtime Manager Agent This version of Mule runtime comes bundled with the Runtime Manager Agent plugin version 1.5.1.
https://docs.mulesoft.com/release-notes/mule-runtime/mule-3.8.1-release-notes
2019-12-05T17:46:40
CC-MAIN-2019-51
1575540481281.1
[]
docs.mulesoft.com
Language definitions¶ To properly present different translations, Weblate needs some info about the languages used. Definitions for about 350 languages are currently included; each definition includes the language name, text direction, plural definitions and language code. Parsing language codes¶ While parsing translations, Weblate attempts to map the language code (usually the ISO 639-1 one) to an existing language object. If no exact match can be found, an attempt will be made to best fit it into an existing language (e.g. ignoring the default country code for a given language - choosing cs instead of cs_CZ). Should this also fail, a new language definition will be created using the defaults (left-to-right text direction, one plural) and the language will be named xx_XX (generated). You might want to change this in the admin interface (see Changing language definitions) and report it to the issue tracker (see Contributing). Changing language definitions¶ You can change language definitions in the admin interface (see Django admin interface). The Weblate languages section allows changing or adding language definitions. While editing, make sure all fields are correct (especially plurals and text direction), otherwise translators will be unable to properly edit those translations.
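The fallback order described above (exact match, then best fit without the country code, then a generated default) can be sketched in a few lines. The snippet below is not Weblate's actual implementation, and its sample language table is invented; it only illustrates the matching logic.

```python
# Small sketch of the language-code fallback described above: try an exact
# match, then drop the country part (cs_CZ -> cs), and finally fall back to a
# generated default (left-to-right, one plural). Not Weblate's real code;
# the sample language table is invented for illustration.
KNOWN_LANGUAGES = {
    "cs": {"name": "Czech", "direction": "ltr", "plurals": 3},
    "en": {"name": "English", "direction": "ltr", "plurals": 2},
}

def resolve_language(code: str) -> dict:
    # 1. Exact match on the full code.
    if code in KNOWN_LANGUAGES:
        return KNOWN_LANGUAGES[code]
    # 2. Best fit: ignore the default country code (cs_CZ -> cs).
    base = code.replace("-", "_").split("_")[0].lower()
    if base in KNOWN_LANGUAGES:
        return KNOWN_LANGUAGES[base]
    # 3. Create a new default definition: ltr text, one plural form.
    return {"name": f"{code} (generated)", "direction": "ltr", "plurals": 1}

print(resolve_language("cs_CZ"))   # falls back to the Czech definition
print(resolve_language("xx_XX"))   # generated default definition
```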
http://docs.weblate.org/en/weblate-3.9.1/admin/languages.html
2019-12-05T17:41:15
CC-MAIN-2019-51
1575540481281.1
[]
docs.weblate.org
XML cross-site scripting check The XML Cross-Site Scripting check examines the user requests for possible cross-site scripting attacks in the XML payload. If it finds a possible cross-site scripting attack, it blocks the request. To prevent misuse of the scripts on your protected web services to breach security on your web services, the XML. The Web App Firewall offers various action options for implementing XML Cross-Site Scripting protection. You have the option to configure Block, Log, and Stats actions. The Web App Firewall XML XSS check is performed on the payload of the incoming requests and attack strings are identified even if they are spread over multiple lines. The check looks for XSS attack strings in the element and the attribute values. You can apply relaxations to bypass security check inspection under specified conditions. The logs and statistics can help you identify needed relaxations. The CDATA section of the XML payload might be an attractive area of focus for the hackers because the scripts are not executable outside the CDATA section. A CDATA section is used for content that is to be treated entirely as character data. HTML mark up tag delimiters “<”, “>”, and “/>” will not cause the parser to interpret the code as HTML elements. The following example shows a CDATA Section with XSS attack string: <![CDATA[rn <script language="Javascript" type="text/javascript">alert ("Got you")</script>rn ]]> Action OptionsAction Options An action is applied when the XML Cross-Site Scripting check detects an XSS attack in the request. The following options are available for optimizing XML Cross-Site Scripting protection for your application: - Block—Block action is triggered if the XSS tags are detected in the request. - Log—Generate log messages indicating the actions taken by the XML Cross-Site Scripting check. If block is disabled, a separate log message is generated for each location (ELEMENT, ATTRIBUTE) in which the XSS violation is detected. However, only one message is generated when the request is blocked. You can monitor the logs to determine whether responses to legitimate requests are getting blocked. A large increase in the number of log messages can indicate attempts to launch an attack. - Stats—Gather statistics about violations and logs. An unexpected surge in the stats counter might indicate that your application is under attack. If legitimate requests are getting blocked, you might have to revisit the configuration to see if you need to configure new relaxation rules or modify the existing ones. Relaxation RulesRelaxation Rules If your application requires you to bypass the Cross-Site Scripting check for a specific ELEMENT or ATTRIBUTE in the XML payload, you can configure a relaxation rule. The XML Cross-Site Scripting check relaxation rules have the following parameters: - Name—You can use literal strings or regular expressions to configure the name of the ELEMENT or the Attribute. The following expression exempts all ELEMENTS beginning with the string name_ followed by a string of uppercase or lowercase letters, or numbers, that is at least two and no more than fifteen characters long: ^name_[0-9A-Za-z]{2,15}$ Note The names are case sensitive. Duplicate entries are not allowed, but you can use capitalization of the names and differences in location to create similar entries. 
For example, each of the following relaxation rules is unique: 1) XMLXSS: ABC IsRegex: NOTREGEX Location: ATTRIBUTE State: ENABLED 2) XMLXSS: ABC IsRegex: NOTREGEX Location: ELEMENT State: ENABLED 3) XMLXSS: abc IsRegex: NOTREGEX Location: ELEMENT State: ENABLED 4) XMLXSS: abc IsRegex: NOTREGEX Location: ATTRIBUTE State: ENABLED - Location—You can specify the Location of the Cross-site Scripting Check Cross-Site Scripting check would otherwise have blocked. Using the Command Line to Configure the XML Cross-Site Scripting checkUsing the Command Line to Configure the XML Cross-Site Scripting check To configure XML Cross-Site Scripting check actions and other parameters by using the command line If you use the command-line interface, you can enter the following commands to configure the XML Cross-Site Scripting Check: > set appfw profile <name> -XMLXSSAction (([block] [log] [stats]) | [none]) To configure a XML Cross-Site Scripting check relaxation rule by using the command line You can add relaxation rules to bypass inspection of XSS script attack inspection in a specific location. Use the bind or unbind command to add or delete the relaxation rule binding, as follows: > bind appfw profile <name> -XMLXSS <string> [isRegex (REGEX | NOTREGEX)] [-location ( ELEMENT | ATTRIBUTE )] –comment <string> [-state ( ENABLED | DISABLED )] > unbind appfw profile <name> -XMLXSS <String> Example: > bind appfw profile test_pr -XMLXSS ABC After executing the above command, the following relaxation rule is configured. The rule is enabled, the name is treated as a literal (NOTREGEX), and ELEMENT is selected as the default location: 1) XMLXSS: ABC IsRegex: NOTREGEX Location: ELEMENT State: ENABLED `> unbind appfw profile test_pr -XMLXSS abc` ERROR: No such XMLXSS check `> unbind appfw profile test_pr -XMLXSS ABC` Done Using the GUI to configure the XML Cross-Site scripting checkUsing the GUI to configure the XML Cross-Site scripting check In the GUI, you can configure the XML Cross-Site scripting check in the pane for the profile associated with your application. To configure or modify the XML Cross-Site Scripting the XML Cross-Site Scripting check, you can select or clear check boxes in the table, click OK, and then click Save and Close to close the Security Check pane. b) You can double click XML Cross-Site Scripting, or select the row and click Action Settings, to display the action options. After changing any of the action Cross-Site Scripting relaxation rule by using the GUI - Navigate to Web App Firewall > Profiles, highlight the target profile, and click Edit. - In the Advanced Settings pane, click Relaxation Rules. - In the Relaxation Rules table, double-click the XML Cross-Site Scripting entry, or select it and click Edit. - In the XML Cross-Site Scripting Relaxation Rules dialogue box, perform Add, Edit, Delete, Enable, or Disable operations for relaxation rules. To manage XML Cross-Site Scripting relaxation rules by using the visualizer For a consolidated view of all the relaxation rules, you can highlight the XML Web App the profile that is processing the traffic for which you want to use these customized allowed and denied lists. For more information about signatures, see. To view default XSS patterns: - Navigate to Web App Firewall > Signatures, select *Default Signatures, and click Edit. Then click Manage SQL/XSS Patterns. The Manage SQL/XSS Paths table shows following three rows pertaining to XSS : xss/allowed/attribute xss/allowed/tag xss/denied/pattern. 
- Navigate to Web App Firewall > Signatures, highlight the target user-defined signature, and click Edit. Click Manage SQL/XSS Patterns to display the Manage SQL/XSS paths table. - Select the target XSS row. a) Click Manage Elements, to Add, Edit or Remove the corresponding XSS element. b) Click Remove to remove the selected row. Warning Be very careful when you remove or modify any default XSS element, or delete the XSS path to remove the entire row. The signatures, HTML Cross-Site Scripting security check, and XML Cross-Site Scripting security check rely on these Elements for detecting attacks to protect your applications. Customizing the XSS Elements can make your application vulnerable to Cross-Site Scripting attacks if the required pattern is removed during editing. Using the log feature with the XML cross-site scripting checkUsing the log feature with the XML cross-site scripting check When the log action is enabled, the XML Cross-Site Scripting security check violations are logged in the audit log as APPFW_XML_XSS** Example of a XML Cross-Site Scripting security check violation log message in Native log format showing <blocked> action Oct 7 01:44:34 <local0.warn> 10.217.31.98 10/07/2015:01:44:34 GMT ns 0-PPE-1 : default APPFW APPFW_XML_XSS 1154 0 : 10.217.253.69 3466-PPE1 - owa_profile Cross-site script check failed for field script="Bad tag: script" <**blocked**> Example of a XML Cross-Site Scripting security check violation log message in CEF log format showing <not blocked> action Oct 7 01:46:52 <local0.warn> 10.217.31.98 CEF:0|Citrix|Citrix ADC|NS11.0|APPFW|APPFW_XML_XSS|4|src=10.217.30.17 geolocation=Unknown spt=33141 method=GET request= msg=Cross-site script check failed for field script="Bad tag: script" cn1=1607 cn2=3538 cs1=owa_profile cs2=PPE0 cs4=ERROR cs5=2015 act=**not blocked** To access the log messages by using the GUI The Citrix GUI includes a useful tool (Syslog Viewer) for analyzing the log messages. You have multiple options for accessing the Syslog Viewer: Navigate to the Web App Firewall > Profiles, select the target profile, and click Security Checks. Highlight the XML Cross-Site Scripting row and click Logs. When you access the logs directly from the XML Cross-Site Scripting check, filter by selecting APPFW in the dropdown options for Module. The Event Type list offers a rich set of options to further refine your selection. For example, if you select the APPFW_XML_XSS check box and click the Apply button, only log messages pertaining to the XML. Statistics for the XML cross-site scripting violationsStatistics for the XML cross-site scripting violations When the stats action is enabled, the counter for the XML Cross-Site Scripting check statistics by using the command line At the command prompt, type: > **sh appfw stats** To display stats for a specific profile, use the following command: > **stat appfw profile** <profile name> To display XML Cross-Site Scripting statistics by using the GUI - Navigate to System > Security > Web App Firewall. - In the right pane, access the Statistics Link. - Use the scroll bar to view the statistics about XML Cross-Site Scripting violations and logs. The statistics table provides real-time data and is updated every 7 seconds.
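The relaxation example above uses the regular expression ^name_[0-9A-Za-z]{2,15}$ to exempt matching element names. A quick, appliance-independent way to sanity-check which names such a pattern covers is shown below; the sample names are made up and this is not an ADC tool.

```python
# Quick, appliance-independent sanity check of the relaxation regex shown
# above (^name_[0-9A-Za-z]{2,15}$): which XML element names would it exempt
# from the XML Cross-Site Scripting check? The sample names are made up.
import re

relaxation = re.compile(r"^name_[0-9A-Za-z]{2,15}$")

candidates = [
    "name_ab",                               # matches: 2 alphanumeric characters after name_
    "name_Order42",                          # matches
    "name_a",                                # too short after the prefix
    "name_this_is_far_too_long_to_match",    # too long and contains underscores
    "other_field",                           # wrong prefix
]

for name in candidates:
    status = "exempted" if relaxation.match(name) else "still inspected"
    print(f"{name}: {status}")
```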
https://docs.citrix.com/en-us/citrix-adc/13/application-firewall/xml-protections/xml-cross-site-scripting-check.html
2019-12-05T17:22:44
CC-MAIN-2019-51
1575540481281.1
[]
docs.citrix.com
Scripting Reference The scripting reference is intended for developers who want to extend NeoFPS with their own custom code. The documentation is generated from markup in the source code itself. If you feel the documentation could be extended for a certain class, then please make your suggestions via email. Please use the table of contents to the left to select a topic. Typing in the filter box will filter the entries to match.
https://docs.neofps.com/api/index.html
2019-12-05T17:19:42
CC-MAIN-2019-51
1575540481281.1
[]
docs.neofps.com
Admissions Plus Pro User Guides When it comes to running an admissions office efficiently, one decision can make all the difference: the right software. Admissions Plus Pro with Online Forms and Applications makes it easy to track and communicate with each applicant step by step through the entire admissions process, from initial inquiry through acceptance and enrollment.
https://docs.rediker.com/guides/admissions-guides.htm
2019-12-05T16:43:50
CC-MAIN-2019-51
1575540481281.1
[]
docs.rediker.com
A cache group is a collection of Varnish Cache servers that have identical configuration. Attributes on a cache group include: A VCL file is a configuration file which describes the control flow when Varnish Cache handles an HTTP request. The VAC distributes and stores VCL files for you. A parameter set is a list of Varnish Cache parameters. A parameter set can be applied to one or more cache groups simultaneously, as long as the cache groups all consist of cache servers of the same Varnish Cache version. The VAC ships with a JSON-based RESTful API. All actions performed via the user interface can be replicated with direct access to the API. This includes fetching all real-time graph data. The default user is “vac” and the default password is “vac”. Please change the login password the first time you log in. You can access the VAC on . The VAC has three types of roles, both for the UI and the API: Admin User Read-only
https://docs.varnish-software.com/varnish-administration-console/installation/concepts/
2019-12-05T18:36:35
CC-MAIN-2019-51
1575540481281.1
[]
docs.varnish-software.com
Getting Started with Session Manager Before you use Session Manager to connect to the Amazon EC2 instances in your account, complete the steps in the following topics. Topics - Step 1: Complete Session Manager Prerequisites - Step 2: Verify or Create an IAM Instance Profile with Session Manager Permissions - Step 3: Control User Session Access to Instances - Step 4: Configure Session Preferences - Step 5: (Optional) Use PrivateLink to Set Up a VPC Endpoint for Session Manager - Step 6: (Optional) Disable or Enable ssm-user Account Administrative Permissions
https://docs.aws.amazon.com/systems-manager/latest/userguide/session-manager-getting-started.html
2019-01-16T04:12:18
CC-MAIN-2019-04
1547583656665.34
[]
docs.aws.amazon.com
Create a facility request from the floor plan All users in your organization can create any facility requests that your facilities admin [facilities_admin] has enabled on the floor plan view. Before you begin Role required: none Procedure Perform one of the following options. Choice Action To search for a space location Find a space on the floor plan. If you know the space location Click the space on the floor plan. On the Spaces tab, under the room information details and Related Links section, click Create Facilities Request. Note: You can also right-click the space link and select Create Facilities Request. Table 1. Facilities request form Field Description Location The specific location from the floor plan. Short Description Enter a short description summarizing the facilities request. You can overwrite the default description. Detailed Description Enter a detailed description of the facilities request. Requested by The user name of the person making the request displays. Additional comments Add additional comments if necessary. Click Submit and the Floor Plan form displays.
https://docs.servicenow.com/bundle/helsinki-service-management-for-the-enterprise/page/product/facilities-interactive-facility-maps/task/t_CreateFacReqWorkbench.html
2019-01-16T04:25:34
CC-MAIN-2019-04
1547583656665.34
[]
docs.servicenow.com
How to create a step chart? Step charts show changes that happen occasionally. They connect different datapoints with straight lines such that they form rectangular step sections. Ideata Analytics provides the capability to create a step chart on the analysis screen. Once you follow the steps to create a step chart, the chart is rendered in the chart area, where it can be saved or exported.
https://docs.ideata-analytics.com/create-visualizations/step-chart.html
2019-01-16T04:13:59
CC-MAIN-2019-04
1547583656665.34
[array(['../assets/step.png', None], dtype=object)]
docs.ideata-analytics.com
Storage Policies vCloud Director provider storage policies are imported into OnApp and appear in the OnApp UI as data store zones, whereas storage policies are imported as data stores. These data stores are assigned to the data store zones of the VPC type, which are the provider storage policies with which the storage policies are associated. Storage policies are not only imported but can also be created in OnApp. Storage policies are created in OnApp in the following cases: - During orchestration model deployment. The newly created storage policies will be associated with the provider storage policies set in data store options. - During resource pool creation and modification. The newly created storage policy will be associated with the provider storage policy set on the resource pool page or creation form. You can select storage policies during vApp creation. You can create, edit and delete storage policies when managing resource pools. For more information, refer to Resource Pools. View Storage Policies To view storage policies: - Go to your Control Panel > Settings > Data Stores menu. - On the screen that appears, you will see the list of all data stores within a cloud and their details: - label - the name of the storage policy - identifier - the identifier of the storage policy - data store zone - the data store zone to which the storage policy is assigned. The label of the data store zone consists of the following parts: "Storage Policy Name (pVDC Name) - Compute Resource Name". - disk usage - number of GB used by the VS disks assigned to this storage policy - disk capacity - the disk capacity of this storage policy. If disk capacity for this storage policy is unlimited, the value will be '99999 GB'. - Click the label of the storage policy you are interested in to view the disks and VSs associated with this storage policy. You can also view the list of storage policies assigned to a certain resource pool at Control Panel > Resource Pools > Label. On this page you can add new storage policies or edit/delete the existing ones. Manage Data Store Zones To manage a data store zone: - Go to your Control Panel > Settings > Data Store Zones menu. - On the screen that appears, you will see the list of vCloud data store zones. - To edit a data store zone, click the Actions button next to its label and then click Edit. - To delete a data store zone, click the Actions button and then click Delete. Confirm the deletion.
https://docs.onapp.com/vcd/latest/administration-guide/storage-policies
2019-01-16T04:25:50
CC-MAIN-2019-04
1547583656665.34
[]
docs.onapp.com
Authentication, Authorization Authentication¶ During a chef-client Run¶ RSA public key-pairs are used to authenticate the chef-client with the Chef server every time a chef-client needs access to data that is stored on the Chef server. This prevents any node from accessing data that it shouldn’t, and it ensures that only nodes that are properly registered with the Chef server can be managed. Knife¶ RSA public key-pairs are used to authenticate knife with the Chef server every time knife attempts to access the Chef server. This ensures that each instance of knife is properly registered with the Chef server and that only trusted users can make changes to the data. knife can also use the knife exec subcommand to make specific, authenticated requests to the Chef server. knife plugins can also make authenticated requests to the Chef server by leveraging the knife exec subcommand. From the Web Interface¶ The Chef server user interface uses the Chef server API to perform most operations. This ensures that authentication requests to the Chef server are authorized. This authentication process is handled automatically and is not something that users of the hosted Chef server will need to manage. For the on-premises Chef server, the authentication keys used by the web interface will need to be maintained by the individual administrators who are responsible for managing the server. Other Options¶ The most common ways to interact with the Chef server using the Chef server API abstract the API from the user. That said, the Chef server API can be interacted with directly. The following sections describe a few of the ways that are available for doing that. cURL¶ An API request can be made using cURL from a Bash shell script that requires two utilities: awk and openssl. PyChef¶ An API request can be made using PyChef, which is a Python library that meets the Mixlib::Authentication requirements so that it can easily interact with the Chef server without using the chef-client or knife. For more about PyChef, see:. Ruby¶ On a system with the chef-client installed, use Ruby to make an authenticated request to the Chef server:
require 'rubygems'
require 'chef/config'
require 'chef/log'
require 'chef/rest'
chef_server_url = ''
client_name = 'clientname'
signing_key_filename = '/path/to/pem/for/clientname'
rest = Chef::REST.new(chef_server_url, client_name, signing_key_filename)
puts rest.get_rest('/clients')
A common way to use the Chef server API is to get objects from the Chef server, and then interact with the returned data using Ruby methods. Debug Authentication Issues¶ In some cases, it can be unclear which node name the chef-client is attempting to authenticate as. This is often found in the log messages for that chef-client. Debug logging can be enabled on a chef-client using the following command: $ chef-client -l debug When debug logging is enabled, a log entry will look like the following: [Wed, 05 Oct 2011 22:05:35 +0000] DEBUG: Signing the request as NODE_NAME If the authentication request occurs during the initial chef-client run, the issue is most likely with the private key. If the authentication error is happening on a node that was previously registered, there are a number of common causes: - The client.pem file is incorrect. This can be fixed by deleting the client.pem file and re-running the chef-client.
When the chef-client re-runs, it will re-attempt to register with the Chef server and generate the correct key. - The node_name is different from the one used during the initial chef-client run. This can be fixed by specifying the correct node name when invoking the chef-client executable. - The system clock has drifted from the actual time by more than 15 minutes. This can be fixed by syncing the clock with a Network Time Protocol (NTP) server. Authorization¶ The Chef server uses a role-based access control (RBAC) model to ensure that users may only perform authorized actions. Chef server API¶ The Chef server API is a REST API that provides access to objects on the Chef server, including nodes, environments, roles, cookbooks (and cookbook versions), and to manage an API client list and the associated RSA public key-pairs. Authentication Headers¶ Authentication to the Chef server occurs when a specific set of HTTP headers are signed using a private key that is associated with the machine from which the request is made. The request is authorized if the Chef server can verify the signature using the public key. Only authorized actions are allowed. Note: Most authentication requests made to the Chef server are abstracted from the user, such as when using knife or the Chef server user interface. In some cases, such as when using the knife exec subcommand, the authentication requests need to be made more explicitly, but still in a way that does not require authentication headers. In a few cases, such as when using arbitrary Ruby code or cURL, it may be necessary to include the full authentication header as part of the request to the Chef server. Header Format¶ All hashing is done using SHA-1 and encoded in Base64. Base64 encoding should have line breaks every 60 characters. Each canonical header should be encoded in the following format:
Method:HTTP_METHOD
Hashed Path:HASHED_PATH
X-Ops-Content-Hash:HASHED_BODY
X-Ops-Timestamp:TIME
X-Ops-UserId:USERID
where: - HTTP_METHOD is the method used in the API request (GET, POST, and so on) - HASHED_PATH is the path of the request: /organizations/NAME/name_of_endpoint. The HASHED_PATH must be hashed using SHA-1 and encoded using Base64, must not have repeated forward slashes (/), must not end in a forward slash (unless the path is /), and must not include a query string. The canonical header block is then signed with the client's private key; the private key must be an RSA key in the SSL .pem file format. This signature is then broken into character strings (of not more than 60 characters per line) and placed in the header. The Chef server decrypts this header and ensures its content matches the content of the non-encrypted headers that were in the request. The timestamp of the message is checked to ensure the request was received within a reasonable amount of time. One approach to generating the signed headers is to use mixlib-authentication, which is a class-based header signing authentication object similar to the one used by the chef-client.
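As a minimal sketch of the canonical block described above (signing protocol version 1.0), the following Python builds the five canonical lines; the final step of signing the block with the client's RSA private key and splitting the result into 60-character X-Ops-Authorization-N headers is left to a library such as mixlib-authentication or PyChef.
import base64
import hashlib

def sha1_b64(data):
    # SHA-1 digest, Base64-encoded, as required for Hashed Path and X-Ops-Content-Hash.
    return base64.b64encode(hashlib.sha1(data.encode("utf-8")).digest()).decode("ascii")

def canonical_request(http_method, path, body, timestamp, user_id):
    # Mirrors the canonical header format shown above.
    return "\n".join([
        "Method:%s" % http_method.upper(),
        "Hashed Path:%s" % sha1_b64(path),
        "X-Ops-Content-Hash:%s" % sha1_b64(body),
        "X-Ops-Timestamp:%s" % timestamp,
        "X-Ops-UserId:%s" % user_id,
    ])

print(canonical_request("GET", "/organizations/NAME/nodes", "", "2014-12-12T17:13:28Z", "user_id"))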
Example¶ The following example shows an authentication request:
GET /organizations/NAME/nodes HTTP/1.1
Accept: application/json
Accept-Encoding: gzip;q=1.0,deflate;q=0.6,identity;q=0.3
X-Ops-Sign: algorithm=sha1;version=1.0;
X-Ops-Userid: user_id
X-Ops-Timestamp: 2014-12-12T17:13:28Z
X-Ops-Content-Hash: 2jmj7l5rfasfgSw0ygaVb/vlWAghYkK/YBwk=
X-Ops-Authorization-1: BE3NnBritishaf3ifuwLSPCCYasdfXaRN5oZb4c6hbW0aefI
X-Ops-Authorization-2: sL4j1qtEZzi/2WeF67UuytdsdfgbOc5CjgECQwqrym9gCUON
X-Ops-Authorization-3: yf0p7PrLRCNasdfaHhQ2LWSea+kTcu0dkasdfvaTghfCDC57
X-Ops-Authorization-4: 155i+ZlthfasfasdffukusbIUGBKUYFjhbvcds3k0i0gqs+V
X-Ops-Authorization-5: /sLcR7JjQky7sdafIHNfsBQrISktNPower1236hbFIayFBx3
X-Ops-Authorization-6: nodilAGMb166@haC/fttwlWQ2N1LasdqqGomRedtyhSqXA==
Host: api.opscode.com:443
X-Ops-Server-API-Info: 1
X-Chef-Version: 12.0.2
User-Agent: Chef Knife/12.0.2 (ruby-2.1.1-p320; ohai-8.0.0; x86_64-darwin12.0.2; +)
Endpoints¶ Each organization-specific authentication request must include /organizations/NAME as part of the name for the endpoint. For example, the full endpoint for getting a list of roles: GET /organizations/NAME/roles where NAME is the name of the organization. For more information about the Chef server API endpoints, see Chef Server API.
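For comparison, here is a hedged PyChef sketch of the same kind of request; the server URL, key path, and client name are placeholders, and the exact PyChef interface may differ between versions.
from chef import ChefAPI, Node

# Placeholder URL, key path, and client name; PyChef signs the authentication headers for you.
with ChefAPI('https://chef.example.com/organizations/NAME', '/path/to/client.pem', 'clientname'):
    for name in Node.list():
        print(name)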
https://docs-archive.chef.io/release/server_12-7/auth.html
2019-01-16T04:07:17
CC-MAIN-2019-04
1547583656665.34
[array(['_images/server_rbac_orgs_groups_and_users.png', '_images/server_rbac_orgs_groups_and_users.png'], dtype=object)]
docs-archive.chef.io
Searching for Dates and Times in Amazon CloudSearch You can use structured queries to search any search enabled date field for a particular date and time or a date-time range. Amazon CloudSearch supports two date field types, date and date-array. For more information, see configure indexing options. Dates and times are specified in UTC (Coordinated Universal Time) according to IETF RFC3339: yyyy-mm-ddTHH:mm:ss.SSSZ. In UTC, for example, 5:00 PM August 23, 1970 is: 1970-08-23T17:00:00Z. Note that you can also specify fractional seconds when specifying times in UTC. For example, 1967-01-31T23:20:50.650Z. To search for a date (or time) in a date field, you must enclose the date string in single quotes. For example, both of the following queries search the movie data for all movies released on December 25, 2001: release_date: '2001-12-25T00:00:00Z' (term field=release_date '2001-12-25T00:00:00Z')
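A hedged boto3 sketch of the same structured date query is shown below; the search endpoint URL is a placeholder for your domain's actual search endpoint.
import boto3

# The endpoint URL is a placeholder; use the search endpoint of your own CloudSearch domain.
client = boto3.client(
    "cloudsearchdomain",
    endpoint_url="https://search-movies-xxxxxxxxxx.us-east-1.cloudsearch.amazonaws.com",
)
resp = client.search(
    query="release_date:'2001-12-25T00:00:00Z'",
    queryParser="structured",
)
print(resp["hits"]["found"])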
https://docs.aws.amazon.com/cloudsearch/latest/developerguide/searching-dates.html
2019-01-16T04:46:07
CC-MAIN-2019-04
1547583656665.34
[]
docs.aws.amazon.com
Isolating endpoints Tanium Quarantine 3.1.0 With Tanium™ Quarantine you can isolate a Windows, Linux, or Mac endpoint that shows evidence of compromise or other suspicious activity. Use Quarantine to apply, remove, and test for quarantine. When an endpoint is quarantined, only approved traffic is allowed on the quarantined endpoint. By default, this traffic is allowed only: - Between the Tanium Client on the quarantined endpoint and Tanium Server over port 17472. - For essential traffic that is necessary to obtain and resolve IP addresses (DHCP/DNS). Quarantine includes a safety feature that automatically reverses a quarantine policy that was applied by the tool. After a quarantine policy is applied, the effect of the policy is logged. If the endpoint is able to communicate with Tanium Server, Quarantine logs the successful application of the policy. If a policy prevents the endpoint from communicating with Tanium Server, Quarantine backs out the policy and saves logs in the action folder. Before you begin Test the quarantine policy in a lab environment before deploying the policy. Do not apply a policy until its behavior is known and predictable. Incorrectly configured policies can block access to the Tanium Server. - Install the Tanium Quarantine solution. For more information, see Install Quarantine. - You must have a Content Administrator account for Tanium Console. For more information, see Tanium Core Platform User Guide: Managing Roles. Identify the traffic that is required when an endpoint is under quarantine. - You must have a lab machine on your target platform (Windows, Linux, or Mac) on which you can test the quarantine policies. You must be able to physically access the machine or to access it using RDP (Windows) or SSH (Linux, Mac). - You must have access to the endpoint that you want to quarantine through a sensor or saved question in the Tanium Console. Endpoint operating system requirements Supported Windows versions - Windows XP - Windows 7 - Windows 8.1 - Windows 10 - Windows Server 2003 - Windows Server 2008 - Windows Server 2012 Supported Linux OS versions - RedHat/CentOS 5 IPTables on SYSV - RedHat/CentOS 6 IPTables on SYSV - RedHat/CentOS 7 Firewalld on Systemd - Ubuntu 12,14 UFW on Upstart - Ubuntu 15 UFW on Upstart/Systemd Supported Mac OS versions - OSX 10.9 - Mavericks - OSX 10.10 - Yosemite - OSX 10.11 - El Capitan - OSX 10.12 - Sierra OSX 10.8 - Mountain Lion and earlier releases are based on ipfirewall (IPFW) and are not supported. Configure Windows endpoints The Apply Windows IPsec Quarantine package uses Windows IPsec policies to quarantine the endpoint. You can also add custom rules and options, see Create custom quarantine rules for more information. You cannot use Windows IPsec Quarantine on networks where a domain IPsec policy is already enforced. Check that the IPsec Policy Agent service is running on the endpoints Optionally, you can verify that the IPsec Policy Agent is listed as a running service in Windows. - In Tanium™ Interact, ask the question: Get Service Details containing "PolicyAgent" from all machines with Service Details containing "PolicyAgent" - In the table that gets returned, check the results in the following columns. - Service Status: Running or Stopped - Service Startup Mode: Manual or Automatic - If necessary, drill down into the results to determine which endpoints do not have the IPsec Policy agent running. 
(Windows XP only) Deploy quarantine tools The Quarantine Tools Pack includes a Microsoft policy that IPsec Quarantine uses to quarantine endpoints that are running Microsoft Windows XP. The application of IPsec policy is native to versions of Microsoft Windows later than Microsoft Windows XP and they do not require the tool pack. To find endpoints that require the quarantine tools pack: - From the Tanium Console, open the Quarantine dashboard. - Click Needs Quarantine Tools Pack (XP only), and select the Windows XP-based endpoints that require the tool pack. - Select Deploy Action. The package wizard opens. - Select Distribute Quarantine Tools. The tool pack is deployed to the selected endpoints. Configure Linux endpoints The Apply Linux IPTables Quarantine package quarantines endpoints that are running Linux-based operating systems that support the use of the iptables module. Verify that endpoints are not using Network Manager Linux IPTables Quarantine checks to ensure that the iptables module is installed and disables the use of the Network Manager module on endpoints that are targeted for quarantine. You can check for Linux-based endpoints that are running Network Manager by using the Linux Network Manager sensor to determine if Network Manager is enabled. In Interact, type network manager to find the sensor. This sensor has no parameters. Configure Mac endpoints The Apply Mac PF Quarantine package quarantines endpoints that are running Mac OS X operating systems that support the use of Packet Filter (PF) rules. This package creates packet filter rules that isolate endpoints by eliminating communication with network resources. Packet Filter (PF) software must be installed on endpoints that are targeted for quarantine. Test quarantine on lab endpoints By default, the quarantine on the lab endpoint blocks all communication except the Tanium Server. You can configure custom rules to define allowed traffic direction, allowed IP addresses, ports, and protocols. For more information about how to create and deploy custom rules, see Create custom quarantine rules . Do not quarantine without testing the rules configuration in the lab. - Target computers for quarantine. - In Tanium Console, use the Is Windows,Is Linux, or Is Mac sensor to locate an endpoint to quarantine. Select the entry for True, and click Drill Down. - On the saved questions page, select Computer Name and click Load. A Computer Names list displays the names of all computers that are running the selected OS. Select the lab endpoint as a target and click Deploy Action. - In the Deployment Package field, type the name of the quarantine package that you want to deploy: - Apply Windows IPsec Quarantine - Apply Linux IPTables Quarantine - Apply Mac PF Quarantine - (Optional) Define quarantine rules and options. For more information about quarantine rules, see Create custom quarantine rules . - If you already attached a taniumquarantine.dat file to the package you are deploying, you do not need to make any other configurations. - Otherwise, select Override Config to apply custom rules to the action. - If you are using the options and rules in this package deployment, select any options that you want to enable and enter your custom quarantine rules into the Custom Quarantine Rules field. - Click Show Preview to Continue to preview the targeting criteria for the action. Click Deploy Action. - Verify quarantine of the targeted lab endpoint. 
Confirm that the computer has no available means of communication to resources other than Tanium Server and any endpoints that you configured in custom quarantine rules. You can use RDP (Windows) SSH (Linux/Mac), the Ping network utility, or a similar means to confirm that communication is blocked. By default, the only traffic that the quarantine allows is between Tanium Client on the quarantined computer and Tanium Server over port 17472. If the computer is a server that must allow connections to name servers, verify that those connections are allowed to pass through. - Verify the visibility of the quarantined computer to Tanium Server. Action folders are located under the Tanium Client installation folder on the endpoint, usually <Tanium Client>\Downloads\Action_XXXX.log. Remove quarantine Deploy the Remove Windows IPsec Quarantine, Remove Mac PF Quarantine, or Remove Linux IPTables Quarantine package to the endpoint to remove the quarantine from the computer. Use RDP (Windows), SSH (Mac/Linux), the Ping utility, or another method to confirm the removal of the quarantine and the normal communication of the test computer. Create custom quarantine rules Quarantine rules and options define allowed traffic direction, allowed IP addresses, ports, and protocols. All other traffic is blocked. These rules are in the same format for Windows, Linux and Mac. For custom quarantine rule syntax, see Reference: Custom rules and options. If you do not define any quarantine rules, the default values are used, which gives the quarantined endpoint access only to the Tanium Server and permits DNS/DHCP traffic. If you previously provided a Windows IPsec policy file in earlier versions of Quarantine, the IPsec policy overrides the custom quarantine rules. Test the quarantine policy in a lab environment before deploying the policy. Do not apply a policy until its behavior is known and predictable. Incorrectly configured policies can block access to the Tanium Server. Options for deploying custom quarantine rules and options You can define quarantine rules and options by either attaching a configuration file to the package, or by selecting options in the Tanium Console when you deploy a quarantine action. Attach configuration file to package You can attach a taniumquarantine.dat configuration file that defines quarantine rules and options to either a new package or the existing Quarantine packages. Then push that package out to the endpoints. For an example taniumquarantine.dat file, see Reference: Custom rules examples. - From the Main menu, go to Content > Packages. - You can either create a new package, or edit one of the existing Quarantine packages: - Update the taniumquarantine.dat file. - To download the current file, click Download . - Remove the file that is currently in the package . - Click Add to upload the updated taniumquarantine.dat file. - Click Save to save the updates to the package. Select options in user interface when you deploy Quarantine actions When you deploy the Apply Windows IPsec Quarantine, Apply Mac PF Quarantine, or Apply Linux IPTables Quarantine actions, you can define the quarantine rules and options as a part of that action. For more information, see Test quarantine on lab endpoints. Reference: Custom rules and options Custom rules format The format for custom rules is not case sensitive. You can put each rule on a new line. Trailing white spaces are not supported. This format is used for both the configuration file and in the user interface. 
Direction:Protocol:IPAddress:CIDR:Port #Comment Direction Valid values: IN or OUT Specifies whether incoming or outgoing traffic is allowed. Protocol Valid values: ICMP, TCP, UDP If you specify ICMP, the ICMP protocol is allowed to communicate to and from the specified addresses. This limitation is because IPSec does not filter ICMP Type/Codes. The filtering is done by ADVFirewall. IPAddress Specifies any IPv4 address or you can use ANY for all. CIDR Valid values: 0-32 or undefined Subnet masks in dotted decimal format are not permitted in the input file. Undefined (blank) is same as 32 and uses the IP Address only. Port Valid values: 0-65535 or undefined Leave undefined (blank) to permit all ports. Ranges are not currently supported, only individual ports or all ports can be defined. When using the Custom Quarantine Rules parameter in the package, the total characters should be 1100 or less. If you need more characters, you can use a custom DAT file. Quarantine options You can configure quarantine options in a configuration file or in the deploy action user interface when you quarantine an endpoint. Configuration file format OPTION:OptionName:OptionValue Options Reference: Custom rules examples Example for Custom Quarantine Rules field IN:UDP:10.0.0.21:32:161 OUT:UDP:10.0.0.21:32:162 This example defines two rules: - Allow SNMP queries (UDP Port 161) from another device at 10.0.0.21. - Allow SNMP traps (UDP Port 162) to be sent to a device at 10.0.0.21. This example demonstrates the use of parameter options in the package and not a taniumquarantine.dat file. taniumquarantine.dat sample file For DAT files, each entry must be on one line; you cannot use pipe (|) characters to combine lines. Trailing white spaces are not supported. #Allow ICMP out to a specific IP Address OUT:ICMP:192.168.10.15::0 #Allow ICMP in from a specific IP Address IN:ICMP:192.168.20.10:32:0 #Allow TCP port 80 in from a class C subnet IN:TCP:192.168.1.0:24:80 #Allow UDP port 161 in from a specific IP Address IN:UDP:10.0.0.21:16:161 #Allow HTTPS (tcp 443) out to a specific class B subnet OUT:TCP:192.168.0.0:16:443 OPTION:ALLDNS:TRUE OPTION:CURRENTDNS:FALSE OPTION:ALLDHCP:TRUE OPTION:TANIUMSERVERS:TRUE OPTION:CHECKTS:TRUE OPTION:NOTIFY:This Device has been Quarantined Last updated: 1/11/2019 11:45 AM | Feedback
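The rule format above lends itself to simple validation before a DAT file is deployed. The following Python sketch is not a Tanium tool; it is a minimal parser for the Direction:Protocol:IPAddress:CIDR:Port format, using the blank-CIDR and blank-port conventions described above.
import ipaddress
import re

# Matches Direction:Protocol:IPAddress:CIDR:Port with an optional trailing #Comment.
RULE_RE = re.compile(r"^(IN|OUT):(ICMP|TCP|UDP):([^:]+):(\d{0,2}):(\d{0,5})\s*(#.*)?$", re.IGNORECASE)

def parse_rule(line):
    # Returns the parts of one custom quarantine rule, raising ValueError on malformed input.
    m = RULE_RE.match(line.strip())
    if not m:
        raise ValueError("not a valid quarantine rule: %r" % line)
    direction, proto, addr, cidr, port = m.group(1, 2, 3, 4, 5)
    if addr.upper() != "ANY":
        ipaddress.IPv4Address(addr)          # raises ValueError on a bad IPv4 address
    cidr_val = int(cidr) if cidr else 32     # blank CIDR behaves like /32
    port_val = int(port) if port else None   # blank port means all ports
    if not 0 <= cidr_val <= 32:
        raise ValueError("CIDR must be 0-32")
    if port_val is not None and not 0 <= port_val <= 65535:
        raise ValueError("port must be 0-65535")
    return {"direction": direction.upper(), "protocol": proto.upper(),
            "address": addr, "cidr": cidr_val, "port": port_val}

print(parse_rule("IN:UDP:10.0.0.21:32:161 #Allow SNMP queries from 10.0.0.21"))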
https://docs.tanium.com/ir/ir/quarantine.html
2019-01-16T04:28:02
CC-MAIN-2019-04
1547583656665.34
[]
docs.tanium.com
Environments¶ An environment might represent a system operating at a particular time of day, or in a particular physical location. Environments encapsulate visible phenomena such as assets, tasks, personas, and attackers, as well as invisible phenomena, such as goals, vulnerabilities, and threats. Environments may be identified at any time, although these may not become apparent until carrying out contextual inquiry and observing how potential users reason about their context of use. Adding a new environment¶ - Select the UX/Environments menu to open the Environments form, and click on the Add button to open the new Environment form. - Enter the name of the environment, a short code, and a description. The short-code is used to prefix requirement ids associated with an environment. - If this environment is to be a composite environment, i.e. encompass artifacts of other environments, then right click on the environment list, select Add from the speed menu, and select the environment/s to add. - It is possible an artifact may appear in multiple environments within a composite environment. It is, therefore, necessary to set duplication properties for composite environments. If the maximise radio button is selected, then the maximal values associated with that artifact will be adopted. This may be the highest likelihood value for a threat, or the highest security property values for an asset. If the override radio button is selected, then CAIRIS will ensure that the artifact properties are used for the overriding environment.
https://cairis.readthedocs.io/en/latest/environments.html
2019-01-16T04:22:26
CC-MAIN-2019-04
1547583656665.34
[array(['_images/EnvironmentForm.jpg', 'Environment form'], dtype=object)]
cairis.readthedocs.io
Starting CAIRIS¶ Starting the CAIRIS server¶ If you are using Docker then the command used to install the container also starts the CAIRIS server on port 80. If you are the only person that plans to use CAIRIS, using the Flask development server to run cairisd should be sufficient; you can find cairisd in the cairis/cairis/bin directory.
./cairisd.py runserver
If you plan to use mod_wsgi-express then you need to use cairis.wsgi (also in cairis/cairis/bin):
mod_wsgi-express start-server cairis.wsgi
Starting the web application¶ You can use CAIRIS on any modern web browser except Microsoft Internet Explorer (although you can use Microsoft Edge). In your browser, visit the site hosting the CAIRIS server, and authenticate using credentials you have or set up when you ran the quick_setup.py script. If you are not using the live demo, or have not mapped mod_wsgi-express to port 80, you will need to also specify the port the CAIRIS server is listening on. If you don’t specify otherwise, cairisd will listen on port 7071, and mod_wsgi-express will listen on port 8000. For example, if you are using cairisd on germaneriposte.org then you should connect to . Once you log in, you should see the home page, which provides a summary of threats, vulnerabilities and risks, and the threat model for different environments. Once you have finished working with CAIRIS, click on the Logout button.
https://cairis.readthedocs.io/en/latest/starting.html
2019-01-16T04:32:34
CC-MAIN-2019-04
1547583656665.34
[array(['_images/login.jpg', 'Login form'], dtype=object) array(['_images/landingPage.jpg', 'CAIRIS home page'], dtype=object)]
cairis.readthedocs.io
4697(S): A service was installed in the system. Applies to - Windows 10 - Windows Server 2016 Subcategory: Audit Security System Extension Event Description: This event generates when a new service was installed in the system. Note For recommendations, see Security Monitoring Recommendations for this event. Event XML: - <Event xmlns=""> - <System> <Provider Name="Microsoft-Windows-Security-Auditing" Guid="{54849625-5478-4994-A5BA-3E3B0328C30D}" /> <EventID>4697</EventID> <Version>0</Version> <Level>0</Level> <Task>12289</Task> <Opcode>0</Opcode> <Keywords>0x8020000000000000</Keywords> <TimeCreated SystemTime="2015-11-12T01:36:11.991070500Z" /> <EventRecordID>2778</EventRecordID> <Correlation ActivityID="{913FBE70-1CE6-0000-67BF-3F91E61CD101}" /> <Execution ProcessID="736" ThreadID="2800" /> <Channel>Security</Channel> <Computer>WIN-GG82ULGC9GO.contoso.local</Computer> <Security /> </System> - <EventData> <Data Name="SubjectUserSid">S-1-5-18</Data> <Data Name="SubjectUserName">WIN-GG82ULGC9GO$</Data> <Data Name="SubjectDomainName">CONTOSO</Data> <Data Name="SubjectLogonId">0x3e7</Data> <Data Name="ServiceName">AppHostSvc</Data> <Data Name="ServiceFileName">%windir%\\system32\\svchost.exe -k apphost</Data> <Data Name="ServiceType">0x20</Data> <Data Name="ServiceStartType">2</Data> <Data Name="ServiceAccount">localSystem</Data> </EventData> </Event> Required Server Roles: None. Minimum OS Version: Windows Server 2016, Windows 10. Event Versions: 0. Field Descriptions: Subject: - Security ID [Type = SID]: SID of the account that was used to install the service. Service Information: - Service Name [Type = UnicodeString]: the name of the installed service. Service File Name [Type = UnicodeString]: This is the fully rooted path to the file that the Service Control Manager will execute to start the service. If command-line parameters are specified as part of the image path, those are logged. Note that this is the path to the file when the service is created. If the path is changed afterwards, the change is not logged. This would have to be tracked via Process Create events. Service Type [Type = HexInt32]: Indicates the type of service that was registered with the Service Control Manager. It can be one of the following: - Service Start Type [Type = HexInt32]: The service start type can have one of the following values (see:): Most services installed are configured to Auto Load, so that they start automatically after the Services.exe process is started. Service Account [Type = UnicodeString]: The security context that the service will run as when started. Note that this is what was configured when the service was installed; if the account is changed later, that is not logged. The service account parameter is only populated if the service type is a "Win32 Own Process" or "Win32 Share Process" (displayed as "User Mode Service."). Kernel drivers do not have a service account name logged. If a service (Win32 Own/Share process) is installed but no account is supplied, then LocalSystem is used. The token performing the logon is inspected, and if it has a SID then that SID value is populated in the event (in the System/Security node), if not, then it is blank. Security Monitoring Recommendations For 4697(S): A service was installed in the system. Important For this event, also see Appendix A: Security monitoring recommendations for many audit events.
We recommend monitoring for this event, especially on high value assets or computers, because a new service installation should be planned and expected. Unexpected service installation should trigger an alert. Monitor for all events where “Service File Name” is not located in %windir% or “Program Files/Program Files (x86)” folders. Typically new services are located in these folders. Report all “Service Type” equals “0x1”, “0x2” or “0x8”. These service types start first and have almost unlimited access to the operating system from the beginning of operating system startup. These types are very rarely installed. Report all “Service Start Type” equals “0” or “1”. These service start types are used by drivers, which have unlimited access to the operating system. Report all “Service Start Type” equals “4”. It is not common to install a new service in the Disabled state. Report all “Service Account” not equals “localSystem”, “localService” or “networkService” to identify services which are running under a user account.
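To operationalize these recommendations, the 4697 EventData fields can be pulled out of the event XML shown earlier. The Python sketch below is a hypothetical helper, not a Microsoft tool; it does a naive literal prefix check and does not expand environment variables such as %windir%.
import xml.etree.ElementTree as ET

EXPECTED_PREFIXES = ("%windir%", "c:\\windows", "c:\\program files")

def event_data(event_xml):
    # Returns the EventData Name/value pairs from a 4697 event record.
    root = ET.fromstring(event_xml)
    return {el.get("Name"): (el.text or "")
            for el in root.iter()
            if el.tag.endswith("Data") and el.get("Name")}

def needs_review(event_xml):
    # Flags services whose image path is outside %windir% or Program Files,
    # per the monitoring recommendation above.
    path = event_data(event_xml)["ServiceFileName"].strip('"').lower()
    return not path.startswith(EXPECTED_PREFIXES)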
https://docs.microsoft.com/en-us/windows/security/threat-protection/auditing/event-4697
2019-01-16T03:46:53
CC-MAIN-2019-04
1547583656665.34
[array(['images/event-4697.png', 'Event 4697 illustration'], dtype=object) array(['images/branchcache-properties.png', 'BrancheCache Properties illustration'], dtype=object)]
docs.microsoft.com
Working with watchlists Create a watchlist to define a set of files and/or directories you want to monitor for any changes. Create a new watchlist - Select Watchlists from the Integrity Monitor menu. - On the Watchlists page, click Create a New Watchlist. - In the Summary section, enter a Name and Description for the new watchlist. - Select a Path Style. - Integrity Monitor ships with ready-to-use watchlist templates that contain critical files and directories that are typically monitored for Windows and Linux. In the Watchlist Templates section, select a template to add it to the new watchlist. - Click Create. When you go to Watchlists in the Integrity Monitor menu, you will see the new watchlist listed. On the Watchlists page, you can select watchlists in bulk and use the Filter by name field to filter watchlists. Click the name of a watchlist on the Watchlists page to view the files and directories it includes. The target operating system and paths must be consistent within a watchlist. For example, you cannot add a Windows path to a watchlist targeting a Linux operating system. Edit a watchlist - On the Watchlists page, click on a watchlist. - Click Edit in the top right corner. - In the Edit Watchlist window, you can modify the Name, Description or Path Style for that watchlist. To customize the types of changes monitored on files in a directory listed in a watchlist or to add file exclusions for that directory: - Select the path to modify and click Edit Path. - In the Change Type section of the Edit Path window, click to select or remove the type of change you want to monitor on that path. - Click Update to save your changes. See Permission recording for special procedures to monitor permission event types for Windows recorder. View watchlist details To view the details for a watchlist, on the Watchlists page, click the watchlist you want to view and click Expand next to the path. The details you see for a watchlist depend on the role you are assigned in Integrity Monitor. Add new paths - Click Add Paths at the top of the screen listing the files/directories for a watchlist. - Select New and provide the new path and the types of changes you want to monitor on that path. - Click Add Path. The path will appear in the list of files/directories for that watchlist. - In the Exclusions section of the Add Path window, you can also choose to exclude a specific sub-directory path or file by clicking + Add Exclusion and providing the path and path type. You can use a wildcard (*) when defining file path type exclusions. To add paths by importing them from files you have already configured for another monitoring tool: - Under Add Paths, select Import From File and choose the appropriate file. Tanium currently provides limited support for importing paths from Tripwire configuration files, OSSEC configuration files, Tenable LCE policy files, and Tanium CSV files. An example of a Tanium CSV file is shown below. - Click Import. You can also add paths from templates by selecting Import From Template under Add Paths. 
Example Tanium CSV file used to import paths:
path,ops_create,ops_delete,ops_write,ops_rename,ops_permission,excludes_type,excludes_spec
C:\autoexec.bat,on,on,on,on,,
C:\Windows,on,on,on,on,directory,NtServicePackUninstall
,,,,,,directory,NtUninstall
,,,,,,directory,Help
C:\Windows\assembly,on,on,off,off,on,file,*
C:\autoexec.bat,on,on,on,on,, Will add a path “C:\autoexec.bat” that will turn on all of the supported event types (create, delete, write, rename).
C:\Windows,on,on,on,on,directory,NtServicePackUninstall ,,,,,directory,NtUninstall ,,,,,directory,Help Will add a path “C:\Windows” that will turn on all of the supported event types (create, delete, write, and rename) and adds 3 directory exclusions (NtServicePackUninstall, NtUninstall, and Help).
C:\Windows\assembly,on,on,off,off,file,* Will add a path “C:\Windows\assembly” that will turn on create and delete event types and adds 1 file exclusion (*).
Filter files and directory paths - Use the Filter by name field at the top right of the page listing the files/directories for a watchlist to show directories with only that text in the path name. - You can delete the filtered directories in bulk or change the types of changes being monitored for files in those directories by selecting all. - Delete the text in the Filter by Text field to return to the full list of files/directories. Export and import watchlists You can export a watchlist if, for example, you created the watchlist in your QA/lab environment and you want to move it to your production environment, or for backup purposes. To export a watchlist, open the watchlist and click Export at the top right of the page to export that watchlist. To import the watchlist, click Import at the top right of the Watchlists page and then select the watchlist file in the Import Watchlist window. Click Import to import the file. Last updated: 1/15/2019 5:10 PM | Feedback
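As a convenience, the CSV layout above (with the header row shown) can be read with Python's standard csv module. This is an illustrative sketch, not a Tanium utility; the watchlist.csv filename is a placeholder, and exclusion-only rows (blank path) are simply skipped.
import csv

OPS = ("ops_create", "ops_delete", "ops_write", "ops_rename", "ops_permission")

# Reads a watchlist CSV with the header shown above and prints the enabled event types per path.
with open("watchlist.csv", newline="") as fh:
    for row in csv.DictReader(fh):
        if not (row.get("path") or "").strip():
            continue  # exclusion-only rows carry no path
        enabled = [op for op in OPS if (row.get(op) or "").strip().lower() == "on"]
        print(row["path"], "->", ", ".join(enabled) or "no events enabled")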
https://docs.tanium.com/integrity_monitor/integrity_monitor/watchlist.html
2019-01-16T03:36:03
CC-MAIN-2019-04
1547583656665.34
[array(['images/view_watchlist_details_thumb_100_0.png', None], dtype=object) ]
docs.tanium.com
Text property (Microsoft Forms) Returns or sets the text in a TextBox. Changes the selected row in a ComboBox or ListBox. Syntax object.Text [= String ] The Text property syntax has these parts: Remarks For a TextBox, any value you assign to the Text property is also assigned to the Value property. For a ComboBox, you can use Text to update the value of the control. If the value of Text matches an existing list entry, the value of the ListIndex property (the index of the current row) is set to the row that matches Text. If the value of Text does not match a row, ListIndex is set to -1. For a ListBox, the value of Text must match an existing list entry. Specifying a value that does not match an existing list entry causes an error. You cannot use Text to change the value of an entry in a ComboBox or ListBox; use the Column or List property for this purpose. The ForeColor property determines the color of the text.
https://docs.microsoft.com/en-us/office/vba/Language/Reference/User-Interface-Help/text-property-microsoft-forms
2019-01-16T03:57:12
CC-MAIN-2019-04
1547583656665.34
[]
docs.microsoft.com
Set Default CPU Quota CPU quota is a percentage value limiting maximum VS CPU load on a compute resource. CPU quota functionality allows you to limit CPU usage for a particular virtual server in order to avoid abusive usage that affects all virtual servers on the KVM compute resource. - This option is available for users with the administrator role. Make sure you have enabled the Manage CPU quota permission first. - This feature is available only for KVM compute resources. - Before you enable CPU quota, its value is set to unlimited for all the VSs on this compute resource. You can set the default value of CPU quota on the compute resource level and edit the custom value on the virtual server level. Set CPU Quota for Compute Resource To set the default CPU quota for a KVM compute resource: - Go to your Control Panel > Settings menu and click the Compute Resources icon. - Click the label of the compute resource you are interested in. - On the screen that appears, click Tools > Set default CPU Quota. - Move the CPU Quota enabled slider to the right to enable CPU quota, then set the default value. The maximum value is 99%. Also, you can select the ∞ unlimited checkbox to set an unlimited amount of CPU quota. - Click the Save button. - If the default CPU quota value is changed or CPU quota is enabled, it does not affect running virtual servers until they are restarted. - If the default CPU quota is disabled, it is set to unlimited for all running virtual servers.
https://docs.onapp.com/agm/latest/compute-resource-settings/compute-resources-settings/set-default-cpu-quota
2019-01-16T04:19:36
CC-MAIN-2019-04
1547583656665.34
[]
docs.onapp.com
1.2.1 Lenses on Ordered Data Many Racket data structures hold ordered or sequential values. Lenses for accessing elements of these structures by index are provided. 1.2.1.1 Pairs and Lists The Lens Reference has additional information on pair and list lenses. The two primitive pair lenses are car-lens and cdr-lens: Obviously, these also work with lists, but most of the time, it’s easier to use list-specific lenses. For arbitrary access to elements within a list, use the list-ref-lens lens constructor, which produces a new lens given an index to look up. Abbreviation lenses such as first-lens and second-lens are provided for common use-cases: This is useful, but it only works for flat lists. However, using lens composition, it is possible to create a lens that performs indexed lookups for nested lists using only list-ref-lens: This can also be generalized to n-dimensional lists: This function is actually provided by lens under the name list-ref-nested-lens, but the above example demonstrates that it’s really a derived concept. 1.2.1.1.1 Fetching multiple list values at once Sometimes it can be useful to fetch multiple values from a list with a single lens. This can be done with lens-join/list, which combines multiple lenses whose target is a single value and produces a new lens whose view is all of those values. This can be useful to implement a form of information hiding, in which only a portion of a list is provided to client code, but the result can still be used to update the original list. 1.2.1.2 Vectors and Strings The Vector lenses and String Lenses sections in The Lens Reference have additional information on vector and string lenses, respectively. Lenses for random-access retrieval and functional update on vectors and strings are similar to the lenses provided for lists, but unlike lists, they are truly random-access. The vector-ref-lens and string-ref-lens lens constructors produce random-access lenses, and lens-join/vector and lens-join/string combine multiple lenses with vector or string targets. 1.2.1.3 Streams The Lens Reference has additional information on stream lenses. Racket’s streams contain ordered data, much like lists, but unlike lists, they are lazy. Lenses on streams are similarly lazy, only forcing the stream up to what is necessary. This allows stream lenses to successfully operate on infinite streams. Keep in mind that since lens-transform is strict, using it to update a value within a stream will force the stream up to the position of the element being modified.
https://docs.racket-lang.org/lens/ordered-data-lenses.html
2019-01-16T04:32:15
CC-MAIN-2019-04
1547583656665.34
[]
docs.racket-lang.org
Create the Organization¶ In order to begin managing your infrastructure with Enterprise Chef, you will need to create an organization. An organization is a completely multi-tenant Chef infrastructure that shares nothing with other organizations on your Enterprise Chef server. Add Organization¶ To add an organization: Open the Chef management console. Click Administration. Click Organizations. Click Create. In the Create an Organization dialog box, enter the full and short names for the organization, then click Create Organization. Reset Validation Key¶ To reset a chef-validator key: Open the Chef management console. Click Policy. Click Clients. Select a chef-validator key. Click the Details tab. Click Reset Key. In the Reset Key dialog box, confirm that the key should be regenerated and click the Reset Key button. Copy the private key, or download and save the private key locally.
https://docs-archive.chef.io/release/oec_11-0/install_server_orgs.html
2019-01-16T04:23:27
CC-MAIN-2019-04
1547583656665.34
[]
docs-archive.chef.io
Filters help you and less SQL-savvy users easily customize reports to obtain the specific results that you want to extract from your data, for easier data exploration. Tutorial video is coming soon. 1. Create Filter In Report Editor View, create filters by clicking Add from the Filters panel, then fill in the required inputs for your filters: - Filter type: Date/DateRange/Dropdown/Text Input/List Input - Variable name: to use in your SQL as a variable - Label: for display purposes - Config: configure the value and display of your filter variable. A Filter Template is a set of predefined settings for a filter which can be shared across multiple reports. When you update a Filter Template's settings, all the filters based on that template will be updated accordingly.
https://docs.holistics.io/docs/holistics-filters/
2019-01-16T04:02:36
CC-MAIN-2019-04
1547583656665.34
[]
docs.holistics.io
How to create a scatter chart? A scatter chart plots a single point for each datapoint in a series without connecting them. When the user hovers over the points, tooltips are displayed with more information. Ideata Analytics provides the capability to create a scatter chart on the analysis screen. Once you follow the steps to create a scatter chart, the chart is rendered in the chart area, where it can be saved or exported.
https://docs.ideata-analytics.com/create-visualizations/scatter-chart.html
2019-01-16T04:31:07
CC-MAIN-2019-04
1547583656665.34
[array(['../assets/scatter.png', None], dtype=object)]
docs.ideata-analytics.com
Set the value at the specified position Member of Grid Item (PRIM_GDIT) SetValueAt can be used to update a value in a list cell. This allows a value to be set without needing to use LANSA list commands such as UPD_ENTRY, and without needing to know the fields being used as the source of a column. The entry must exist in the list before it can be updated. If it doesn't, SetValueAt will return False. This example adds an entry to the list and uses SetValueAt to apply a value to a cell. Add_Entry to_list(#List) #List.CurrentItem.SetValueAt( #Row #Value ) All Component Classes Technical Reference February 18 V14SP2
https://docs.lansa.com/14/en/lansa016/prim_gdit_setvalueat.htm
2019-03-18T20:15:17
CC-MAIN-2019-13
1552912201672.12
[]
docs.lansa.com
Recent Contact Ministry¶ This Condition, located on the Ministry category tab in Search Builder, allows you to find people who have received a Contact from a specific Ministry within a specified number of days. You select one or more Ministries from a drop-down list and then specify the number of days to look back. Note The Ministry list comes from the Lookup Codes and can be edited by your church’s TouchPoint Admin. Use Case You want to find everyone who received a Contact from the Children’s Ministry in the past month and who has also visited in the past 7 days. You would combine the Recent Contact Ministry Condition with a Recent Attendance Condition. See also
http://docs.touchpointsoftware.com/SearchBuilder/QB-RecentContactMinistry.html
2019-03-18T19:27:36
CC-MAIN-2019-13
1552912201672.12
[]
docs.touchpointsoftware.com
Model Parameters¶ - class larch.ModelParameter(model, index)¶ A ModelParameter is a reference object, referring to a Model and a parameter index. Unlike a roles.ParameterRef, a ModelParameter is explicitly bound to a specific Model, and edits to attributes of a ModelParameter automatically pass through to the underlying Model. These attributes support both reading and writing: holdfast¶ a flag indicating if the parameter value should be held fast (constrained to keep its value) during estimation These attributes are read-only: - class larch.ParameterManager¶ The ParameterManager class provides the interface to interact with various model parameters. You can call a ParameterManager like a method, to add a new parameter to the model or to access an existing parameter. You can also use it with square brackets, to get and set ModelParameter items. When called as a method, in addition to the required parameter name, you can specify other ModelParameter attributes as keyword arguments. When getting or setting items (with square brackets) you can give the parameter name or integer index. See the Model section for examples.
https://larch.readthedocs.io/en/stable/parameter.html
2019-03-18T20:00:41
CC-MAIN-2019-13
1552912201672.12
[]
larch.readthedocs.io
API¶ Cheatsheet¶ Buckets, Groups, Collection and Record endpoints are resource endpoints which can be filtered, paginated, and interacted with as described in Resource endpoints. Full reference¶ Full detailed API documentation: - API versioning - Authentication - Resource endpoints - Server timestamps - Backoff indicators - Error responses - Deprecation - Buckets - Collections - Records - Groups - Permissions - Synchronisation
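As a rough illustration of how a resource endpoint is consumed, the sketch below lists records from a collection over HTTP. The server URL, bucket, and collection names are placeholders, the /buckets/{bucket}/collections/{collection}/records layout follows the resources listed above, and the _limit pagination parameter is assumed from the Kinto API reference.
import requests

# Placeholder server, bucket, and collection; adjust the auth scheme to whatever your server uses.
base = "https://kinto.example.com/v1"
resp = requests.get(
    base + "/buckets/blog/collections/articles/records",
    params={"_limit": 5},
    auth=("token", "my-secret"),
)
resp.raise_for_status()
print(resp.json()["data"])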
http://docs.kinto-storage.org/en/1.5.1/api/index.html
2019-03-18T19:34:39
CC-MAIN-2019-13
1552912201672.12
[]
docs.kinto-storage.org
Member Type Attended as Of¶ This condition on the Attendance Dates tab looks at a date range, and finds all attendees with a specific Member Type in their organization. You select the Member Type(s) from a drop-down menu. You can also specify the Program, Program/Division, or even Program/Division/Organization. Use Case You might want to know what the ratio of leaders to members was during a specific date range. You select the date range you want, and you run it twice. The first time, you run it using the One Of comparison and checking the various leader types. The second time, you find all those who were regular members. Then divide the counts to get your ratio. Or you may just want to find all the Teachers who attended during that date range. Note This condition is not looking for guests, just actual members of the organizations. To use this effectively, you will want to understand Member Types and how they are used within an organization. See also Organization Member Types
http://docs.touchpointsoftware.com/SearchBuilder/QB-AttendMemberTypeAsOf.html
2019-03-18T19:21:49
CC-MAIN-2019-13
1552912201672.12
[]
docs.touchpointsoftware.com
An Act to amend 29.336 (2) (b) and 29.336 (2) (c) of the statutes; Relating to: rules that prohibit feeding deer in certain counties. (FE) Amendment Histories 2017 Wisconsin Act 41 (PDF: ) Bill Text (PDF: ) Fiscal Estimates and Reports LC Bill Hearing Materials Wisconsin Ethics Commission information 2017 Assembly Bill 61 - A - Rules
https://docs-preview.legis.wisconsin.gov/2017/proposals/sb68
2019-03-18T19:19:44
CC-MAIN-2019-13
1552912201672.12
[]
docs-preview.legis.wisconsin.gov
Tabs¶ Tabs are a layout container containing individual tabs, each of which contains one or more modules. This layout provides a compact display of many different modules. The preceding example illustrates the following components of a tab module: - Title for the tabs ( Media) - Label for an open tab ( Videos) - Two items appearing under the open tab ( Space Shuttle Discovery Liftoffand Space Shuttle Atlantis Liftoff) - Label for a closed tab ( Images) (Your version of Brightspot may render tabs differently.) In the content edit page, open or create the item to which you want to add a tab module. Under Overrides, expand Module Placement. Drop-down lists Above, Aside, and Below appear for the tab module’s position. (For an illustration of these positions, see the diagram Examples of Module Placement.) From one of the drop-down lists, select Replace. For an explanation of the possible selections for module placement, see Module Placement. Click add_circle_outline, and select Tabs. A form appears. In the Title field, enter a title. Under Tabs, click add_circle_outline. A Tab Item form appears. In the Tab Item form, do the following: - In the Label field, enter a label for the tab. - In the Title field, enter a title for the tab. - Under Tab Content, do the following: - Click add_circle_outline to add a module. The corresponding form appears. - Fill out the module form. - Repeat steps a–b to add additional items to this tab. - Depending on your current theme, there is an Overrides tab with additional fields you can use to further define the module. Repeat steps 6–7 to add additional tabs. Depending on your current theme, there is an Overrides tab with additional fields you can use to further define the module. Your Tabs form looks similar to the following: Click Publish. For information about adding different types of modules under tabs, see the following:
http://docs.brightspot.com/cms/editorial-guide/modules/tab.html
2019-02-16T02:52:33
CC-MAIN-2019-09
1550247479838.37
[]
docs.brightspot.com
Path to the beets library file. Defaults to ~/.beetsmusic.blb on Unix and %APPDATA\beetsmusic.blb on Windows. The directory to which files will be copied/moved when adding them to the library. Defaults to ~. A space-separated list of glob patterns specifying file and directory names to be ignored when importing. Defaults to .* *~ (i.e., ignore Unix-style hidden files and backup files). When importing album art, the name of the file (without extension) where the cover art image should be placed. Defaults to cover (i.e., images will be named cover.jpg or cover.png and placed in the album’s directory). A space-separated list of plugin module names to load. For instance, beets includes the BPD plugin for playing music. A colon-separated list of directories to search for plugins. These paths are just added to sys.path before the plugins are loaded. The plugins still have to be contained in a beetsplug namespace package. The amount of time that the SQLite library should wait before raising an exception when the database lock is contended. This should almost never need to be changed except on very slow systems. Defaults to 5.0 (5 seconds). Format to use when listing individual items with the beet list command. Defaults to $artist - $album - $title. The -f command-line option overrides this setting. Format to use when listing albums with the beet list command. Defaults to $albumartist - $album. The -f command-line option overrides this setting.
https://beets.readthedocs.io/en/1.0b15/reference/config.html
2019-02-16T04:45:08
CC-MAIN-2019-09
1550247479838.37
[]
beets.readthedocs.io
With the increase in popularity of smart devices, like Amazon Echo, voice user interaction (VUI) has already become a mainstream mode of human-computer interaction. You can use Amazon Sumerian Hosts to bring life to your VUI, and create interactive virtual concierge experiences. The Host can personalize a greeting for each user based on facial recognition, walk your users through your company’s services and offerings, and answer commonly asked questions. In this Sumerian Concierge demo, our Host, Cristine, introduces a user to the Sumerian team. When she recognizes her teammate through a webcam, she greets them by name and shows them their desk’s location. She’s also able to help visitors learn more about the Sumerian office space. The kiosk supports both voice and touch interactions to accommodate settings with various noise requirements. Powered by AWS artificial intelligence services, the Host understands the intent of different phrases as the same request, and responds accordingly. For example, Cristine can understand “Open floor plan” and “Show me map” as the same intent. We added emotional intelligence for the Host by changing the tone and content of her greeting, based on how her underlying AI interprets the user’s facial expression. She also has a varied response during each interaction. Additionally, the Sumerian Host component’s Point of Interest system is coupled with computer vision. This enables Cristine to maintain eye contact with a user to increase user engagement. Although this demo is built for touch screens and laptops, you can easily extend it to mobile or virtual reality applications so that your users can continue to interact with our Sumerian Host even when they are not on site. Hardware Setup The minimum hardware requirement for this experience is a laptop with a webcam, microphone, and speaker. For a kiosk installation, we additionally recommend a touch screen and an external webcam. Technologies Used in the Scene At the core of this experience is the ability to converse with the Sumerian Host. Sumerian integrates the AWS JavaScript API and other APIs to personalize the experience for each user and enhance interactivity.
https://docs.sumerian.amazonaws.com/articles/concierge-experience/
2019-02-16T04:02:12
CC-MAIN-2019-09
1550247479838.37
[]
docs.sumerian.amazonaws.com
Contributing to this project
- All potential contributors must read the Contributor Code of Conduct and follow it
- Fork the repository on GitHub or GitLab
- Create a new branch, e.g., git checkout -b bug/12345
- Fix the bug and add tests (if applicable)
- Add yourself to the AUTHORS.rst
- Commit it. While writing your commit message, follow these guidelines - see the Example Commit Message below
- Push it to your fork
- Create either a pull request (GitHub) or a merge request (GitLab) for us to merge your contribution
After this last step, it is possible that we may leave feedback in the form of review comments. When addressing these comments, you can follow two strategies:
- Amend/rebase your changes into an existing commit
- Create a new commit and push it to your branch
This project is not opinionated about which approach you should prefer. We only ask that you are aware of the following:
- Neither GitHub nor GitLab notifies us that you have pushed new changes. A friendly ping is welcome.
https://toolbelt.readthedocs.io/en/0.5.1/contributing.html
2019-02-16T04:19:32
CC-MAIN-2019-09
1550247479838.37
[]
toolbelt.readthedocs.io
Create a stacked column visualization for a breakdown widget
To follow changes over time in the relative proportion of breakdown elements for an indicator, use a stacked column visualization in a breakdown widget.
Before you begin: Role required: pa_power_user or admin.
About this task: This visualization shows the relative proportion of breakdown elements in a single column, and shows a column for every point in time that indicator scores are collected. To select the time period over which changes are tracked, go to the Date Settings tab. (Figure 1. Stacked column visualization - breakdown.)
Select the Stacked Column visualization. You can let the user switch between visualizations: select Show visualization selector in the Display settings tab.
In the Indicator field, select the main indicator which you want to break down.
(Optional) To show only the scores that match one element of a breakdown, select a filtering breakdown in the Breakdown and Element fields. To use a breakdown to filter the data, you must specify an element.
Select the breakdown to group the scores by. If you did not select a breakdown as a filter, select the grouping breakdown in the Breakdown field. If you selected a filtering breakdown in the Breakdown and Element fields, select the grouping breakdown in the 2nd Breakdown field. You can let the user switch between breakdowns to apply: select Show breakdown selector in the Display settings tab.
(Optional) Fill in any of the following fields:
- Time series: Run a function on the scores for a specific time period, such as applying a 7-day sum or average. For more information, see Time series aggregations in scorecards and widgets.
- Sort on: Sort the data on this attribute.
Related concepts: Grouping by breakdown and filtering by breakdown; Interacting with breakdown widgets on dashboards. Related reference: Optional settings for breakdown widgets.
https://docs.servicenow.com/bundle/kingston-performance-analytics-and-reporting/page/use/performance-analytics/task/create-stacked-column-bkdown-widget.html
2019-02-16T04:00:57
CC-MAIN-2019-09
1550247479838.37
[]
docs.servicenow.com
#include "gossip_propagation_strategy.hpp" This class provides strategy for propagation states in network Emits exactly (or zero if provider is empty) amount of peers at some period note: it can be inconsistent with the peer provider Initialize strategy with Provides observable that will be emit new results with respect to own strategy Implements iroha::PropagationStrategy.
https://docs.iroha.tech/dd/d61/classiroha_1_1GossipPropagationStrategy.html
2019-02-16T03:54:48
CC-MAIN-2019-09
1550247479838.37
[]
docs.iroha.tech
In WordPress, each menu is saved with a unique ID in the database, which you can reference when making advanced tweaks to your site, such as showing different menus conditionally. Follow the steps below to find the ID of your menu: Log into your WordPress website and navigate to Appearance » Menus. Select the menu you want to get the ID of. Once selected, take a look at your browser URL and find the menu ID at the end.
https://docs.conj.ws/development/find-a-menu-id
2019-02-16T03:36:58
CC-MAIN-2019-09
1550247479838.37
[]
docs.conj.ws
How to define commissions in VendSoft
You can define commissions at the Machine level and the Location level. If you define commissions on both levels, the Machine level takes precedence. The system supports 5 different commission types:
- % of Gross Sales Commissions (% Gross Sales). Gross Sales = Sum of (Sold Qty * Vend Price).
- % of Gross Profit Commissions (% Gross Profit). Gross Profit = Gross Sales - COGS, or in other words Gross Profit = Sum of (Sold Qty * Vend Price) - Sum of (Sold Qty * Avg Cost).
- % Cash Collected Commissions (% Cash Collected).
- Monthly Commissions (Monthly).
- Per Item Sold Commissions (Per Item Sold).
- None - no commissions are paid.
Define on Machine level: go to the Machines tab, select the machine in the Machines grid and right-click on it to show the context menu. Select the Commissions menu. The system opens the Machine Commissions dialog where you make your settings.
Define on Location level: you can define commissions on the Location level when you create the location record or at a later moment when you edit the location.
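To make the commission formulas concrete, here is a small, purely illustrative Python sketch (this is not VendSoft code; the function names and data layout are hypothetical) that computes commissions from sold quantities, vend prices, and average costs:
# Illustrative sketch only -- not VendSoft code; all names are hypothetical.
def gross_sales(lines):
    # lines: iterable of (sold_qty, vend_price, avg_cost) tuples
    return sum(qty * price for qty, price, _ in lines)

def gross_profit(lines):
    # Gross Profit = Gross Sales - COGS = Sum(Sold Qty * Vend Price) - Sum(Sold Qty * Avg Cost)
    return sum(qty * (price - cost) for qty, price, cost in lines)

def commission(lines, commission_type, rate=0.0, flat=0.0):
    if commission_type == "% Gross Sales":
        return rate * gross_sales(lines)
    if commission_type == "% Gross Profit":
        return rate * gross_profit(lines)
    if commission_type == "Per Item Sold":
        return flat * sum(qty for qty, _, _ in lines)
    if commission_type == "Monthly":
        return flat  # fixed amount per month
    # "% Cash Collected" would multiply the rate by the cash actually collected (not tracked here);
    # "None" pays nothing.
    return 0.0

sales = [(10, 2.00, 0.75), (4, 1.50, 0.60)]            # two products sold in the period
print(commission(sales, "% Gross Sales", rate=0.10))   # 0.10 * 26.00 ≈ 2.60
print(commission(sales, "% Gross Profit", rate=0.10))  # 0.10 * (26.00 - 9.90) ≈ 1.61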
https://docs.vendsoft.com/articles/commissions/
2019-02-16T04:18:46
CC-MAIN-2019-09
1550247479838.37
[array(['../../screenshots/articles/commissions.png', None], dtype=object)]
docs.vendsoft.com
Adding paths on the API proxy Performance tab: see the following Apigee Community article. (EDGEUI-902)
Bugs fixed: The following bugs are fixed in this release. This list is primarily for users checking to see if their support tickets have been fixed. It's not designed to provide detailed information for all users.
https://docs.apigee.com/release/notes/170215-apigee-edge-public-cloud-release-notes-ui?authuser=0&hl=ja
2021-11-27T15:01:54
CC-MAIN-2021-49
1637964358189.36
[]
docs.apigee.com
Data management
Most development platforms or digital products come with their own data model that doesn’t quite match whatever already exists in an organization. Dealing with this situation is achieved through complex mappings or by building intermediary or transient databases that only add complexity. What FlowX.AI does differently is our so-called "no data model" model, which we build step by step as it suits the business flow. This means that we don’t have to spend time on complex data mappings when building an app. Instead, we just create the needed data points while designing the process. When we need to integrate with an external system, we just map the data model available in the business process to the data model expected and/or returned by the integration. This brings incredible flexibility in business operations as well as in changing, replacing or adding new business applications. FlowX.AI is also built with AI at the core. Winning in the digital age cannot be done without data. Traditional enterprises sit on tons of data, yet data is siloed, meaning it is stored in uncorrelated systems, which makes it difficult to use ML algorithms to optimize processes or forecast business outcomes. With this challenge in mind, the FlowX.AI platform was built from the start with the ability to manage data in such a way that it becomes useful in business operations.
https://docs.flowx.ai/flowx-engine/data-management
2021-11-27T15:11:25
CC-MAIN-2021-49
1637964358189.36
[]
docs.flowx.ai
All Categories
Getting Started: New to DailyStory? Start here for a step-by-step walk-through, helpful videos and introductions to our documentation guides.
Account Setup and Configuration: Articles related to getting DailyStory set up and configured.
Integrations
Contacts: A DailyStory Contact is a customer or prospective customer managed by DailyStory. While there is only a single contact for an individual, the contact may be part of multiple campaigns. This is called a lead.
Segments: A segment in DailyStory enables you to organize your contacts into smaller groups. This enables you to target those groups with specific messages using the most appropriate channel.
Campaigns: Campaigns organize and manage contacts and assets around activities. Activities include visits to pages, scheduled emails, drip marketing, push notifications and more.
Features: Explore DailyStory features from Personalization, Scheduling, Events, Autopilot automation and more.
Autopilot Automation: Autopilot is DailyStory’s friendly, drag-and-drop automation builder. Using Triggers, Actions and Conditions you can model out complex user journeys and experiences.
Email Marketing: DailyStory's email marketing engine sends targeted and personalized emails to your segments and contacts in a campaign. Emails are sent based on a schedule or a workflow. You can also send individual emails.
Text Message Marketing: DailyStory's Text Message marketing is a powerful tool for sending personalized communication to your customers' mobile devices. Text Messages are sent based on a schedule or a workflow. You can also send individual text messages.
Quickstarts: Step-by-step instructions for building out popular campaigns with assets and automation.
Reports: Reports and Dashboards enable you to see insights about your contacts as well as track, measure and monitor how your contacts and prospective clients interact with your marketing.
Give Us Feedback: Article links to reviews and feedback.
https://docs.dailystory.com/
2021-11-27T15:25:22
CC-MAIN-2021-49
1637964358189.36
[array(['https://files.helpdocs.io/q8t48d995t/other/1591019461734/getting-started.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591023409487/gear-setu.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591018760121/contacts.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1590961614177/line-chart.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591040426071/megaphone.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1590959468073/games-medal.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591039993359/workflow.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1590939163459/email-responder.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1590954613805/text-ms.png', 'Category icon'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1592116508625/1569849403665.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) array(['https://files.helpdocs.io/q8t48d995t/other/1591035716058/my-avatar.png', 'author avatar'], dtype=object) ]
docs.dailystory.com
Zoning provides access control in the SAN topology. Zoning defines which HBAs can connect to which targets. When you configure a SAN by using zoning, the devices outside a zone are not visible to the devices inside the zone. Zoning has the following effects: ...
https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.storage.doc/GUID-E7818A5D-6BD7-4F51-B4BA-EFBF2D3A8357.html
2021-11-27T15:06:09
CC-MAIN-2021-49
1637964358189.36
[]
docs.vmware.com
You register the vRealize Orchestrator server with a vCenter Single Sign-On server by using the vSphere authentication mode. Use vCenter Single Sign-On authentication with vCenter Server 6.0 and later. Prerequisites - Download and deploy the latest version of the vRealize Orchestrator Appliance. See Download and Deploy the vRealize Orchestrator Appliance. - Install and configure a vCenter Server with vCenter Single Sign-On running. See the vSphere documentation. Procedure - Select vSphere from the Authentication mode drop-down menu. - In the Host address text box, enter the fully qualified domain name or IP address of the Platform Services Controller instance that contains the vCenter Single Sign-On and click Connect. Note: If you use an external Platform Services Controller or multiple Platform Services Controller instances behind a load balancer, you must manually import the certificates of all Platform Services Controllers that share a vCenter Single Sign-On domain. Note: To integrate a different vSphere Client with your configured vRealize Orchestrator environment, you must configure vSphere to use the same Platform Services Controller registered to vRealize Orchestrator. For High Availability vRealize Orchestrator environments, you must replicate the Platform Services Controller instances behind the vRealize Orchestrator load balancer server. - Review the certificate information of the authentication provider and click Accept Certificate. - Enter the credentials of the local administrator account for the vCenter Single Sign-On domain. Click REGISTER. By default, this account is administrator@vsphere.local and the name of the default tenant is vsphere.local. - In the Admin group text box, enter the name of an administrators group and click SEARCH. For example, vsphere.local\vcoadmins - Select the administration group you want to use. - Click SAVE CHANGES. A message indicates that your configuration is saved successfully. Results You have successfully finished the vRealize Orchestrator server configuration. What to do next - Verify that CIS.
https://docs.vmware.com/en/vRealize-Orchestrator/8.5/com.vmware.vrealize.orchestrator-install-config.doc/GUID-61267B72-2963-4E22-9630-DA90AF40EC05.html
2021-11-27T14:56:28
CC-MAIN-2021-49
1637964358189.36
[]
docs.vmware.com
Create a "master" encryption key for the new encryption zone. Each key will be specific to an encryption zone. Ranger supports AES/CTR/NoPadding as the cipher suite. (The associated property is listed under HDFS -> Configs in the Advanced hdfs-site list.) Key size can be 128 or 256 bits. Recommendation: create a new superuser for key management. In the following examples, superuser encr creates the key. This separates the data access role from the encryption role, strengthening security. Create an Encryption Key using Ranger KMS (Recommended) In the Ranger Web UI screen: Choose the Encryption tab at the top of the screen. Select the. For information about rolling over and deleting keys, see Using the Ranger Key Management Service in the Ranger KMS Administration Guide. Create an Encryption Key using the CLI The full syntax of the hadoop key create command is as follows: [create <keyname> [-cipher <cipher>] [-size <size>] [-description <description>] [-attr <attribute=value>] [-provider <provider>] [-help]] Example: # su - encr # hadoop key create <key_name> [-size <number-of-bits>] The default key size is 128 bits. The optional -size parameter supports 256-bit keys, and requires the Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy File on all hosts in the cluster. For installation information, see Installing the JCE. Example: # su - encr # hadoop key create key1 To verify creation of the key, list the metadata associated with the current user: # hadoop key list -metadata For information about rolling over and deleting keys, see Using the Ranger Key Management Service in the Ranger KMS Administration Guide.
https://docs.cloudera.com/HDPDocuments/HDP2/HDP-2.4.0/bk_Security_Guide/content/create-encr-key.html
2021-11-27T15:01:52
CC-MAIN-2021-49
1637964358189.36
[array(['figures/6/figures/add-new-key.png', None], dtype=object) array(['figures/6/figures/add-key-scrn2.png', None], dtype=object)]
docs.cloudera.com
Package Metrics and Formulas Package consumption is calculated using a combination of metrics (decimal values) in a formula. The metrics that are used for package licensing are unique to each package. The licensed units of each package are based on the objects that it contains. For example, the package SAP Payroll Processing uses the number of user primary records, while SAP E-Recruiting uses the number of employees. Other metrics include the number of orders, contracts, patients, etc. In many cases, a single metric is the consumption (for example, the number of end users), or the formula to calculate the consumption is relatively simple (for example, metric1 + metric2). In rare cases, the formula is more complicated, and a different formula based on the SAP Basis release or SAP price list version may be required. When measured, SAP packages often return more metrics than are required for licensing purposes, making it difficult to determine which metric or metrics to use.
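As a purely illustrative sketch (this is not FlexNet Manager Suite code; the data layout and the third package's formula are invented), the idea of evaluating a package-specific formula over measured metrics could look like this in Python:
# Illustrative only -- hypothetical names; not FlexNet Manager Suite functionality.
measured_metrics = {"user_primary_records": 120.0, "employees": 45.5, "orders": 300.0}

package_formulas = {
    "SAP Payroll Processing": lambda m: m["user_primary_records"],   # single metric
    "SAP E-Recruiting": lambda m: m["employees"],                    # single metric
    "Hypothetical Package": lambda m: m["orders"] + m["employees"],  # metric1 + metric2
}

for package, formula in package_formulas.items():
    print(package, "consumption:", round(formula(measured_metrics), 2))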
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/concepts/SAP-PkgMetricsFormulas.html
2021-11-27T14:10:23
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
Vector3D(Double, Double, Double) Constructor
Definition
Important: Some information relates to prerelease product that may be substantially modified before it’s released. Microsoft makes no warranties, express or implied, with respect to the information provided here.
public: Vector3D(double x, double y, double z);
public Vector3D (double x, double y, double z);
new System.Windows.Media.Media3D.Vector3D : double * double * double -> System.Windows.Media.Media3D.Vector3D
Public Sub New (x As Double, y As Double, z As Double)
Parameters
Examples
// Translates a Point3D by a Vector3D using the overloaded + operator.
// Returns a Point3D.
Vector3D vector1 = new Vector3D(20, 30, 40);
Point3D point1 = new Point3D(10, 5, 1);
Point3D pointResult = new Point3D();
pointResult = point1 + vector1;
// pointResult is equal to (30, 35, 41)
' Translates a Point3D by a Vector3D using the overloaded + operator.
' Returns a Point3D.
Dim vector1 As New Vector3D(20, 30, 40)
Dim point1 As New Point3D(10, 5, 1)
Dim pointResult As New Point3D()
pointResult = point1 + vector1
' pointResult is equal to (30, 35, 41)
https://docs.microsoft.com/en-us/dotnet/api/system.windows.media.media3d.vector3d.-ctor?view=windowsdesktop-5.0
2021-11-27T15:18:51
CC-MAIN-2021-49
1637964358189.36
[]
docs.microsoft.com
Does Nebula Operator support the v1.x version of Nebula Graph? No, because the v1.x version of Nebula Graph does not support DNS, and Nebula Operator requires the use of DNS. Does Nebula Operator support the rolling upgrade feature for Nebula Graph clusters? Nebula Operator currently supports cluster upgrading from version 2.5.x to version 2.6.x. Is cluster stability guaranteed if using local storage? There is no guarantee. Using local storage means that the Pod is bound to a specific node, and Nebula Operator does not currently support failover in the event of a failure of the bound node. How to ensure the stability of a cluster when scaling the cluster? It is suggested to back up data in advance so that you can roll back data in case of failure. Last update: November 16, 2021
https://docs.nebula-graph.io/2.6.1/nebula-operator/7.operator-faq/
2021-11-27T14:38:27
CC-MAIN-2021-49
1637964358189.36
[]
docs.nebula-graph.io
Document Type Article Publication Date March 2008 Abstract Foundational research on police use of bicycles for patrol. A participant/observation research design was used. A five-city, 32-shift study on the output of police bicycle patrols was conducted. Same and similar ride-alongs were conducted with bicycle and automobile patrols. All contacts (n = 1,105) with the public were recorded and coded. These data included: number of people, tenor, seriousness and origination for each contact. Recommended Citation Menton, C. (2008). Bicycle patrols: an underutilized resource. Retrieved from In: Policing: an International Journal of Police Strategies & Management, vol. 31, no. 1, 2008.
https://docs.rwu.edu/sjs_fp/9/
2021-11-27T13:47:21
CC-MAIN-2021-49
1637964358189.36
[]
docs.rwu.edu
All Access Evidence
FlexNet Manager Suite 2020 R2 (On-Premises)
The Access evidence tab on the All Evidence page lists the collected access evidence records for the server applications that require a Client Access License (CAL), including ignored, inactive, and unrecognized access evidence. For more information on access evidence, see Access Evidence.
https://docs.flexera.com/FlexNetManagerSuite2020R2/EN/WebHelp/topics/Evid-AllAccEvid.html
2021-11-27T15:10:33
CC-MAIN-2021-49
1637964358189.36
[]
docs.flexera.com
You can transfer all the leads of one sales rep to another in a single operation. To bulk transfer leads, follow the steps below: 1. Navigate to Sales Team > SR and Account Mapping. All the SR and Account Mappings will be displayed. 2. Click Bulk Transfer. The Bulk Transfer dialog will be displayed. 3. From the From Current Sales Rep dropdown list, select the sales rep you want to transfer the leads from. 4. From the To Sales Rep dropdown list, select the sales rep you want to transfer the leads to. 5. Click Transfer. A confirmation dialog will be displayed. 6. Click CONFIRM. All the leads will be transferred to the selected sales rep.
https://docs.leadangel.com/knowledge-base/bulk-transferring-leads-in-sr-and-account-mapping/
2021-11-27T13:57:07
CC-MAIN-2021-49
1637964358189.36
[]
docs.leadangel.com
Home > Journals > RR > Vol. 4 (2008) > Iss. 1 Article Title Understanding the Implications of a Global Village Abstract Whether the world is shrinking, expanding, or remaining the same metaphorical size, it is clear that how we communicate across physical and cultural boundaries is changing at an accelerated rate., we will be able to reach some sort of consensus as a global village about how these issues should be addressed in order to benefit all members of our village equally. Recommended Citation Dixon, Violet (2008) "Understanding the Implications of a Global Village," Reason and Respect: Vol. 4 : Iss. 1 , Article 13. Available at:
https://docs.rwu.edu/rr/vol4/iss1/13/
2021-11-27T13:54:10
CC-MAIN-2021-49
1637964358189.36
[]
docs.rwu.edu