Dataset columns: content (string, 0–557k chars) · url (string, 16–1.78k chars) · timestamp (timestamp[ms]) · dump (string, 9–15 chars) · segment (string, 13–17 chars) · image_urls (string, 2–55.5k chars) · netloc (string, 7–77 chars)
Performance and Scalability Comparisons

This section compares performance and scalability between GVP 8.x and previous releases, using the profiles VoiceXML_App1 and VoiceXML_App2.

Performance Comparisons

Scalability Comparisons

For applications that are CPU-dependent (or applications in which bottlenecks occur due to CPU cycles), GVP 8.x can use additional CPU cycles and cores. Use-case results showed that peak port densities scaled linearly with increases in CPU clock speed.

Table: Examples of Peak Capacity using VoiceXML_App1

Figure: CPU Clock Speed Versus Peak Capacity is a graphical depiction of the peak port density in Table: Call Control Platform Bandwidth Usage.

CPU Clock Speed Versus Peak Capacity

If the total clock speed were increased by 100%, the peak capacity would be expected to increase by approximately 90–100%, assuming:
- The types of CPUs are the same as those in Table: Call Control Platform Bandwidth Usage.
- The VoiceXML_App1 application is used.
- CPU cycles remain the overall system bottleneck.

High Performance Configuration

The Media Control Platform can support more than 400 ports on a single host; however, some configuration changes are required. Use Genesys Administrator to configure the Media Control Platform for high performance by modifying the options and default values in the table below, and configure the Windows Registry on the Media Control Platform to support the NGI, the GVPi, or both.
https://docs.genesys.com/Documentation/GVP/85/GVP85HSG/PerfPlanScalePASC
2019-06-16T05:33:41
CC-MAIN-2019-26
1560627997731.69
[array(['/images/6/69/CPU_Clock_Speed_Versus_Peak_Capacity.png', None], dtype=object) ]
docs.genesys.com
Hello, and welcome! Let’s figure out how to use the Flow drawing tool. Here is a quick TL;DR recap:
- At Boomi Flow, the word we use for an app is flow.
- We build a flow by connecting an Identity service for authentication, dragging and dropping elements from a sidebar into a canvas, and configuring the elements.
- Elements are connected with outcomes.
- Publishing the app gives us the app URL.

Aah… the Flow drawing tool. When we sign into our Flow tenant (Looking for a Flow tenant? Get a free fully-loaded Boomi Flow account here.) it leads us straight to the drawing tool. This is a bird’s-eye view of the Flow drawing tool. Let’s see where the drawing tool takes us:

The Flows tab — When we log in to the drawing tool, we are in the Flows tab by default. We can create a flow by clicking New Flow. The Flows tab also lists all the flows in our current tenant. We can also edit a flow, run a flow, or delete a flow from here.

The Pages tab — Page layouts in Flow let us structure the pages our users will interact with. The Pages tab lists all the available page layouts in a tenant. We can create a page layout by clicking New Page Layout, which opens a new layout we can then configure. (We can also create page layouts on the fly from within the canvas when we are building a flow.) Other things we can do from this tab? Edit or delete an existing page layout.

The Values tab — Values in Boomi Flow are similar to variables. They are containers that have a name and contain data. The Values tab lists all the available values in the current tenant. We can create a value by clicking New Value. (We can also create values from the canvas when we are building a flow.) As we can guess, we can edit or delete existing values from the Values tab as well.

The Services tab — The service element in Flow lets us connect our apps to other applications, service providers, or databases. The Services tab lists all the services we have installed in the current tenant. We can add a new service by clicking Install Service here. We can also edit or delete existing services in our tenant from here.

The Types tab — Types in Flow are used to create representations of real-world objects. (In programming terms, a type is a type/interface.) The Types tab lists all the types we have in the current tenant. We add a new type by clicking New Type. And yes, we can edit or delete existing types from the Types tab.

The Assets tab — Assets are static resources that we use in an app. These could be images, presentations, spreadsheets, text files, or code snippets. The Assets tab lists all the assets we have in the current tenant. We can add a new asset to our tenant by uploading a file from this tab. We can preview our assets from this tab, and also delete or rename them.

The Tenant tab — The Tenant tab lists all the builders and users associated with our tenant. We can add new builders from the Tenant tab as well. We can also configure security and restriction options from here. We can add subtenants from within tenants, but not from subtenants. The options to create subtenants will show up only if we are in the main tenant.

The API tab — Boomi Flow has an in-built client that lets us create API requests. The API tab lets us access Flow endpoints right from within the drawing tool.
The Import/Export tab — Before we start jumping into importing and exporting flows… here is a fun fact. The word ‘love’ appears 2,191 times in Shakespeare’s plays. Some people believe Shakespeare’s plays were actually written by another playwright called Marlowe; however, that has not been confirmed as a fact. Okay, to continue with Flow now… We can import and export flows from the eponymous Import/Export tab. Builders, armed with the flow package (JSON) or the shared token, will be able to access any flows we choose to share with them, from their own tenants.

The Players tab — The Players tab is the place where we can create a new player, or edit or delete an existing player. We can also preview how a flow looks with a player from the Players tab.

The Macro tab — We can execute JavaScript code from within our flow. This is the place to be when we want to create, edit, or delete a macro.

The Metrics tab — How many states were created in our tenant last month? Were there any service failures? What about service requests? We find answers to all such flow questions in the Metrics tab.

The Docs link — Docs is a beautiful place by the sea. Clicking the Docs link brings us to this very site, where we can learn more about how Flow works, and build cool apps that make our lives more efficient.

The Support link — We have the full extent of Flow support! Have a question? Write to us, and someone from the Flow support team will get back with answers.

The Sign out link — Clicking the Sign Out link signs us out from the existing tenant. We are led back to the Flow sign-in page.

Two more things… Let’s look at a couple of other things we can do from the Flow drawing tool. We can change the tenant we are currently in, and we can go to AtomSphere, MDM, or API Management.

Changing the tenant — A Flow tenant is the place where all our flows, values, service integrations, and content are stored. A subtenant is a new tenant under the same tenant account. We can change the tenant we are in by clicking the gravatar icon on the right-hand side.

Opening AtomSphere, MDM, or API Management — We can open AtomSphere, MDM, or API Management in a new tab by clicking Flow, and selecting the option of our choice.

Now we know how the Flow drawing tool works. Next up… the Flow canvas! We are getting deliciously close to building cool enterprise apps and integrations.
https://docs.manywho.com/working-with-the-flow-drawing-tool/
2019-06-16T05:48:39
CC-MAIN-2019-26
1560627997731.69
[array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-9.40.47-PM-minishadow-1024x548.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-10.24.36-PM-minishadow-1024x547.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-10.28.07-PM-minishadow-1024x544.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-10.30.58-PM-minishadow-1024x545.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-10.36.12-PM-minishadow-1024x543.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-10.41.04-PM-minishadow-1024x544.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-20-at-11.50.53-PM-minishadow-1024x546.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-12.12.42-AM-minishadow-1024x545.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-12.53.14-AM-minishadow-1024x546.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-12.58.51-AM-minishadow-1024x546.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-1.14.58-AM-minishadow-1024x545.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-1.40.25-AM-minishadow-1024x547.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-1.58.50-AM-minishadow-1024x545.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-2.04.18-AM-minishadow-1024x546.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-21-at-2.14.15-AM-minishadow-1024x545.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-22-at-1.46.23-PM-minishadow-1024x532.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-22-at-1.52.12-PM-minishadow-1024x548.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-22-at-1.59.46-PM-minishadow-1024x547.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-23-at-4.56.20-AM-minishadow-1024x547.png', None], dtype=object) array(['https://docs.manywho.com/wp-content/uploads/2016/08/Screen-Shot-2018-04-23-at-7.23.35-AM-minishadow-1024x547.png', None], dtype=object) ]
docs.manywho.com
Note: You must have Super Admin or Admin permissions in Engage in order to schedule pulses and manage questions.

Get targeted feedback by segment

In addition to the main Engage pulse question, you can also send additional pulses to your segments. Segment pulses are designed to be sent when topics are top of mind, so you can only create segment pulses for the very next pulse that's queued to go out.
- Go to the Engage Dashboard and open the Pulses section using the left side navigation.
- Click Add New Pulse.
- Enter your question by choosing one from the question bank or creating a custom question.
- Select Specific Segments in the Send to section.
- Choose which segments to have this pulse sent to. You can choose one or many.
- Verify your question in the preview window and Save when you're done.

Your segment-level question will now appear in the queue for the very next pulse that's scheduled to go out. All employees will get the main Company pulse, and employees belonging to a segment scheduled to get a Segment pulse will get that question as well (two surveys for the week).

View results

Viewing segment-level results on the Engage Dashboard is easy. From the Overview or Responses pages, just click the arrow next to the TINYpulse for the week of... label to view all of the available questions. Results from the main company question are separate from the segment-level questions, so you can get clear insights on sentiment from a specific segment or the org as a whole.

The employee experience

Employees receiving two pulses (the main company survey and a segment-level question) will still receive just one email from TINYpulse, and both questions will appear on the same survey page. We like to "keep it tiny" and uncomplicated so employees won't feel burdened by the additional question. Employees can respond to both questions, one of them, or neither. They can even respond to one now and go back to give their feedback on the other later, so they can respond on their own time.

- Can I schedule a segment pulse to be sent in the distant future? No; segment pulses can only be scheduled for the very next pulse. Administrators cannot schedule segment pulses for any company pulse beyond the next one that is scheduled, whatever your cadence (every week, every two weeks, every four weeks).
- Who can schedule segment pulses? Engage Super Admins and Admins can schedule segment pulses. Segment admins cannot create a pulse for their segments of responsibility. Segment admins should kindly reach out to their friendly Super Admin or Admin for assistance scheduling a pulse for one of their segments.
- Can I still send a segment pulse if the company pulse has been paused? No; pausing a company pulse will automatically pause any segment pulses that have been scheduled. Segment pulses will also be resumed once the main company pulse has been unpaused.
- Do segment pulses contribute to the Compare -> Categories calculations? No; segment pulses are designed to collect localized feedback from specific groups, so results may not be an accurate representation for your entire organization. Therefore, segment pulses are not included in Compare -> Categories metrics.
https://docs.tinypulse.com/hc/en-us/articles/115004449194-Send-Engage-pulses-to-specific-segments
2019-06-16T05:39:12
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/35156119/fd9fd985eb35e83d9313519f/Screen+Shot+2017-09-29+at+8.47.04+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/40029695/59d0760edd1bb89cd34bd9e6/Screen+Shot+2017-11-20+at+3.43.57+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/35157895/0a2fe50545ef610f18d39858/Screen+Shot+2017-09-29+at+8.54.21+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/35158987/ec379a4471d00393c418bd09/Screen+Shot+2017-09-29+at+9.12.48+AM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/40030196/8614d2c50611065237b804c0/Screen+Shot+2017-11-20+at+3.48.26+PM.png', None], dtype=object) array(['https://downloads.intercomcdn.com/i/o/40031193/e4666c3b59bef3f86cc033d8/Screen+Shot+2017-11-20+at+3.59.54+PM.png', None], dtype=object) ]
docs.tinypulse.com
Storage: How Brigade uses Kubernetes Persistent Storage

Brigade allows script authors to declare two kinds of storage:
- per-job caches, which persist across builds
- per-build shared storage, which exists only as long as the build is running

Usage of these is described in the JavaScript docs and the scripting guide. This document describes the underlying Kubernetes architecture of these two storage types.

Brigade and PersistentVolumeClaims

Brigade provisions storage using Kubernetes PVCs. Both caches and shared storage are PVC-backed.

Caches

For a cache, the Brigade worker checks whether a Job asks for a cache. If it does, the worker creates a PVC (if it doesn't already exist) and then mounts it as the job's cache. A Job, in this case, gains its identity from its name and the project it belongs to. So two hooks in the same brigade.js can redeclare a job name and thus share the cache. That PVC is never removed by Brigade. Each subsequent run of the same Job will then mount that same PVC.

Shared Storage

Shared storage provisioning is markedly different from cache provisioning.
- The worker always provisions a shared storage PVC per build.
- Each job may mount this shared storage by setting its storage.enabled flag to true.
- At the end of a build, the storage is destroyed.

In the current implementation, both the after and error hooks may attach to the shared storage volume.

Supporting Brigade Storage

Only certain volume plugins can support Brigade. Specifically, a volume driver must support the ReadWriteMany access mode in order for Brigade to use it. At the time of writing, very few volume plugins support the ReadWriteMany access mode. Ensure that your volume plugin supports ReadWriteMany (see the access modes table in the Kubernetes documentation), or that you're able to use NFS. Only the following volume drivers are tested:
- Minikube's 9P implementation
- Azure's AzureFile storage
- NFS

We believe Gluster will work, but it's untested.

Examples: Using an NFS Server

As Brigade uses storage for caching and short-term file sharing, it is often convenient to use storage backends that are optimized for short-term ephemeral storage. NFS (Network File System) is one protocol that works well for Brigade. You can use the NFS Provisioner chart to easily install an NFS server.

$ helm repo add nfs-provisioner
$ helm install --name nfs nfs-provisioner/nfs-provisioner --set hostPath=/var/run/nfs-provisioner

(Note that RBAC is enabled by default. To turn it off, use --set rbac.enabled=false.)

To use an emptyDir instead of a host mount, set hostPath to an empty string, like so:

$ helm install --name nfs nfs-provisioner/nfs-provisioner --set hostPath=""

If you have plenty of memory to spare, and are more concerned with fast storage, you can configure the provisioner to use a tmpfs in-memory filesystem:

$ helm install --name nfs nfs-provisioner/nfs-provisioner --set hostPath="" --set useTmpfs=true

This chart installs a StorageClass named local-nfs. Brigade projects can each declare which storage classes they want to use, and there are two storage class settings:
- kubernetes.cacheStorageClass: used for the Job cache.
- kubernetes.buildStorageClass: used for the shared per-build storage.

In your project's values.yaml file, set both of those to local-nfs, and then upgrade your project:

# values.yaml
# ...
kubernetes:
  buildStorageClass: local-nfs
  cacheStorageClass: local-nfs

Then:

$ helm upgrade my-project brigade/brigade-project -f values.yaml

If you would prefer to use the NFS provisioner as a cluster-wide default volume provider (and have Brigade automatically use it), you can do so by making it the default storage class:

$ helm install --name nfs nfs-provisioner/nfs-provisioner --set hostPath="" --set defaultClass=true

Because Brigade pipelines can set up and tear down an NFS PVC very fast, the easiest way to check that the above works is to run a brig run and then check the log files for the NFS provisioner:

$ kubectl logs nfs-provisioner-0 | grep volume
I0305 21:20:28.187133 1 controller.go:786]" created
I0305 21:20:28.195955 1 controller.go:803]" saved
I0305 21:20:28.195972 1 controller.go:839] volume "pvc-06e2d938-20bb-11e8-a31a-080027a443a9" provisioned for claim "default/brigade-worker-01c7w0jse5grpkzwesz3htnnv5-master"
I0305 21:20:34.208355 1 controller.go:1028] volume "pvc-06e2d938-20bb-11e8-a31a-080027a443a9" deleted
I0305 21:20:34.216852 1 controller.go:1039] volume "pvc-06e2d938-20bb-11e8-a31a-080027a443a9" deleted from database
I0305 21:21:15.967959 1 controller.go:786] volume "pvc-235dd152-20bb-11e8-a31a-080027a443a9" for claim "default/brigade-worker-01c7w0m8jw1h44vwhvzp4pr2dr-master" created
I0305 21:21:15.973328 1 controller.go:803] volume "pvc-235dd152-20bb-11e8-a31a-080027a443a9" for claim "default/brigade-worker-01c7w0m8jw1h44vwhvzp4pr2dr-master" saved
I0305 21:21:15.973358 1 controller.go:839] volume "pvc-235dd152-20bb-11e8-a31a-080027a443a9" provisioned for claim "default/brigade-worker-01c7w0m8jw1h44vwhvzp4pr2dr-master"
I0305 21:21:26.045133 1 controller.go:1028] volume "pvc-235dd152-20bb-11e8-a31a-080027a443a9" deleted
I0305 21:21:26.052593 1 controller.go:1039] volume "pvc-235dd152-20bb-11e8-a31a-080027a443a9" deleted from database
I0305 21:25:40.845601 1 controller.go:786] volume "pvc-c13e95f0-20bb-11e8-a31a-080027a443a9" for claim "default/brigade-worker-01c7w0wbffk3xhmbwwq114g15v-master" created
I0305 21:25:40.853759 1 controller.go:803] volume "pvc-c13e95f0-20bb-11e8-a31a-080027a443a9" for claim "default/brigade-worker-01c7w0wbffk3xhmbwwq114g15v-master" saved
I0305 21:25:40.853790 1 controller.go:839] volume "pvc-c13e95f0-20bb-11e8-a31a-080027a443a9" provisioned for claim "default/brigade-worker-01c7w0wbffk3xhmbwwq114g15v-master"
I0305 21:25:50.974719 1 controller.go:786] volume "pvc-c746f068-20bb-11e8-a31a-080027a443a9" for claim "default/github-com-brigadecore-empty-testbed-three" created
I0305 21:25:50.994219 1 controller.go:803] volume "pvc-c746f068-20bb-11e8-a31a-080027a443a9" for claim "default/github-com-brigadecore-empty-testbed-three" saved
I0305 21:25:50.994237 1 controller.go:839] volume "pvc-c746f068-20bb-11e8-a31a-080027a443a9" provisioned for claim "default/github-com-brigadecore-empty-testbed-three"
I0305 21:25:56.974297 1 controller.go:1028] volume "pvc-c13e95f0-20bb-11e8-a31a-080027a443a9" deleted
I0305 21:25:56.985432 1 controller.go:1039] volume "pvc-c13e95f0-20bb-11e8-a31a-080027a443a9" deleted from database

Implementation details of note:
- The NFS server used is NFS-Ganesha.
- The Kubernetes provisioner is part of kubernetes-incubator/external-storage.
- Some Linux distros may not have the core NFS libraries installed. In such cases, NFS-Ganesha may not work. You may need to do something like apt-get install nfs-common on the nodes to install the appropriate libraries.
Azure File Setup

If one has a Kubernetes cluster on Azure, and the default storage class is of the non-ReadWriteMany-compatible kubernetes.io/azure-disk variety, one can create an Azure File storage class and then configure the Brigade project to use it instead of the default. See the official Azure File storage class example for the YAML to use. (Hint: The parameters section can be omitted altogether and Azure will use the defaults associated with the existing Kubernetes cluster.) Create the resource via kubectl create -f azure-file-storage-class.yaml. Finally, be sure to set kubernetes.buildStorageClass=azurefile on the Brigade project Helm release, or via the "Advanced" setup if creating the project via the brig CLI.

Errata

- At this point, cache PVCs are never destroyed, even if the project to which they belong is destroyed. This behavior may change in the future.
- Killing the worker pod will orphan shared storage PVCs, as the cleanup routine is part of the worker's shutdown process. If you manually destroy a worker pod, you must also manually destroy the associated PVCs.
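Putting the two storage types together, a brigade.js along the following lines declares both a per-job cache and per-build shared storage. This is a minimal sketch, assuming the Brigade 1.x brigadier API; the image name, cache size, task commands, and mount paths are illustrative assumptions, not values taken from this page.

const { events, Job } = require("brigadier");

events.on("exec", async (e, project) => {
  // First job: builds something and drops the result into shared storage
  // (typically mounted at /mnt/brigade/share) so later jobs can read it.
  const build = new Job("build", "alpine:3.9", ["echo artifact > /mnt/brigade/share/out.txt"]);
  build.storage.enabled = true;   // per-build shared storage (PVC created and destroyed with the build)
  build.cache.enabled = true;     // per-job cache (PVC keyed by job name + project, reused across builds)
  build.cache.size = "500Mi";     // placeholder size

  // Second job: mounts the same per-build shared volume and consumes the artifact.
  const test = new Job("test", "alpine:3.9", ["cat /mnt/brigade/share/out.txt"]);
  test.storage.enabled = true;

  await build.run();
  await test.run();
});

Because both jobs set storage.enabled, the worker provisions one shared PVC for the build and mounts it into each job's pod; the cache PVC, by contrast, survives the build and is remounted the next time a job named "build" runs for the same project.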
https://docs.brigade.sh/topics/storage/
2019-06-16T05:38:18
CC-MAIN-2019-26
1560627997731.69
[]
docs.brigade.sh
Configure Single-Tenancy in Engine

Note: This guide applies only to the delivery environment of Crafter CMS.

Crafter Engine by default is set up for multi-tenancy (multiple sites handled by a single Crafter Engine). There are instances where the deployment is for a single site. This guide explains how to set up Crafter Engine for single tenancy. Assume we have a website in Crafter Studio named editorial, to be deployed on Crafter Engine. To set up single tenancy, follow the instructions listed below.

Configure the Default Name

The default name, as shown below, needs to be configured with the name of the site to be deployed (the site name is editorial for our example), by adding the following lines:

# The default site name, when not in preview or multi-tenant modes
crafter.engine.site.default.name=editorial

Change Simple Multi-Tenancy to Single-Tenant

As mentioned above, Crafter Engine is set up for multi-tenancy by default. To change it to single-tenant, comment out the import line in your services-context.xml and rendering-context.xml files like so:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- <import resource="classpath*:crafter/engine/mode/multi-tenant/simple/services-context.xml"/> -->

</beans>

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
                           http://www.springframework.org/schema/beans/spring-beans.xsd">

    <!-- <import resource="classpath*:crafter/engine/mode/multi-tenant/simple/rendering-context.xml"/> -->

</beans>

After making your changes and reloading, your Crafter Engine in delivery is now set up for single tenancy.
https://docs.craftercms.org/en/3.0/system-administrators/engine/configure-engine-single-tenant.html
2019-06-16T05:53:04
CC-MAIN-2019-26
1560627997731.69
[]
docs.craftercms.org
You can start with a plan that best suits your current needs, upgrade anytime if you need a higher plan, or downgrade if you no longer need the features of the higher plan. The plan can be changed from the workspace dashboard's Manage / Workspace settings menu / Plans and billing tab. When you'd like to downgrade, you may get the following message: Please have a look at the message at the top of the screen; it will look something like this: In the example above, you can see that the Jira integration is enabled on the workspace and there is at least one map with Trello integration enabled, which prevents changing to the Basic plan. You can disable the Jira integration from the Manage / Integration settings menu, by clicking on the Disconnect JIRA button: To remove other integrations tied to the story maps, please open the Manage / Manage storymaps menu and you'll see which maps have any of the integrations: You can ignore the Jira integrations here if you've disconnected from Jira at the workspace level. You can remove the integrations from the Options menu: After removing the integrations, you should be able to choose the Basic plan:
http://docs.storiesonboard.com/articles/608657-how-do-i-change-my-subscription-plan
2019-06-16T05:02:05
CC-MAIN-2019-26
1560627997731.69
[array(['https://uploads.intercomcdn.com/i/o/26776202/d595ab2ac88a03a3e9aaa243/image.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/26776439/85326943d19d803aa39c193c/image.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/26777024/a8d1e8a54a8d11b80d49b02f/image.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/26777220/655eddaf71c7f22ff5a5c786/image.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/26777326/ee2a538c4e6d3c7fbe6bfec8/image.png', None], dtype=object) array(['https://uploads.intercomcdn.com/i/o/26777396/b2d477904fa24d0a97b59992/image.png', None], dtype=object) ]
docs.storiesonboard.com
kids outdoor table and chair chairs of activity view larger decorating games.
http://top-docs.co/kids-outdoor-table-and-chair/kids-outdoor-table-and-chair-chairs-of-activity-view-larger-decorating-games/
2019-06-16T05:36:13
CC-MAIN-2019-26
1560627997731.69
[array(['http://top-docs.co/wp-content/uploads/2018/10/kids-outdoor-table-and-chair-chairs-of-activity-view-larger-decorating-games.jpg', 'kids outdoor table and chair chairs of activity view larger decorating games kids outdoor table and chair chairs of activity view larger decorating games'], dtype=object) ]
top-docs.co
Security and Access Control

In this lesson you'll learn how to secure your data model using Couchbase Mobile's built-in security framework. Security rules are used to determine who has read and write access to the database. They live on the server in Sync Gateway and are enforced at all times. The access control requirements for the application are the following:
- A user can create a list and tasks.
- The owner of a list can invite other users to access the list.
- Users invited to a list can create tasks.
- A moderator has access to all lists.
- A moderator can create new tasks and invite users.

Routing

Sync Gateway provides the ability to assign documents to something we call channels. You control assigning documents to channels through the Sync Function. The image below specifies that the List and Task documents are assigned to the same channel ("list.user1.dk39-4kd9-lw9d").

Read Access

Single user: Once the document is mapped to the channel, you can give the user access to it. In doing so, that user will have read access to all the documents in the channel. The following image specifies the access call to grant read access to the list channel.

Multiple users: Now let's consider the action of sharing a list with another user. Currently, the List and Task models do not have an option to specify other user names. In the application, there is no limit to how many users can access a list. There could be thousands! So instead of embedding other users' details on the List model, we'll introduce a third document that joins a list and a user. The image below adds the List User model. When processing a document of type "list-user", the Sync Function must grant the user (doc.name) access to the list (doc.list).

Write Access

Write access security rules are necessary to protect the system. Generally that means checking that the user is allowed to perform the operation before persisting the document to disk.

By user: The image below adds a rule to ensure that only the List owner can persist those documents. The List model is the straightforward case: it checks that the user synchronizing the document is indeed the owner of the list (doc.owner). For List User and Task documents the same security can be enforced because the owner is prefixed on the List document ID ("user1.dk39-4kd9-lw9d").

Roles

Another design requirement of the application is that certain users can be moderators. In that case they can perform more operations than other, non-moderating users. A user can be elected to be a moderator only by users with the admin role. The image below adds a new Moderator model for this purpose. The following security changes to routing, read, and write permissions were added: List, List User, and Task documents are routed to the "moderators" channel. Moderators have access to all the lists and can create new tasks or invite users.
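The rules above are expressed in Sync Gateway's JavaScript Sync Function. The sketch below only illustrates the calls involved (channel, access, role, requireUser, requireRole, requireAccess); the document type strings and the doc.taskList field used for Task documents are assumptions rather than values taken from this lesson, and the moderator write path is omitted for brevity.

// Illustrative Sync Function sketch for the rules described in this lesson.
function (doc, oldDoc) {
  // List document IDs look like "user1.dk39-4kd9-lw9d", so the owner is the prefix.
  function ownerFromId(id) { return id.split(".")[0]; }

  if (doc.type == "list") {
    requireUser(doc.owner);                         // write: only the owner may create the list
    channel(["list." + doc._id, "moderators"]);     // routing: list channel + moderators channel
    access(doc.owner, "list." + doc._id);           // read: owner gets the list channel
    access("role:moderator", "list." + doc._id);    // read: moderators see every list
  } else if (doc.type == "task") {
    requireAccess("list." + doc.taskList);          // write: writer must already have the list channel
    channel(["list." + doc.taskList, "moderators"]);
  } else if (doc.type == "list-user") {
    requireUser(ownerFromId(doc.list));             // write: only the list owner may invite
    access(doc.name, "list." + doc.list);           // read: invited user gains the list channel
    channel(["list." + doc.list, "moderators"]);
  } else if (doc.type == "moderator") {
    requireRole("admin");                           // only admins elect moderators
    role(doc.name, "role:moderator");
  }
}

A function like this runs inside Sync Gateway for every document write, which is why both the routing (channel) and the access-control checks live in one place.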
https://docs.couchbase.com/tutorials/todo-app/design/security.html
2019-06-16T04:29:26
CC-MAIN-2019-26
1560627997731.69
[array(['../_images/02-list-channel.png', '02 list channel'], dtype=object) array(['../_images/03-read-access.png', '03 read access'], dtype=object) array(['../_images/04-multiple-users.png', '04 multiple users'], dtype=object) array(['../_images/05-write-by-user.png', '05 write by user'], dtype=object) array(['../_images/07-role.png', '07 role'], dtype=object)]
docs.couchbase.com
Testing GOV.UK Pay

When your GOV.UK Pay account is made live, you will still have a 'test' account for general testing and experimenting with different settings and features. You can read more in the Switching to live section. When you test, make sure that you use the test API keys described in the Security section.

Make a demo payment

You can try the payment experience as a user, and then view the completed payment in the GOV.UK Pay transactions list.

Test your service with your users

You can create a reusable link to integrate your service prototype with GOV.UK Pay, and test with your users. This feature only works in test accounts, and not in live accounts. For more detail, see the section on How to integrate with the GOV.UK Pay API. You can also read the Versioning section to find out about the evolution of the GOV.UK Pay API. If you experience any problems when testing, please email us at [email protected].

Performance testing

The contract you have with GOV.UK Pay requires you to seek written approval from GOV.UK Pay before you conduct any performance testing. If you'd like to carry out any kind of performance testing, including in a rate-limiting environment, please contact us at [email protected].

Submit a test transaction using mock card numbers

When you test your integration with GOV.UK Pay by submitting a test transaction, you must use mock card numbers. You can enter any valid value for the other required information for that test transaction. For example, card expiry dates can be any date in the future. The following example mock card numbers only work with a test account. If you use mock card numbers after you have switched to a live account, your payment service provider will not authorise the payment.
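For illustration, a test transaction is usually produced by calling the GOV.UK Pay "create a payment" endpoint with a test API key and then completing the hosted payment page with a mock card number. The sketch below is an assumption-laden outline: the field values are placeholders, and the endpoint and field names should be checked against the current API reference before use.

// Sketch: create a payment against a *test* GOV.UK Pay account (Node 18+, built-in fetch).
const API_KEY = process.env.GOVUK_PAY_TEST_API_KEY; // a test key, never a live key

async function createTestPayment() {
  const res = await fetch("https://publicapi.payments.service.gov.uk/v1/payments", {
    method: "POST",
    headers: {
      "Authorization": `Bearer ${API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      amount: 1450,                                      // amount in pence
      reference: "test-reference-001",                   // placeholder reference
      description: "Test payment",                       // shown to the paying user
      return_url: "https://example.service.gov.uk/done", // placeholder return URL
    }),
  });
  const payment = await res.json();
  // The response links to the hosted payment page, where a mock card number is entered.
  console.log(payment.payment_id, payment._links && payment._links.next_url);
}

createTestPayment().catch(console.error);

Completing the journey at the returned next_url with one of the mock card numbers is what produces the test transaction you can then inspect in the transactions list.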
https://docs.payments.service.gov.uk/testing_govuk_pay/
2019-06-16T04:39:55
CC-MAIN-2019-26
1560627997731.69
[]
docs.payments.service.gov.uk
Default Layout

The default layout of the RadMap is represented by the UI controls that appear in it. If you want to remove one of them, you can easily set the respective property of the RadMap:
- Navigation control - NavigationVisibility
- Scale control - ScaleVisibility
- Command Bar control - CommandBarVisibility
- Mouse Location control - MouseLocationIndicatorVisibility
- Zoom Bar control - ZoomBarVisibility

In case you want to hide all of the controls and create your own custom layout, you don't have to set each of these properties; you can simply set the UseDefaultLayout property of the RadMap to False.

<telerik:RadMap x:Name="radMap" UseDefaultLayout="False">
</telerik:RadMap>
https://docs.telerik.com/devtools/wpf/controls/radmap/features/default-layout
2019-06-16T04:51:21
CC-MAIN-2019-26
1560627997731.69
[]
docs.telerik.com
The digital signature cache file is used during Manual Scan, Scheduled Scan, and Scan Now. Agents do not scan files whose signatures have been added to the digital signature cache file. The OfficeScan agent uses the same Digital Signature Pattern used for Behavior Monitoring to build the digital signature cache file. The Digital Signature Pattern contains a list of files that Trend Micro considers trustworthy and that can therefore be excluded from scans.

Behavior Monitoring is automatically disabled on Windows server platforms (64-bit support for Windows XP, 2003, and Vista without SP1 is not available). If the digital signature cache is enabled, OfficeScan agents on these platforms download the Digital Signature Pattern for use in the cache and do not download the other Behavior Monitoring components.

Agents build the digital signature cache file according to a schedule, which is configurable from the web console. Agents do this to:
- Add the signatures of new files that were introduced to the system since the last cache file was built
- Remove the signatures of files that have been modified or deleted from the system

During the cache building process, agents check the following folders for trustworthy files and then add the signatures of these files to the digital signature cache file:
- %PROGRAMFILES%
- %WINDIR%

The cache building process does not affect the endpoint's performance because agents use minimal system resources during the process. Agents are also able to resume a cache building task that was interrupted for some reason (for example, when the host machine is powered off or when a wireless endpoint's AC adapter is unplugged).
http://docs.trendmicro.com/en-us/enterprise/control-manager-60-service-pack-2/ch_policy_templates/osce_client/client_priv_sett_all/sc_pv_cache/sc_pv_cache_digsig.aspx
2019-06-16T04:38:46
CC-MAIN-2019-26
1560627997731.69
[]
docs.trendmicro.com
Follow the steps below to install the plugins required for the Conj - eCommerce WordPress Theme to work properly:
- Click the link to begin installing plugins.
- Check the checkbox next to each plugin's name to select all the plugins you want to install.
- Select the bulk action Install from the dropdown menu and then click the button to apply.

Installation may take a few minutes depending upon your service provider and internet connection. After you have installed all the plugins, return to the plugin installer page, select the plugins you installed, and apply the bulk action to activate. You should then see a confirmation notice that your plugins were activated successfully.

CONJ PowerPack (Bundled) – Envato purchase code validation is required! Note that the CONJ PowerPack extension is designed to work specifically with the Conj – eCommerce WordPress Theme and it will not function with any other theme.
https://docs.conj.ws/getting-started/installing-recommended-bundled-plugins
2019-06-16T04:29:41
CC-MAIN-2019-26
1560627997731.69
[]
docs.conj.ws
If an expense report should get the green light for reimbursement simply because it's already been final approved, read on! Keep in mind, ACH reimbursements are only available in the U.S.

Here's how it works:
- An expense report is final approved (hint: Concierge Report Approval makes this easy!) and queued for reimbursement.
- Set a maximum amount for which you'll allow reports to be automatically reimbursed. All report amounts over this limit will require a quick click for manual reimbursement.

As a group policy admin, you can find the Manual Reimbursement section of the policy editor by going to Settings > Policies > Group > [Policy Name] > Manual Reimbursement. In the example below, any report that is under $5,000 will be reimbursed automatically. Anything over this amount will require an extra click to "Reimburse via ACH".

For a live overview of the Policy Admin role, policy management, and administration, register for our free Admin Onboarding Webinar! Still looking for answers? Search our Community for more content on this topic!
https://docs.expensify.com/articles/768321-configuring-manual-ach-reimbursement-us-only
2019-06-16T05:21:21
CC-MAIN-2019-26
1560627997731.69
[array(['https://downloads.intercomcdn.com/i/o/84649944/3a187c063ec1f530a3ffa34f/image.png', None], dtype=object) ]
docs.expensify.com
Improve your content creation workflows and migrate your writing from GatherContent to Ghost. It's possible to migrate your content from GatherContent to Ghost in a few simple steps, using HTML exports and the Ghost editor. Here's how it works:

Generate your HTML code

To import a piece of content from GatherContent into a post or page in Ghost, grab the HTML code from GatherContent and paste it directly into the Ghost editor. Navigate to a project within your GatherContent account and find the particular piece of content you'd like to migrate. From that page, click the icon in the right-hand corner and locate the "View HTML" option:

Copy the raw HTML

This will open up a page containing the HTML for your piece of content. Use the "Copy raw HTML" option, which will copy the code to your clipboard.

Paste the HTML code into Ghost

In a new post in the Ghost editor, paste your code into a new HTML block:

Publish your post

And that's it! Ghost will automatically render your final content with all the formatting in place. Once you're happy with your custom metadata and post settings, you can hit publish or schedule the post.
https://docs.ghost.org/integrations/gathercontent/
2019-06-16T05:33:32
CC-MAIN-2019-26
1560627997731.69
[array(['https://docs.ghost.io/content/images/2019/04/View-HTML.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/04/Screenshot-2019-04-04-at-19.54.47.png', None], dtype=object) array(['https://docs.ghost.io/content/images/2019/01/Ghost-cards.png', None], dtype=object) ]
docs.ghost.org
Performance Analytics and Reporting

Create a line chart

Create a line chart to show how the value of one or more items changes over time. The report is generated.

Table 1. Line report options

Select conditions for filtering and ordering data. For example, you might create a condition that states Priority + less than + 3 - Moderate to have the report include only records with priorities of 2 - High and 1 - Critical.

Add "OR" Clause: Select a second condition that must be met if the first condition is not met. For example, select [Assignment Group] [is] [Database] to include records that are assigned to the Database group if the first condition is false. In Eureka, this field is only available after at least one filter condition has been created.

Line chart style options: Configure the look of your line chart.
https://docs.servicenow.com/bundle/helsinki-performance-analytics-and-reporting/page/use/reporting/concept/c_CreateLineCharts.html
2019-06-16T05:39:04
CC-MAIN-2019-26
1560627997731.69
[]
docs.servicenow.com
In SecurityCenter, organizational users can create custom reports or template-based reports, as described in Create a Custom Report or Create a Template Report.

Note: Custom PDF reports and template-based reports require that either the Oracle Java JRE or OpenJDK (along with its accompanying dependencies) is installed on the system hosting the SecurityCenter.

Custom CyberScope, DISA ASR, and DISA ARF reports are also available for specialized needs. An administrator user must enable report generation options before organizational users can generate reports with CyberScope, DISA ASR, or DISA ARF data. To manage reports on the Reports page, see Manage Reports.
https://docs.tenable.com/sccv/5_5/Content/Reports.htm
2019-06-16T05:31:40
CC-MAIN-2019-26
1560627997731.69
[]
docs.tenable.com
Setup Site for a Delivery Environment

In this section, we will be working in the delivery environment of Crafter CMS and describing how to set up your site for a delivery environment.

Setup Solr Core and Target

Crafter CMS out of the box has a script to help you create your Solr core and deployer target for the delivery environment. In the bin folder of your Crafter CMS delivery environment, we will use the script init-site.sh / init-site.bat to help us create the Solr core and the deployer target. From your command line, navigate to your {Crafter-CMS-delivery-environment-directory}/bin/ and execute the init-site script. The following output of init-site.sh -h explains how to use the script:

usage: init-site [options] [site] [repo-path]
 -a,--notification-addresses <addresses>  A comma-separated list of email addresses that should receive deployment notifications
 -b,--branch <branch>                     The name of the branch to clone (live by default)
 -f,--passphrase <passphrase>             The passphrase of the private key (when the key is passphrase protected)
 -h,--help                                Show usage information
 -k,--private-key <path>                  The path to the private key, if it's not under the default path (~/.ssh/id_rsa), when authenticating through SSH to the remote Git repo
 -p,--password <password>                 The password for the remote Git repo, when using basic authentication
 -u,--username <username>                 The username for the remote Git repo, when using basic authentication

EXAMPLES:

Init a site from the default repo path (../../crafter-authoring/data/repos/sites/{sitename}/published)
  init-site mysite
Init a site from a specific local repo path
  init-site mysite /opt/crafter/authoring/data/repos/sites/mysite/published
Init a site from a specific local repo path, cloning a specific branch of the repo
  init-site -b master mysite /opt/crafter/authoring/data/repos/sites/mysite/published
Init a site that is in a remote HTTPS repo with username/password authentication
  init-site -u jdoe -p jdoe1234 mysite
Init a site that is in a remote SSH repo with public/private key authentication (default private key path with no passphrase)
  init-site mysite ssh://myserver/opt/crater/sites/mysite
Init a site that is in a remote SSH repo with public/private key authentication (specific private key path with no passphrase)
  init-site -k ~/.ssh/jdoe_key mysite ssh://myserver/opt/crater/sites/mysite
Init a site that is in a remote SSH repo with public/private key authentication (specific private key path with passphrase)
  init-site -k ~/.ssh/jdoe_key -f jdoe123 mysite ssh://myserver/opt/crater/sites/mysite

We recommend using Secure Shell (SSH) with your site's published repo Git URL and, for authentication, using either username/password authentication or public/private key authentication. The SSH Git URL format is:

ssh://[user@]host.xz[:port]/path/to/repo/

where sections between [] are optional.

Example #1: ssh://server1.example.com/path/to/repo
Example #2: ssh://[email protected]:63022/path/to/repo

Note: When using SSH, your keys need to be generated using RSA as the algorithm.

If you are just working in another directory on disk for your delivery, you can just use the filesystem. When your repository is local, make sure to use the absolute path. Here is an example site's published repo Git URL when using a local repository:

/opt/crafter/authoring/data/repos/sites/mysite/published

Note: When using SSH, you might see com.jcraft.jsch.JSchException: UnknownHostKey errors in the logs. These errors are common in Ubuntu, and are caused by known host keys being stored in non-RSA format. Please follow the instructions in Debugging Deployer Issues under SSH Unknown Host to resolve them.

Git needs to be installed in authoring when using SSH to connect the delivery to the authoring. If you see the following error in the delivery Deployer:

Caused by: java.io.IOException: bash: git-upload-pack: command not found

you'll need to add the location of git (usually /usr/bin) to your non-login shell startup file (e.g. ~/.bashrc). To get the location of Git, run the following command:

which git-upload-pack

Viewing your Site for Testing

To test viewing your site, open a browser and type in the URL of your site. If you have multiple sites set up, to view a certain site, enter the following in your browser:

<your url>?crafterSite=<site name>

Here we have an example of a delivery setup in another directory on disk (local), where there are two sites, myawesomesite and helloworld. To set the site to the helloworld site, in your browser, type in <your url>?crafterSite=helloworld. To set the site to myawesomesite, type in <your url>?crafterSite=myawesomesite.
https://docs.craftercms.org/en/3.0/system-administrators/activities/delivery/setup-site-for-delivery.html
2019-06-16T05:52:45
CC-MAIN-2019-26
1560627997731.69
[array(['../../../_images/site-list.png', 'Setup Site for Delivery - Site List'], dtype=object) array(['../../../_images/site-hello.png', 'Setup Site for Delivery - Hello World Site'], dtype=object) array(['../../../_images/site-awesome.png', 'Setup Site for Delivery - My Awesome Site'], dtype=object)]
docs.craftercms.org
Crafter Social

Crafter Social is a multi-tenant, platform-independent user-generated content management system for handling all actions related to user-generated content (UGC), including the creation, updating, and moderation of the content. It is built on MongoDB and uses Crafter Profile for profile, tenant, and role management, and for authentication. Crafter Social is highly scalable in terms of both users and data, and it secures the generated content using Crafter Profile and the Crafter Profile Security library. As a headless, RESTful application, Crafter Social allows for loosely coupled integration with the vertical applications using it. Some examples of these vertical applications include:
- a products site, for example a books site with reviews and ratings,
- a ratings site, and
- a blogging application with threaded comments.

Source Code

Crafter Social's source code is managed in GitHub:

Java Doc

Crafter Social's Java Doc is here:

ReST API

Configuration

To configure Crafter Social, please see Crafter Social System Administration.
https://docs.craftercms.org/en/3.1/developers/projects/social/index.html
2019-06-16T05:51:16
CC-MAIN-2019-26
1560627997731.69
[]
docs.craftercms.org
Data model

eazyBI accounts

In the eazyBI plugin you can create one or several accounts. If one eazyBI account would contain a large number of Jira projects and a large number of Jira issues (more than 100,000), then it is recommended to have several eazyBI accounts and import a related set of Jira projects into each.

This is an example of an Issues cube that has Project, Priority, and Time dimensions and has the measures Issues created, Issues due, and Issues resolved. Each detailed cube “cell” contains the number of issues created, due, and resolved for a particular project, priority, and time period. It is easy to illustrate a cube with three dimensions, but a real eazyBI issues data cube will have many more dimensions and many more measures, which are described below.

Each dimension has either just a detailed level of all dimension members or a hierarchy with several levels. For example, the Project dimension has Project and Component levels. All measures are automatically aggregated (typically as a sum of detailed-level values) in upper hierarchy levels. If an issue can belong to several members of a level (for example, an issue can have several components), then the measure value at the upper hierarchy level will not be the plain sum of the lower-level values, because the issue is counted only once at the upper level.
https://docs.eazybi.com/eazybijira/getting-started/data-model
2019-06-16T05:20:46
CC-MAIN-2019-26
1560627997731.69
[]
docs.eazybi.com
data - Data and observations

Introduction

gammapy.data currently contains the EventList class, as well as classes for IACT data and observation handling.

Getting Started

You can use the EventList class to load IACT gamma-ray event lists (where filename points at an event list FITS file):

>>> from gammapy.data import EventList
>>> events = EventList.read(filename)

To load Fermi-LAT event lists, use the EventListLAT class:

>>> from gammapy.data import EventListLAT
>>> events = EventListLAT.read(filename)

The other main class in gammapy.data is the DataStore, which makes it easy to load IACT data. E.g. an alternative way to load the events for observation ID 23523 is this:

>>> from gammapy.data import DataStore
>>> data_store = DataStore.from_dir('$GAMMAPY_DATA/hess-dl3-dr1')
>>> events = data_store.obs(23523).events

Using gammapy.data

Gammapy tutorial notebooks that show examples using gammapy.data:
https://docs.gammapy.org/dev/data/index.html
2019-06-16T05:35:19
CC-MAIN-2019-26
1560627997731.69
[]
docs.gammapy.org
How can I edit my billing information or change the credit card data?
Written by Arpi Tamas. Updated over a week ago.

Subscription admin(s) can change billing information or credit card data on the workspace dashboard, under the Manage / Workspace Settings menu / Plans and billing tab.
http://docs.storiesonboard.com/articles/608658-how-can-i-edit-my-billing-information-or-change-the-credit-card-data
2019-06-16T05:26:34
CC-MAIN-2019-26
1560627997731.69
[]
docs.storiesonboard.com
Configuration and Data Files

Definitions

- A robot_file describes the available functionalities of a given class of robot (i.e., sensor and actuator propositions), as well as the motion control strategy that should be used for driving the robot from region to region (e.g., potential field + differential-drive feedback linearization).
- A spec_file contains a specification, written in Structured English, which describes how the robot should behave.
- An aut_file is generated automatically from a spec_file and contains an automaton whose execution will cause the robot to satisfy the original specification (under environmental assumptions).
- An experiment configuration file

Example

To perform an experiment, the following steps should serve as a reasonable guideline:
- Select a robot and import it into specEditor
http://ltlmop.readthedocs.io/en/latest/config_files.html
2017-06-22T14:12:56
CC-MAIN-2017-26
1498128319575.19
[]
ltlmop.readthedocs.io
You are viewing documentation for Kubernetes version: v1.23. Kubernetes v1.23 documentation is no longer actively maintained. The version you are currently viewing is a static snapshot. For up-to-date documentation, see the latest version.

Restrict a Container's Access to Resources with AppArmor

Kubernetes v1.4 [beta]

Objectives
- See an example of how to load a profile on a node
- Learn how to enforce the profile on a Pod
- Learn how to check that the profile is loaded
- See what happens when a profile is violated
- See what happens when a profile cannot be loaded

Before you begin

Verify that the AppArmor kernel module is enabled by checking the following file:

cat /sys/module/apparmor/parameters/enabled
Y

If the Kubelet contains AppArmor support (>= v1.4), it will refuse to run a Pod with AppArmor options if the kernel module is not enabled.

Container runtime supports AppArmor -- Currently all common Kubernetes-supported container runtimes should support AppArmor, like Docker, CRI-O or containerd. Please refer to the corresponding runtime documentation and verify that the cluster fulfills the requirements to use AppArmor.

Securing a Pod

Example

This example assumes you have already set up a cluster with AppArmor support. First, we need to load the profile we want to use onto our nodes; in this example it is the deny-write profile referenced by name in the Pod annotation below:

apiVersion: v1
kind: Pod
metadata:
  name: hello-apparmor
  annotations:
    # Tell Kubernetes to apply the AppArmor profile "k8s-apparmor-example-deny-write".
    # Note that this is ignored if the Kubernetes node is not running version 1.4 or greater.
    container.apparmor.security.beta.kubernetes.io/hello: localhost/k8s-apparmor-example-deny-write
spec:
  containers:
  - name: hello
    image: busybox:1.28
    command: [ "sh", "-c", "echo 'Hello AppArmor!' && sleep 1h" ]

A Pod that references a profile that has not been loaded on the node will be stuck in Pending, with a helpful error message:

Pod Cannot enforce AppArmor: profile "k8s-apparmor-example-allow-write" is not loaded.

An event was also recorded with the same message.

Administration

Setting up nodes with profiles

Kubernetes does not currently provide any native mechanisms for loading AppArmor profiles onto nodes. There are lots of ways to set up the profiles though, such as:
- Through a DaemonSet that runs a Pod on each node to ensure the correct profiles are loaded. An example implementation can be found here.
- At node initialization time, using your node initialization scripts (e.g. Salt, Ansible, etc.) or image.
- By copying the profiles to each node and loading them through SSH, as demonstrated in the Example.

Restricting profiles with the PodSecurityPolicy

Disabling AppArmor

If you do not want AppArmor to be available on your cluster, it can be disabled by a command-line flag:

--feature-gates=AppArmor=false

When disabled, any Pod that includes an AppArmor profile will fail validation with a "Forbidden" error.

Authoring Profiles

- bane is an AppArmor profile generator for Docker that uses a simplified profile language.

API Reference

Pod Annotation

Specifying the profile a container will run with:
- key: container.apparmor.security.beta.kubernetes.io/<container_name> where <container_name> matches the name of a container in the Pod. A separate profile can be specified for each container in the Pod.
- value: a profile reference, described below

Profile Reference

runtime/default: Refers to the default runtime profile.
- Equivalent to not specifying a profile (without a PodSecurityPolicy default), except it still requires AppArmor to be enabled.
- In practice, many container runtimes use the same OCI default profile, defined here:

localhost/<profile_name>: Refers to a profile loaded on the node (localhost) by name.
- The possible profile names are detailed in the core policy reference.

unconfined: This effectively disables AppArmor on the container.

Any other profile reference format is invalid.

PodSecurityPolicy Annotations

Specifying the default profile to apply to containers when none is provided:
- key: apparmor.security.beta.kubernetes.io/defaultProfileName
- value: a profile reference, described above

Specifying the list of profiles Pod containers are allowed to specify:
- key: apparmor.security.beta.kubernetes.io/allowedProfileNames
- value: a comma-separated list of profile references (described above)
- Although an escaped comma is a legal character in a profile name, it cannot be explicitly allowed here.

What's next

Additional resources:
https://v1-23.docs.kubernetes.io/docs/tutorials/security/apparmor/
2022-06-25T13:53:02
CC-MAIN-2022-27
1656103035636.10
[]
v1-23.docs.kubernetes.io
Ensure a log metric filter and alarm exist for root account use

Error: A log metric filter and alarm does not exist for root account use
Bridgecrew Policy ID: BC_AWS_MONITORING_3
Severity: HIGH

Description

Real-time monitoring of API calls can be achieved by directing CloudTrail Logs to CloudWatch Logs and establishing corresponding metric filters and alarms. Monitoring for root account logins will provide visibility into the use of a fully privileged account and an opportunity to reduce the use of it. We recommend you establish a log metric filter and alarm for root login attempts.

Create a metric filter for root account usage, using the <cloudtrail_log_group_name> taken from step 1:

aws logs put-metric-filter --log-group-name <cloudtrail_log_group_name> --filter-name <root_usage_metric> --metric-transformations metricName=<root_usage_metric>,metricNamespace='CISBenchmark',metricValue=1 --filter-pattern '{ $.userIdentity.type = "Root" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != "AwsServiceEvent" }'

Then create a CloudWatch alarm named <root_usage_alarm> based on the <root_usage_metric> metric (aws cloudwatch put-metric-alarm), or manage both resources with Terraform:

resource "aws_cloudwatch_metric_alarm" "root" {
  alarm_name          = "root_usage_alarm"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 1
  metric_name         = "root_usage_metric"
  namespace           = "CISBenchmark"
  period              = 300
  statistic           = "Sum"
  threshold           = 1
  alarm_actions       = [aws_sns_topic.trail-unauthorised.arn]
}

resource "aws_cloudwatch_log_metric_filter" "root" {
  name           = "root_usage_metric"
  pattern        = "{ $.userIdentity.type = \"Root\" && $.userIdentity.invokedBy NOT EXISTS && $.eventType != \"AwsServiceEvent\" }"
  log_group_name = var.log_group_name

  metric_transformation {
    name      = "root_usage_metric"
    namespace = "CISBenchmark"
    value     = "1"
  }
}
https://docs.bridgecrew.io/docs/monitoring_3
2022-06-25T14:18:31
CC-MAIN-2022-27
1656103035636.10
[]
docs.bridgecrew.io
Your search In authors or contributors Publication year Result 1 resource - Blended Learning and Academic Achievement: A Meta-Analysis Najafi, H., & Heidari, M. - 2018 - Quarterly Journal of Iranian Distance Education, 1(3), 39–48 The purpose of the present research was to conduct a meta-analysis of Iranian studies on blended learning and academic achievement. As the third generation of distance education, blended learning integrates the strengths of face-to-face and online approaches. The methodology involved estimating the effect size for the relationship between blended learning and academic achievement. Out of 231 studies conducted between 2010 and 2017, 20 experimental and quasi-experimental studies were selected... Last update from database: 25/06/2022, 11:56 (UTC)
https://docs.edtechhub.org/lib/?creator=%22Najafi%2C+Hossein%22&year=2000.2010&sort=hub_desc
2022-06-25T13:52:36
CC-MAIN-2022-27
1656103035636.10
[]
docs.edtechhub.org
Distributed snapshots for YCQL doesn't physically copy the data; instead, it creates hard links to all the relevant files. These links reside on the same storage volumes where the data itself is stored, which makes both backup and restore operations nearly instantaneous. Note on space consumptionThere are no technical limitations on how many snapshots you can create. However, increasing the number of snapshots stored also increases the amount of space required for the database. The actual overhead depends on the workload, but you can estimate it by running tests based on your applications. Create a snapshot The distributed snapshots feature allows you to back up a keyspace or a table, and then restore it in case of a software or operational error, with minimal RTO and overhead. To back up a keyspace with all its tables and indexes, create a snapshot using the create_keyspace_snapshot command: ./bin/yb-admin create_keyspace_snapshot my_keyspace If you want to back up a single table with its indexes, use the create_snapshot command instead: ./bin/yb-admin create_snapshot my_keyspace my_table When you run either of these commands, it returns a unique ID for the snapshot: Started snapshot creation: a9442525-c7a2-42c8-8d2e-658060028f0e You can then use this ID to check the status of the snapshot, delete it, or use it to restore the data. Both create_keyspace_snapshot and create_snapshot commands exit immediately, but the snapshot may take some time to complete. Before using the snapshot, verify its status with the list_snapshots command: ./bin/yb-admin list_snapshots This command lists the snapshots in the cluster, along with their states. Locate the ID of the new snapshot and make sure its state is COMPLETE: Snapshot UUID State a9442525-c7a2-42c8-8d2e-658060028f0e COMPLETE Delete a snapshot Snapshots never expire and are retained as long as the cluster exists. If you no longer need a snapshot, you can delete it with the delete_snapshot command: ./bin/yb-admin delete_snapshot a9442525-c7a2-42c8-8d2e-658060028f0e Restore a snapshot To restore the data backed up in one of the previously created snapshots, run the restore_snapshot command: ./bin/yb-admin restore_snapshot a9442525-c7a2-42c8-8d2e-658060028f0e This command rolls back the database to the state which it had when the snapshot was created. The restore happens in-place: in other words, it changes the state of the existing database within the same cluster. Move a snapshot to external storage Storing snapshots in-cluster is extremely efficient, but also comes with downsides. It can increase the cost of the cluster by inflating the space consumption on the storage volumes. Also, in-cluster snapshots don't protect you from a disaster scenario like filesystem corruption or a hardware failure. To mitigate these issues, consider storing backups outside of the cluster, in cheaper storage that is also geographically separated from the cluster. This approach helps you to reduce the cost, and also to restore your databases into a different cluster, potentially in a different location. To move a snapshot to external storage, gather all the relevant files from all the nodes, and copy them along with the additional metadata required for restores on a different cluster: Create an in-cluster snapshot. 
Create the snapshot metadata file by running the export_snapshotcommand and providing the ID of the snapshot: ./bin/yb-admin export_snapshot a9442525-c7a2-42c8-8d2e-658060028f0e my_keyspace.snapshot Copy the newly created snapshot metadata file ( my_keyspace.snapshot) to the external storage. Copy the data files for all the tablets. The file path structure is: <yb_data_dir>/node-<node_number>/disk-<disk_number>/yb-data/tserver/data/rocksdb/table-<table_id>/[tablet-<tablet_id>.snapshots]/<snapshot_id> <yb_data_dir>is the directory where YugabyteDB data is stored. (default= ~/yugabyte-data) <node_number>is used when multiple nodes are running on the same server (for testing, QA, and development). The default value is 1. <disk_number>when running yugabyte on multiple disks with the --fs_data_dirsflag. The default value is 1. <table_id>is the UUID of the table. You can get it from the http://<yb-master-ip>:7000/tablesurl in the Admin UI. <tablet_id>in each table there is a list of tablets. Each tablet has a <tablet_id>.snapshotsdirectory that you need to copy. <snapshot_id>there is a directory for each snapshot since you can have multiple completed snapshots on each server. This directory structure is specific to yb-ctl, which is a local testing tool. In practice, for each server, you will use the --fs_data_dirsflag, which is a comma-separated list of paths where to put the data (normally different paths should be on different disks). In this yb-ctlexample, these are the full paths up to the disk-x.To get a snapshot of a multi-node cluster, you need to go into each node and copy the folders of ONLY the leader tablets on that node. There is no need to keep a copy for each replica, since each tablet-replica has a copy of the same data. If you don't want to keep the in-cluster snapshot, it's now safe to delete it. Restore a snapshot from external storage To restore a snapshot that had been moved to external storage, do the following: Fetch the snapshot metadata file from the external storage and apply it by running the import_snapshotcommand: ./bin/yb-admin import_snapshot my_keyspace.snapshot my_keyspace The output will contain the mapping between the old tablet IDs and the new tablet IDs: Read snapshot meta file my_keyspace.snapshot Importing snapshot a9442525-c7a2-42c8-8d2e-658060028f0e a9442525-c7a2-42c8-8d2e-658060028f0e a9442525-c7a2-42c8-8d2e-658060028f0e Copy the tablet snapshots. Use the tablet mappings to copy the tablet snapshot files from the external location to appropriate location. yb-data/tserver/data/rocksdb/table-<tableid>/tablet-<tabletid>.snapshots In our example, it'll be: NoteFor each tablet, you need to copy the snapshots folder on all tablet peers and in any configured read replica cluster. - YugabyteDB Anywhere Automated BackupsYugabyteDB Anywhere provides the API and UI for Backup and Restore, which automates most of the steps described above. Consider using it for better usability, especially if you have many databases and snapshots to manage.
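As a shell sketch of the export-and-copy procedure above (the data directory, backup mount point, and single-node yb-ctl layout are assumptions; adapt the paths to your --fs_data_dirs settings, and on multi-node clusters copy only the leader tablets on each node):

SNAPSHOT_ID=a9442525-c7a2-42c8-8d2e-658060028f0e
BACKUP_DIR=/mnt/external_backup            # assumed external storage mount point

# 1. Export the snapshot metadata file and copy it to external storage.
./bin/yb-admin export_snapshot "$SNAPSHOT_ID" my_keyspace.snapshot
cp my_keyspace.snapshot "$BACKUP_DIR/"

# 2. Copy every tablet snapshot directory that belongs to this snapshot.
#    The path pattern matches the layout described above (yb-ctl defaults assumed).
#    On a real multi-node cluster, restrict this to leader tablets on the current node.
DATA_ROOT=~/yugabyte-data/node-1/disk-1/yb-data/tserver/data/rocksdb
find "$DATA_ROOT" -type d -path "*tablet-*.snapshots/$SNAPSHOT_ID" | while read -r dir; do
  # Preserve the relative path so the files can be restored to the right place later.
  rel="${dir#${DATA_ROOT}/}"
  mkdir -p "$BACKUP_DIR/$(dirname "$rel")"
  cp -a "$dir" "$BACKUP_DIR/$(dirname "$rel")/"
done

# 3. Optionally drop the in-cluster snapshot once the copy is verified.
# ./bin/yb-admin delete_snapshot "$SNAPSHOT_ID"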
https://docs.yugabyte.com/preview/manage/backup-restore/snapshots-ycql/
2022-06-25T13:11:16
CC-MAIN-2022-27
1656103035636.10
[]
docs.yugabyte.com
You can use the COPY command to load data in parallel from one or more remote hosts, such as Amazon EC2 instances or other computers. COPY connects to the remote hosts using SSH and runs commands on the remote hosts to generate text output. The remote host can be an Amazon EC2 Linux instance or another Unix or Linux computer configured to accept SSH connections. This guide assumes your remote host is an Amazon EC2 instance. Where the procedure is different for another computer, the guide will point out the difference. Amazon Redshift can connect to multiple hosts, and can open multiple SSH connections to each host. Amazon Redshifts sends a unique command through each connection to generate text output to the host's standard output, which Amazon Redshift then reads as it would a text file. Before you begin Before you begin, you should have the following in place: One or more host machines, such as Amazon EC2 instances, that you can connect to using SSH. Data sources on the hosts. You will provide commands that the Amazon Redshift cluster will run on the hosts to generate the text output. After the cluster connects to a host, the COPY command runs the commands, reads the text from the hosts' standard output, and loads the data in parallel into an Amazon Redshift table. The text output must be in a form that the COPY command can ingest. For more information, see Preparing your input data Access to the hosts from your computer. For an Amazon EC2 instance, you will use an SSH connection to access the host. You will need to access the host to add the Amazon Redshift cluster's public key to the host's authorized keys file. A running Amazon Redshift cluster. For information about how to launch a cluster, see Amazon Redshift Getting Started Guide. This section walks you through the process of loading data from remote hosts. The following sections provide the details you need to accomplish each step. Step 1: Retrieve the cluster public key and cluster node IP addresses The public key enables the Amazon Redshift cluster nodes to establish SSH connections to the remote hosts. You will use the IP address for each cluster node to configure the host security groups or firewall to permit access from your Amazon Redshift cluster using these IP addresses. Step 2: Add the Amazon Redshift cluster public key to the host's authorized keys file You add the Amazon Redshift cluster public key to the host's authorized keys file so that the host will recognize the Amazon Redshift cluster and accept the SSH connection. Step 3: Configure the host to accept all of the Amazon Redshift cluster's IP addresses For Amazon EC2 , modify the instance's security groups to add ingress rules to accept the Amazon Redshift IP addresses. For other hosts, modify the firewall so that your Amazon Redshift nodes are able to establish SSH connections to the remote host. Step 4: Get the public key for the host You can optionally specify that Amazon Redshift should use the public key to identify the host. You will need to locate the public key and copy the text into your manifest file. Step 5: Create a manifest file The manifest is a JSON-formatted text file with the details Amazon Redshift needs to connect to the hosts and fetch the data. Step 6: Upload the manifest file to an Amazon S3 bucket Amazon Redshift reads the manifest and uses that information to connect to the remote host. 
If the Amazon S3 bucket does not reside in the same Region as your Amazon Redshift cluster, you must use the REGION option to specify the Region in which the data is located. Step 7: Run the COPY command to load the data From an Amazon Redshift database, run the COPY command to load the data into an Amazon Redshift table.
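A rough end-to-end sketch of steps 5 through 7 follows (the bucket, host, IAM role, table, and delimiter are placeholders; confirm the manifest fields against the COPY from remote hosts documentation for your cluster version):

# Step 5: create the manifest describing the remote host and the command to run there.
cat > ssh_manifest.json <<'EOF'
{
  "entries": [
    {
      "endpoint": "ec2-203-0-113-10.compute-1.amazonaws.com",
      "command": "cat /home/ec2-user/data/sales.txt",
      "mandatory": true,
      "publickey": "<host_public_key>",
      "username": "ec2-user"
    }
  ]
}
EOF

# Step 6: upload the manifest to an S3 bucket (same Region as the cluster, or use REGION in COPY).
aws s3 cp ssh_manifest.json s3://my-redshift-bucket/ssh_manifest.json

# Step 7: run COPY from a SQL client connected to the cluster (psql shown here).
psql -h my-cluster.example.us-east-1.redshift.amazonaws.com -p 5439 -U admin -d dev <<'SQL'
COPY sales
FROM 's3://my-redshift-bucket/ssh_manifest.json'
IAM_ROLE 'arn:aws:iam::123456789012:role/MyRedshiftRole'
DELIMITER '|'
SSH;
SQL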
https://docs.amazonaws.cn/en_us/redshift/latest/dg/loading-data-from-remote-hosts.html
2022-06-25T13:55:31
CC-MAIN-2022-27
1656103035636.10
[]
docs.amazonaws.cn
ansible.builtin.runas become – Run As user Note This become plugin is part of ansible-core and included in all Ansible installations. In most cases, you can use the short plugin name runas. This become plugin allows your remote/login user to execute commands as another user via the Windows runas facility. Parameters Notes Note runas is really implemented in the PowerShell module handler and as such can only be used with winrm connections. This plugin ignores the ‘become_exe’ setting, as it uses an API and not an executable. The Secondary Logon service (seclogon) must be running to use runas. Collection links Issue Tracker Repository (Sources) Communication
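A minimal ad-hoc sketch of the plugin in use (the host group, module choice, and credentials are illustrative; in practice the become password would come from a prompt or a vault rather than the command line):

# Run a command on Windows hosts (WinRM connection assumed) as a different user via runas.
ansible windows_servers -m ansible.windows.win_whoami \
  --become --become-method runas --become-user Administrator \
  -e ansible_become_password='ExamplePassword123'   # illustrative only; prefer --ask-become-pass or a vault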
https://docs.ansible.com/ansible/latest/collections/ansible/builtin/runas_become.html
2022-06-25T13:07:54
CC-MAIN-2022-27
1656103035636.10
[]
docs.ansible.com
Your search In authors or contributors Result 1 resource - Embracing EdTech: four critical questions answered through a rapid prototype programme on teachers’ professional development (TPD) Sabbab, S., Shakil, A. A., & Alam, F. - 2021, July 14 - EdTech Hub Seventy-one teachers participating in the ‘Onneshon’ prototype were flooding the WhatsApp group with messages about the challenges and hiccups they were experiencing, either with watching the video content or with taking a quiz in the first two weeks. Problems were pouring in and so were solutions being innovated and offered through diverse channels. In the space of two weeks, the… Last update from database: 25/06/2022, 10:46 (UTC)
https://docs.edtechhub.org/lib/?creator=%22Shakil%2C+Abdullah+All%22&hubonly=IRGPEZK6&sort=date_asc
2022-06-25T13:19:59
CC-MAIN-2022-27
1656103035636.10
[]
docs.edtechhub.org
Native Android Only If your application uses any hardware-accelerated views, chances are that the captured screenshots will be blank for these views. In these cases, you can use this API to enable screenshots via media projection. Please note that enabling this will request permission from the user to record the screen at the beginning of the session. Requires API 21 In order to enable media projection, the minimum supported Android SDK version is 21. BugReporting.setScreenshotByMediaProjectionEnabled(boolean enabled)
https://docs.instabug.com/reference/take-screenshot-by-media-projection
2022-06-25T14:32:37
CC-MAIN-2022-27
1656103035636.10
[]
docs.instabug.com
"An attendance transaction does not exist in Human Resources" warning occurs during payrun build Applies to: Microsoft Dynamics GP Original KB number: 2892399 Symptoms The following warning message appears during the payrun build process in Microsoft Dynamics GP: An attendance transaction does not exist in Human Resources WARNING Cause This warning message means that the system has found a transaction(s) in the payroll batch that does not exist on the HR side. If the paycode is linked to a timecode, the transaction should exist in both the HR and payroll tables, and will happen automatically if you hand-key the transactions with a linked timecode/paycode. However, only the payroll table will be updated if the following happens: - The transactions were imported directly into the payroll batch. This process may update the Payroll table only and not HR. - The payroll batch was keyed first, and then the paycode was linked to a timecode in HR after the fact. Resolution If you need to update the Human Resources table with the transactions in the unposted payroll batch, there is a reconcile process in Human Resources that will do this: - Under Microsoft Dynamics GP, select Tools, point to Utilities, point to Human Resources and select Reconcile. - Mark the checkbox for Reconcile Attendance Transactions. - Select Process. - Rebuild the payrun batch and verify that the warning message is now gone. Note If you are importing in transactions directly to a payroll batch, you will need to run this reconcile utility each time before you build the payrun. There steps should be added to your payrun procedures if you import in payroll transactions on a regular basis. This reconcile option will only look at unposted batches in payroll. It will update the HR transaction table (TATX1030) with the corresponding transaction(s) from the Payroll Work table (UPR10302) where the paycode is linked to a timecode. It will not update the HR table with transactions that were already posted and in payroll history. So once the batch is posted in payroll, it is too late to run this reconcile utility. You can run this reconcile option as many times as you wish (before the batch is posted in payroll). It will skip over transactions that already exist on both the HR and Payroll tables, and update the HR table only for transactions where a matching record is not found. For example, if you import in 3 payroll batches to be included in the same checkrun, you can run this reconcile utility after each import for a total of three times, or you can import in all 3 batches and just run it one time after the last import, and before you build the payrun.
https://docs.microsoft.com/en-US/troubleshoot/dynamics/gp/an-attendance-transaction-does-not-exist-in-human-resources-warning
2022-06-25T15:41:47
CC-MAIN-2022-27
1656103035636.10
[]
docs.microsoft.com
LibreOffice » codemaker Generators for language-binding–specific representations of UNOIDL entities: cppumakergenerates header ( .hdland .hpp) files for the C++ UNO language binding javamakergenerates class files for the JVM language binding; which can be a source of confusion. Generated by Libreoffice CI on lilith.documentfoundation.org Last updated: 2022-06-21 20:14:08 | Privacy Policy | Impressum (Legal Info)
https://docs.libreoffice.org/codemaker.html
2022-06-25T13:56:18
CC-MAIN-2022-27
1656103035636.10
[]
docs.libreoffice.org
DMA_Init_TypeDef Struct ReferenceEMLIB > DMA DMA initialization structure. Definition at line 303 of file em_dma.h. #include < em_dma.h> Field Documentation Pointer to the control block in memory holding descriptors (channel control data structures). This memory must be properly aligned at a 256 bytes, i.e., the 8 least significant bits must be zero. Refer to the reference manual, DMA chapter for more details. It is possible to provide a smaller memory block, only covering those channels actually used, if not all available channels are used. For instance, if only using 4 channels (0-3), both primary and alternate structures, then only 16*2*4 = 128 bytes must be provided. However, this implementation has no check if later exceeding such a limit by configuring for instance channel 4, in which case memory overwrite of some other data will occur. Definition at line 329 of file em_dma.h. Referenced by DMA_Init(). HPROT signal state when accessing the primary/alternate descriptors. Normally set to 0 if protection is not an issue. The following bits are available: - bit 0 - HPROT[1] control for descriptor accesses (i.e., when the DMA controller accesses the channel control block itself), privileged/non-privileged access. Definition at line 312 of file em_dma.h. Referenced by DMA_Init(). The documentation for this struct was generated from the following file: - C:/repos/super_h1/platform/emlib/inc/ em_dma.h
https://docs.silabs.com/mcu/5.7/efm32wg/structDMA-Init-TypeDef
2022-06-25T13:40:35
CC-MAIN-2022-27
1656103035636.10
[]
docs.silabs.com
A storage policy is a group of parameters that define how to store VM volumes: a tier, a failure domain, and a redundancy mode. A storage policy can also be used to limit the bandwidth or IOPS of the volume. These limits help customize the allocation of cluster resources between the virtual machines. It is also needed to provide predictable performance levels for virtual machine disks. When you deploy the compute cluster, a default storage policy is created that enforces the best replication scheme allowed by the number of nodes in the storage cluster. The default policy cannot be deleted or renamed. By default, it is applied to uploaded images and base volumes created from these images. A base volume is created from a source image when you deploy a VM. It is not used directly by a VM, but all volumes that a VM actually uses (which are listed on the Volumes tab) are in fact deltas (differences) from the base volume. It is important to keep base volumes available as VM volumes depend on them. For that, you need the default storage policy to enforce multiple replicas. If the storage cluster does not have enough nodes to enable multiple replicas (not recommended), you can adjust the default storage policy once you add more nodes to the storage cluster. It will be applied to the images and base volumes that were created with the default policy. To apply custom redundancy schemes to VM volumes, you can create, edit, or clone storage policies for them. Limitations Prerequisites To create a storage policy Admin panel In the Create storage policy window, specify a policy name and select redundancy settings. Enable IOPS limit or Bandwidth limit to set the corresponding limits on the volume. To edit a storage policy Keep in mind, that changes to the storage policy will affect the redundancy and performance of all the volumes covered by it. To clone a storage policy Modify the existing parameters or just leave them as they are, and then click Clone. To remove a storage policy Last build date: Thursday, March 3, 2022 Administrator Guide for Virtuozzo Hybrid Infrastructure5.0. Copyright © 2016-2022 Virtuozzo International GmbH
https://docs.virtuozzo.com/virtuozzo_hybrid_infrastructure_5_0_admins_guide/managing-storage-policies.html
2022-06-25T14:48:40
CC-MAIN-2022-27
1656103035636.10
[]
docs.virtuozzo.com
The Versal ACAP programmable logic (PL) comprises configurable logic blocks (CLBs), internal memory, and DSP engines. Every CLB contains 64 flip-flops and 32 look-up tables (LUTs). Half of the CLB LUTs can be configured as a 64-bit RAM, as a 32-bit shift register (SRL32), or as two 16-bit shift registers (SRL16). In addition to the LUTs and flip-flops, the CLB contains the following: - Carry lookahead logic for implementing arithmetic functions or wide logic functions - Dedicated, internal connections to create fast LUT cascades without external routing This enables a flexible carry logic structure that allows a carry chain to start at any bit in the chain. In addition to the distributed RAM (64-bit each) capability in the CLB, there are dedicated blocks for optimally building memory arrays in the design: - Accelerator RAM (4 MB) (available in some Versal devices only) - Block RAM (36 Kb each) where each port can be configured as 4Kx9, 2Kx18, 1Kx36, or 512x72 in simple dual-port mode - UltraRAM (288 Kb each) where each port can be configured as 32Kx9, 16Kx18, 8Kx36, or 4Kx72 Versal devices also include many low-power DSP Engines, combining high speed with small size while retaining system design flexibility. The DSP engines can be configured in various modes to better match the application needs: - 27×24-bit twos complement multiplier and a 58-bit accumulator - Three element vector/INT8 dot product - Complex 18bx18b multiplier - Single precision floating point For more information on PL resources, see the Versal ACAP Configurable Logic Block Architecture Manual (AM005), Versal ACAP Memory Resources Architecture Manual (AM007), and Versal ACAP DSP Engine Architecture Manual (AM004).
https://docs.xilinx.com/r/2021.2-English/ug1273-versal-acap-design/Programmable-Logic
2022-06-25T13:12:32
CC-MAIN-2022-27
1656103035636.10
[]
docs.xilinx.com
Representing different kinds of graph in a SQL database Representing a graph in a SQL database Look at the diagram of an example undirected cyclic graph on this page's parent page. You can see that it has seven edges thus: (n1-n2), (n2-n3), (n2-n4), (n3-n5), (n4-n5), (n4-n6), (n5-n6) The minimal representation in a SQL database uses just a table of table of edges, created thus: cr-edges.sql drop table if exists edges cascade; create table edges( node_1 text not null, node_2 text not null, constraint edges_pk primary key(node_1, node_2)); A typical special case, like computing Bacon Numbers, would specify "node_1" and "node_2" with names that are specific to the use case (like "actor" and "co_actor") whose values are the primary keys in a separate "actors" table. This implies foreign key constraints, of course. The "actors" table would have other columns (like "given_name", "family_name", and so on). Similarly, the "edges" table would have at least one other column: the array of movies that the two connected actors have been in. How to represent the undirected nature of the edges There are two obvious design choices. One design populates the "edges" table sparsely by recording each edge just once. For example, if node "a" is recorded in column "node_1", then it must not be recorded in column "node_2". Then you must understand that though the column names "node_1" and "node_2" imply a direction, this is not the case. Though this design has the appeal of avoiding denormalization, it makes the SQL design tricky. When the traversal arrives at a node and needs to find the nodes at the other ends of the edges that have the present node at one of their ends, you don't know whether to restrict on "node_1" and read "node_2" or to restrict on "node_2" and read "node_1" and so you must try both. The other design records each edge twice. For example, the edge between nodes "a" and "b" will be recorded both as (node_1 = 'a', node_2 = 'b') and as (node_1 = 'b', node_2 = 'a'). This denormalization makes the SQL straightforward—and is therefore preferred. An implementation of the design that represents each edge just once is described for completeness. But the approach that leads to simpler SQL, the design that represents each edge twice—once in each direction is described first. The representations for the more specialized kinds of graph—directed cyclic graph, directed acyclic graph, and rooted tree—all use the same table structure but with appropriately different rules to which the content must conform.
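To make the preferred store-each-edge-twice design concrete, the following sketch loads the seven example edges in both directions and shows the resulting one-sided neighbour query (the ysqlsh invocation and connection defaults are assumptions; the SQL follows the edges table created above):

# Populate the edges table with each undirected edge recorded once per direction.
./bin/ysqlsh <<'SQL'
insert into edges(node_1, node_2) values
  ('n1','n2'), ('n2','n1'),
  ('n2','n3'), ('n3','n2'),
  ('n2','n4'), ('n4','n2'),
  ('n3','n5'), ('n5','n3'),
  ('n4','n5'), ('n5','n4'),
  ('n4','n6'), ('n6','n4'),
  ('n5','n6'), ('n6','n5');

-- Finding the neighbours of a node is now a single restriction on node_1.
select node_2 from edges where node_1 = 'n2' order by node_2;
SQL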
https://docs.yugabyte.com/preview/api/ysql/the-sql-language/with-clause/traversing-general-graphs/graph-representation/
2022-06-25T14:55:56
CC-MAIN-2022-27
1656103035636.10
[]
docs.yugabyte.com
Trailer Burial Skip to tickets Directed By: Emilija Škarnulyte The Changing Face of Europe 2022 Lithuania, Norway Lithuanian Full Subtitles North American Premiere 60 minutes Uranium was produced more than six billion years ago when a star in our galaxy burst open. One of the building blocks of life, it takes hundreds of thousands of years to decay once extracted from the ground. So when the Ignalina nuclear reactor in Lithuania—the sister reactor to Chernobyl—is decommissioned, the people who live nearby are faced with a cleanup project that will take significantly longer than their lifetimes to complete. In this hypnotizing reflection on radioactive waste, director Emilija Škarnulyte traverses the eerie landscape of an industrial gravesite built to entomb nuclear ruins deep below the surface of the Earth. Like an archaeologist travelling through space and time, she documents the process in close detail. A profoundly meditative watch that considers the serpentine life cycle of industrial technology and the strikingly unnatural process of transforming nuclear waste back to its original form. Vivian Belik Read less Credits Director(s) Emilija Škarnulyte Producer(s) Elisa Fernanda Pirir Dagne Vildžiunaite Writer(s) Emilija Škarnulyte Editor(s) Darius Šilenas Mykolas Žukauskas Cinematography Audrius Budrys Eitvydas Doškus Adam Khalil Emilija Škarnulyte Composer Gaute Barlindhaug Sound Vytis Puronas
https://hotdocs.ca/whats-on/hot-docs-festival/films/2022/burial
2022-06-25T13:18:47
CC-MAIN-2022-27
1656103035636.10
[]
hotdocs.ca
14. The LFC Process 14.1. Overview kea-lfc is a service process that removes redundant information from the files used to provide persistent storage for the memfile database backend. This service is written to run as a standalone process. While kea-lfc can be started externally, there is usually no need to do so. kea-lfc is run on a periodic basis by the Kea DHCP servers. The process operates on a set of files, using them to receive input and output of the lease entries and to indicate what stage the process is in, in the event of an interruption. Currently the caller must supply names for all of the files. 14.2. Command-Line Options kea-lfc is run as follows: kea-lfc [-4 | -6] -c config-file -p pid-file -x previous-file -i copy-file -o output-file -f finish-file The argument -4 or -6 selects the protocol version of the lease files. The -c argument specifies the configuration file. This is required, but is not currently used by the process. The -p argument specifies the PID file. When the kea-lfc process starts, it attempts to determine whether another instance of the process is already running by examining the PID file. If one is already running, the new process is terminated; if one is not running, Kea writes its PID into the PID file. The other filenames specify where the kea-lfc process should look for input, write its output, and perform its bookkeeping: previous — when kea-lfc starts, this is the result of any previous run of kea-lfc. When kea-lfc finishes, it is the result of this run. If kea-lfc is interrupted before completing, this file may not exist. input — before the DHCP server invokes kea-lfc, it moves the current lease file here and then calls kea-lfc with this file. output — this is the temporary file where kea-lfc writes the leases. Once the file has finished writing, it is moved to the finish file (see below). finish — this is another temporary file kea-lfc uses for bookkeeping. When kea-lfc completes writing the output file, it moves the contents to the file of this name. After kea-lfc finishes deleting the other files (previous and input), it moves this file to the previous lease file. By moving the files in this fashion, kea-lfc and the DHCP server processes can determine the correct file to use even if one of the processes is interrupted before completing its task. There are several additional arguments, mostly for debugging purposes. -d sets the logging level to debug. -v and -V print out version stamps, with -V providing a longer form. -h prints out the usage string.
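For illustration only, a manual invocation with typical memfile lease-file names might look like the following; the exact paths are assumptions, since in normal operation the DHCP server derives them from the configured lease file and invokes kea-lfc itself:

# Clean a DHCPv4 memfile lease file by hand (paths are illustrative).
kea-lfc -4 \
  -c /etc/kea/kea-dhcp4.conf \
  -p /var/run/kea/kea-lfc.kea-leases4.pid \
  -x /var/lib/kea/kea-leases4.csv.2 \
  -i /var/lib/kea/kea-leases4.csv.1 \
  -o /var/lib/kea/kea-leases4.csv.output \
  -f /var/lib/kea/kea-leases4.csv.completed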
https://kea.readthedocs.io/en/kea-2.1.3/arm/lfc.html
2022-06-25T14:54:41
CC-MAIN-2022-27
1656103035636.10
[]
kea.readthedocs.io
Enabling NodeSync for keyspaces and tables About this task Enable the NodeSync Service for OpsCenter Monitoring. Designate which keyspaces and tables have NodeSync active for viewing NodeSync status. NodeSync and incremental NodeSync are enabled by default; however, keyspaces and their tables must be explicitly opted in. Follow the steps in this procedure to enable NodeSync monitoring for eligible keyspaces or individual tables.. Prerequisites NodeSync is enabled in DSE by default. If NodeSync was disabled, enable NodeSync Service. Create tables for keyspaces you intend to monitor. You cannot enable NodeSync on empty keyspaces in OpsCenter. If OpsCenter authentication is enabled, set user permissions for the NodeSync Service for the applicable user roles and clusters. Procedure Select cluster name > Services. Select Details for the NodeSync Service. Select the Settings tab. Select the keyspaces and tables to configure NodeSync for your environment, and then select Configure. Use the toggle buttons to enable NodeSync and incremental NodeSync for the selected keyspaces and tables. Enabling NodeSync sends an ALTER TABLE statement that is effective immediately; no DSE restart is necessary. What’s next Once NodeSync is enabled for your environment, monitor NodeSync operations for keyspace and tables by viewing NodeSync status.
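For reference, the ALTER TABLE statement that OpsCenter sends when you toggle NodeSync can also be issued manually from cqlsh; the keyspace and table names below are placeholders, and the option syntax should be verified against the NodeSync documentation for your DSE version:

# Enable NodeSync on one table directly (equivalent in effect to toggling it in the Settings tab).
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH nodesync = {'enabled': 'true'};"

# Disable it again if needed.
cqlsh -e "ALTER TABLE my_keyspace.my_table WITH nodesync = {'enabled': 'false'};"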
https://docs.datastax.com:443/en/opscenter/docs/6.8/online_help/services/enableNodeSyncOpsc.html
2022-06-25T13:38:12
CC-MAIN-2022-27
1656103035636.10
[]
docs.datastax.com:443
Language syntax Basic types Integers: 10, -42, +1337 Strings: "Hello, world!" Booleans: true, false Maths Basic maths are supported so you can do arithmetic or boolean logic: 1 + 1 b * b - 4 * a * c (1 + 2) * 3 true || false true || false && myVar x % 2 == 0 String concatenation Simply «add» any value to a string to concatenate it: "Hello" + "world!" --> "Helloworld!" "2 + 2 is " + 4 --> "2 + 2 is 4" "This is " + false --> "This is false" Conditional execution if (x == y) { ... } if (x < y) { ... } else { ... } Loops for (i in 0..10) { ... } Iterates in a range from 0 to 10 (inclusive). You can also iterate on decreasing values: for (i in 10..0) { ... } Iterates in a range from 10 to 0 (inclusive). You can use «complex» expressions for the range bounds: for (i in start..(a + b)) { ... } Printing data display "hello"; display (1 + 2) * 3; display true || false && true; Prints three lines: hello 9 true To print values without new lines, you can use: put "hello" put 42 Prints: hello42 Defining functions Functions can be defined following this pattern: fun <function name> (<arg 1>, ..., <arg N>) = <expression> | <statements> Example: fun mul(x, y) = x * y; fun print(s) = { display s }; fun spawnActor(x) = { a = create Actor(x); return a } Functions can be called as part of expressions or with the call statement: fun print(value, printer) = send [value] to printer; Printer () [value] = display value; p = create Printer (); v = F(x, y) * H(y) + x; call print(v, p) Actor behavior definition An actor behavior definition follows this pattern: <actor type> (<state var1>, <state varN>) [<message item1>, <message itemN>] = <statements> A behavior is executed when a message matching a pattern is received by an actor. Example: MyActor () [item1] = display item1; MyActor (State) [item1, item2] = { display item1; display item2; }; Tagged messages To enable calling the right behavior of an actor with multiple behaviors of the same arity, one can tag the messages with literal values in the patterns. Example: MyActor () ["display-one"] = display 1; MyActor () ["display-two"] = display 2; These behaviors will be executed on receiving a message containing either "display-one" or "display-two". Changing behavior An actor can change its type (and so its behavior) based on a received message: Empty () ["set", x] = become Full (x); Full (X) ["get", sender] = { send [X] to sender; become Empty () }; Sending messages send [42] to anActor; send ["hello", 1337] to anotherActor; Actor self reference In a behavior, the self variable is a reference to the actor executing the code. Creating actors instances Instantiating an actor from a given type with a given state (e.g. MyActor (42)) is done like so: myInstantiatedActor = create MyActor (42);
https://actorlang.readthedocs.io/en/1.1.0/syntax.html
2022-06-25T14:40:36
CC-MAIN-2022-27
1656103035636.10
[]
actorlang.readthedocs.io
Ethernet_b239 (community library) Summary Arduino port of Ethernet library Example Build Testing Device OS Version: This table is generated from an automated build. Success only indicates that the code compiled successfully. Library Read Me This content is provided by the library maintainer and has not been validated or approved. About This repo serves as the specfication for what constitutes a valid Spark firmware library and an actual example library you can use as a reference when writing your own libraries. Spark Libraries can be used in the Spark IDE. Soon you'll also be able to use them with the Spark CLI and when compiling firmware locally with Spark core-firmware. Table of Contents This README describes how to create libraries as well as the Spark Library Spec. The other files constitute the Spark Library itself: - file, class, and function naming conventions - example apps that illustrate library in action - recommended approaches for test-driven embedded development - metadata to set authors, license, official names Getting Started 1. Define a temporary function to create library boilerplate Copy and paste this into a bash or zsh shell or .profile file. create_spark_library() { LIB_NAME="$1" ### Make sure a library name was passed if [ -z "{$LIB_NAME}" ]; then echo "Please provide a library name" return fi echo "Creating $LIB_NAME" ### Create the directory if it doesn't exist if [ ! -d "$LIB_NAME" ]; then echo " ==> Creating ${LIB_NAME} directory" mkdir $LIB_NAME fi ### CD to the directory cd $LIB_NAME ### Create the spark.json if it doesn't exist. if [ ! -f "spark.json" ]; then echo " ==> Creating spark.json file" cat <<EOS > spark.json { "name": "${LIB_NAME}", "version": "0.0.1", "author": "Someone <[email protected]>", "license": "Choose a license", "description": "Briefly describe this library" } EOS fi ### Create the README file if it doesn't exist if test -z "$(find ./ -maxdepth 1 -iname 'README*' -print -quit)"; then echo " ==> Creating README.md" cat <<EOS > README.md TODO: Describe your library and how to run the examples EOS fi ### Create an empty license file if none exists if test -z "$(find ./ -maxdepth 1 -iname 'LICENSE*' -print -quit)"; then echo " ==> Creating LICENSE" touch LICENSE fi ### Create the firmware/examples directory if it doesn't exist if [ ! -d "firmware/examples" ]; then echo " ==> Creating firmware and firmware/examples directories" mkdir -p firmware/examples fi ### Create the firmware .h file if it doesn't exist if [ ! -f "firmware/${LIB_NAME}.h" ]; then echo " ==> Creating firmware/${LIB_NAME}.h" touch firmware/${LIB_NAME}.h fi ### Create the firmware .cpp file if it doesn't exist if [ ! -f "firmware/${LIB_NAME}.cpp" ]; then echo " ==> Creating firmware/${LIB_NAME}.cpp" cat <<EOS > firmware/${LIB_NAME}.cpp #include "${LIB_NAME}.h" EOS fi ### Create an empty example file if none exists if test -z "$(find ./firmware/examples -maxdepth 1 -iname '*' -print -quit)"; then echo " ==> Creating firmware/examples/example.cpp" cat <<EOS > firmware/examples/example.cpp #include "${LIB_NAME}/${LIB_NAME}.h" // TODO write code that illustrates the best parts of what your library can do void setup { } void loop { } EOS fi ### Initialize the git repo if it's not already one if [ ! -d ".git" ]; then GIT=`git init` echo " ==> ${GIT}" fi echo "Creation of ${LIB_NAME} complete!" echo "Check out for more details" } 2. Call the function create_spark_library this-is-my-library-name - Replace this-is-my-library-namewith the actual lib name. 
Your library's name should be lower-case, dash-separated. 3. Edit the spark.json firmware .h and .cpp files - Use this repo as your guide to good library conventions. 4. Create a GitHub repo and push to it 5. Validate and publish via the Spark IDE To validate, import, and publish the library, jump into the IDE and click the "Add Library" button. Getting Support - Check out the libraries category on the Spark community site and post a thread there! - To file a bug; create a GitHub issue on this repo. Be sure to include details about how to replicate it. The Spark Library Spec A Spark firmware library consists of: - a GitHub REPO with a public clone url - a JSON manifest ( spark.json) at the root of the repo - a bunch of files and directories at predictable locations (as illustrated here) More specifically, the collection of files comprising a Spark Library include the following: Supporting Files - a spark.jsonmeta data file at the root of the library dir, very similar to NPM's package.json. (required) The content of this file is validated via this JSON Schema. a README.mdthat should provide one or more of the following sections - About: An overview of the library; purpose, and description of dominant use cases. - Example Usage: A simple snippet of code that illustrates the coolest part about your library. - Recommended Components: Description and links to example components that can be used with the library. - _Circuit Diagram: A schematic and breadboard view of how to wire up components with the library. Learning Activities: Proposed challenges to do more sophisticated things or hacks with the library. a docdirectory of diagrams or other supporting documentation linked to from the README.md Firmware - a firmwarefolder containing code that will compile and execute on a Spark device. This folder contains: - A bunch of .h, .cpp, and .cfiles constituting the header and source code of the library. - The main library header file, intended to be included by users - MUST be named the same as the "name" key in the spark.json+ a .hextension. So if nameis uber-library-example, then there should be a uber-library-example.hfile in this folder. Other .hfiles, can exist, but this is the only one that is required. - SHOULD define a C++ style namespace in upper camel case style from the name (i.e. uber-library-example -> UberLibraryExample) - The main definition file, providing the bulk of the libraries public facing functionality - MUST be named like the header file, but with a .cppextension. (uber-library-example.cpp) - SHOULD encapsulate all code inside a C++ style namespace in upper camel case style (i.e. UberLibraryExample) - Other optional .hfiles, when included in a user's app, will be available for inclusion in the Web IDE via #include "uber-library-example/SOME_FILE_NAME.h". - Other optional .cppfiles will be compiled by the Web IDE when the library is included in an app (and use arm-none-eabi-g++to build). - Other optional .cfiles will be compiled by the Web IDE when the library is included in an app (and use arm-none-eabi-gccto build). - An examplessub-folder containing one or more flashable example firmware .inoor .cppapplications. - Each example file should be named descriptively and indicate what aspect of the library it illustrates. For example, a JSON library might have an example file like parse-json-and-output-to-serial.cpp. 
- A testsub-folder containing any associated tests Contributing This repo is meant to serve as a place to consolidate insights from conversations had about libraries on the Spark community site, GitHub, or elsewhere on the web. "Proposals" to change the spec are pull requests that both define the conventions in the README AND illustrate them in underlying code. If something doesn't seem right, start a community thread or issue pull requests to stir up the conversation about how it ought to be! Browse Library Files
https://docs.particle.io/reference/device-os/libraries/e/Ethernet_b239/
2022-06-25T14:16:50
CC-MAIN-2022-27
1656103035636.10
[]
docs.particle.io
Virtuozzo Hybrid Infrastructure supports distributed virtual switching on the basis of Open vSwitch. The latter runs on every compute node and forwards network traffic between virtual machines on the same node, and between virtual machines and infrastructure networks. Distributed virtual switching provides centralized management and monitoring of virtual network configuration across all nodes in the compute cluster. Distributed virtual routing used for virtual network connectivity enables placing virtual routers on compute nodes and routing VM traffic directly from hosting nodes. In the DNAT scenario, a floating IP is assigned directly to the VM’s network interface. If SNAT is used, then traffic is routed via management nodes. Physical networks are connected to infrastructure networks on Layer 2. The physical representation of physical network connectivity can be shown as follows: On the figure above: Logically, the physical networking scheme can be represented as follows: VXLAN technology used for virtual networks allows creating logical L2 networks in L3 networks by encapsulating (tunneling) Ethernet frames over UDP packets. The physical representation of virtual network connectivity can be shown as follows: Logically, the virtual networking scheme can be represented as follows: Last build date: Monday, December 27, 2021 Administrator Guide for Virtuozzo Hybrid Infrastructure4.7. Copyright © 2016-2021 Virtuozzo International GmbH
https://docs.virtuozzo.com/virtuozzo_hybrid_infrastructure_4_7_admins_guide/compute-network-architecture.html
2022-06-25T13:37:26
CC-MAIN-2022-27
1656103035636.10
[]
docs.virtuozzo.com
As opposed to selling courses, if you’re looking to sell memberships and control student access, you’ll need a membership plugin. Out of the many membership plugins available, the one I’d recommend is Paid Membership Pro. The steps to integrate LearnDash with a membership plugin are the same as integrating it with an e-commerce plugin: - You have to install the membership plugin (Paid Membership Pro), - and then the integration plugin that connects it with LearnDash. (A command-line sketch of these installs follows below.)
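If you prefer the command line over the WordPress admin screens, a WP-CLI sketch of the two installs might look like this (the plugin slug and zip path are assumptions; LearnDash integration add-ons are usually distributed as premium zips rather than from the wordpress.org directory):

# Install and activate the membership plugin from the wordpress.org directory (slug assumed).
wp plugin install paid-memberships-pro --activate

# Premium integration add-ons are usually installed from a downloaded zip instead (path is a placeholder).
wp plugin install /path/to/learndash-integration-addon.zip --activate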
https://docs.wbcomdesigns.com/docs/learndash/using-a-membership-plugin/
2022-06-25T13:18:03
CC-MAIN-2022-27
1656103035636.10
[]
docs.wbcomdesigns.com
Security architecture YugabyteDB Managed is a fully managed YugabyteDB-as-a-Service that allows you to run YugabyteDB clusters on public cloud providers such as Google Cloud Platform (GCP) and Amazon Web Services (AWS), with more public cloud provider options coming soon. YugabyteDB Managed runs on top of YugabyteDB Anywhere. It is responsible for creating and managing customer YugabyteDB clusters deployed on cloud provider infrastructure. All customer clusters are firewalled from each other. Outside connections are also firewalled according to the IP allow list rules that you assign to your clusters. You can also connect Dedicated (that is, not Sandbox) clusters to virtual private clouds (VPCs) on the public cloud provider of your choice (subject to the IP allow list rules). Infrastructure security YugabyteDB Managed uses both encryption in transit and encryption at rest to protect clusters and cloud infrastructure. All communication between YugabyteDB Managed architecture domains is encrypted in transit using TLS. Likewise, all communication between clients or applications and clusters is encrypted in transit. Every cluster has its own certificates, generated when the cluster is created and signed by the Yugabyte internal PKI. Root and intermediate certificates are not extractable from the hardware security appliances. Data at rest, including clusters and backups, is AES-256 encrypted using native cloud provider technologies - S3 and EBS volume encryption for AWS, and server-side and persistent disk encryption for GCP. Encryption keys are managed by the cloud provider and anchored by hardware security appliances. Customers can enable column-level encryption as an additional security control. YugabyteDB Managed provides DDoS and application layer protection, and automatically blocks network protocol and volumetric DDoS attacks. Securing database clusters by default Yugabyte secures YugabyteDB Managed databases using the same default security measures that we recommend to our customers to secure their own YugabyteDB installations, including: - authentication - role-based access control - dedicated users - limited network exposure - encryption in transit - encryption at rest Data privacy and compliance For information on data privacy and compliance, refer to the Yugabyte DPA. Auditing Yugabyte supports audit logging at the account and database level. For information on database audit logging, refer to Configure Audit Logging.
https://docs.yugabyte.com/preview/yugabyte-cloud/cloud-security/cloud-security-features/
2022-06-25T14:30:50
CC-MAIN-2022-27
1656103035636.10
[array(['/images/yb-cloud/cloud-security-diagram.png', 'YugabyteDB Managed high-level architecture'], dtype=object)]
docs.yugabyte.com
Trailer Still Working 9 to 5 Skip to tickets Directed By: Camille Hardman, Gary Lane Special Presentations 2022 USA English Closed Captions, Open Captions International Premiere 97 minutes Showing with closed or open captions In the 1970s, Jane Fonda tried shopping around a movie about women in the workplace, but no studio wanted it. Blacklisted for her political views, Fonda couldn't get the gatekeepers interested in a woman-led labour story until it became a satirical comedy with two more Hollywood legends attached. Forty years after 9 to 5 became the highest-grossing comedy of 1980, Fonda, Dolly Parton, Lily Tomlin and Dabney Coleman reunite to share what it took to get the cult classic made. Rooted in the activism of the 9to5, National Association of Working Women, the film was such a risky undertaking that Tomlin tried to quit after the first day of shooting and Parton took heat for working with Jane. Equal Rights Amendment champion Zoe Nicholson, Allison Janney from the 9 to 5 Broadway adaptation, and the TV version's Rita Moreno discuss how the story speaks to watershed feminist events like #MeToo and the Weinstein scandal, reminding audiences that working women still have so much to do. Myrocia Watamaniuk Read less Credits Director(s) Camille Hardman Gary Lane Producer(s) Camille Hardman Gary Lane Executive Producer(s) Larry Lane Steve Summers Regina K. Scully Shane McAnally Geralyn Dreyfous Gary Lane Writer(s) Camille Hardman Editor(s) Oreet Rees Elisa Bonora Cinematography Brian Tweedt Composer Jessica Weiss Sound Yahel Dooley Visit the film's website Read less Promotional Partners Special Presentations sponsored by Back to What's On See More & Save Buy your Festival ticket packages and passes today! Share
https://hotdocs.ca/whats-on/hot-docs-festival/films/2022/still-working-9-to-5
2022-06-25T14:42:48
CC-MAIN-2022-27
1656103035636.10
[]
hotdocs.ca
Restricting Access to Amazon S3 Content by Using an Origin Access Identity Topics. Important If you use an Amazon S3 bucket configured as a website endpoint, you must set it up with CloudFront as a custom origin and you can't use the origin access identity feature described in this topic. You can restrict access to content on a custom origin by using custom headers. For more information, see Using Custom Headers to Restrict Access to Your Content on a Custom Origin. When you first set up an Amazon S3 bucket as the origin for a CloudFront distribution, you grant everyone permission to read the files in your bucket.. If you use CloudFront signed URLs or signed cookies to limit access to files in your Amazon S3 bucket, you probably also want to prevent users from accessing your Amazon S3 files by using Amazon S3 URLs. If users access your files directly in Amazon S3, they bypass the controls provided by CloudFront signed URLs or signed cookies, for example, control over the date and time that a user can no longer access your content and control over which IP addresses can be used to access content. In addition, if users access files both through CloudFront and directly by using Amazon S3 URLs, CloudFront access logs are less useful because they're incomplete. To ensure that your users access your files using only CloudFront URLs, regardless of whether the URLs are signed, do the following:. For more information, see Creating a CloudFront Origin Access Identity and Adding it to Your Distribution. Change the permissions either on your Amazon S3 bucket or on the files in your bucket so that only the origin access identity has read permission (or read and download permission). When your users access your Amazon S3 files through CloudFront, the CloudFront origin access identity gets the files on behalf of your users. If your users request files directly by using Amazon S3 URLs, they're denied access. The origin access identity has permission to access files in your Amazon S3 bucket, but users don't. For more information, see Granting the Origin Access Identity Permission to Read Files in Your Amazon S3 Bucket. Note To create origin access identities, you must use the CloudFront console or CloudFront API version 2009-09-09 or later. For detailed information about setting up a private Amazon S3 bucket to use with CloudFront, see How to Set Up and Serve Private Content Using S3 and Amazon CloudFront. Creating a CloudFront Origin Access Identity and Adding it to Your Distribution An AWS account can have up to 100 CloudFront origin access identities. However, you can add an origin access identity to as many distributions as you want, so one origin access identity is usually sufficient. If you didn't create an origin access identity and add it to your distribution when you created the distribution, you can create and add one now by using either the CloudFront console or the CloudFront API: To use the CloudFront console – You can create an origin access identity and add it to your distribution at the same time. For step-by-step instructions, see Creating an Origin Access Identity and Adding it to Your Distribution. To use the CloudFront API – You create an origin access identity, and then you add it to your distribution. For step-by-step instructions, see the following: Creating an Origin Access Identity and Adding it to Your Distribution If you didn't create an origin access identity when you created your distribution, do the following. 
To create a CloudFront origin access identity using the CloudFront console Sign in to the AWS Management Console and open the CloudFront console at. Click the ID of a distribution that has an S3 origin, and then choose Distribution Settings. Choose the Origins tab. Choose an origin, and then choose Edit. For Restrict Bucket Access, choose Yes. Note If you don't see the Restrict Bucket Access option, your Amazon S3 origin might be configured as a website endpoint. In that configuration, S3 buckets must be set up with CloudFront as custom origins and you can't use an origin access identity with them. If you already have an origin access identity that you want to use, click Use an Existing Identity. Then select the identity in the Your Identities list. Note If you already have an origin access identity, we recommend that you reuse it to simplify maintenance. If you want to create an identity, click Create a New Identity. If you like, you can replace the bucket name in the Comment field with a custom description. If you want CloudFront to automatically give the origin access identity permission to read the files in the Amazon S3 bucket specified in Origin Domain Name, click Yes, Update Bucket Policy. Important If you choose Yes, Update Bucket Policy, CloudFront updates bucket permissions to grant the specified origin access identity the permission to read files in your bucket.. For more information, see Granting the Origin Access Identity Permission to Read Files in Your Amazon S3 Bucket. If you want to manually update permissions on your Amazon S3 bucket, choose No, I Will Update Permissions. Choose Yes, Edit. If you have more than one origin, repeat the steps to add an origin access identity for each one. Creating an Origin Access Identity Using the CloudFront API If you already have an origin access identity and you want to reuse it instead of creating another one, skip to Adding an Origin Access Identity to Your Distribution Using the CloudFront API. To create a CloudFront origin access identity by using the CloudFront API, use the POST Origin Access Identity API action. The response includes an Id and an S3CanonicalUserId for the new origin access identity. Make note of these values because you will use them later in the process: Id element – You use the value of the Idelement to associate an origin access ID with your distribution. S3CanonicalUserId element – You use the value of the S3CanonicalUserIdelement when you give CloudFront access to your Amazon S3 bucket or files. For more information, see CreateCloudFrontOriginAccessIdentity in the Amazon CloudFront API Reference. Adding an Origin Access Identity to Your Distribution Using the CloudFront API You can use the CloudFront API to add a CloudFront origin access identity to an existing distribution or to create a new distribution that includes an origin access identity. In either case, include an OriginAccessIdentity element. This element contains the value of the Id element that the POST Origin Access Identity API action returned when you created the origin access identity. You can add the OriginAccessIdentity element to one or more origins. 
See the following topics in the Amazon CloudFront API Reference: Create a new web distribution – CreateDistribution Update an existing web distribution – UpdateDistribution Granting the Origin Access Identity Permission to Read Files in Your Amazon S3 Bucket When you create or update a distribution, you can add an origin access identity and automatically update the bucket policy to give the origin access identity permission to access your bucket. Alternatively, you can choose to manually change the bucket policy or change ACLs, which control permissions on individual files in your bucket. Whichever method you use, you should still review the bucket policy for your bucket and review the permissions on your files to ensure that: CloudFront can access files in the bucket on behalf of users who are requesting your files through CloudFront. Users can't use Amazon S3 URLs to access your files. Important If you configure CloudFront to accept and forward to Amazon S3 all of the HTTP methods that CloudFront supports, create a CloudFront origin access identity to restrict access to your Amazon S3 content, and grant the origin access identity the desired permissions. For example, if you configure CloudFront to accept and forward these methods because you want to use the PUT method, you must configure Amazon S3 bucket policies or ACLs to handle DELETE requests appropriately so users can't delete resources that you don't want them to. Note the following: You might find it easier to update Amazon S3 bucket policies than ACLs because you can add files to the bucket without updating permissions. However, ACLs give you more fine-grained control because you're granting permissions on each file. By default, your Amazon S3 bucket and all of the files in it are private—only the AWS account that created the bucket has permission to read or write the files in it. If you're adding an origin access identity to an existing distribution, modify the bucket policy or any file ACLs as appropriate to ensure that the files are not publicly available. Grant additional permissions to one or more secure administrator accounts so you can continue to update the contents of the Amazon S3 bucket. Important There might be a brief delay between when you save your changes to Amazon S3 permissions and when the changes take effect. Until the changes take effect, you can get permission-denied errors when you try to access files in your bucket. Updating Amazon S3 Bucket Policies You can update the Amazon S3 bucket policy by using either the AWS Management Console or the Amazon S3 API: Grant the CloudFront origin access identity the desired permissions on the bucket.. For more information, go to Using Bucket Policies and User Policies in the Amazon Simple Storage Service Developer Guide. For an example, see "Granting Permission to an Amazon CloudFront Origin Identity" in the topic Bucket Policy Examples, also in the Amazon Simple Storage Service Developer Guide. Updating Amazon S3 ACLs Using either the AWS Management Console or the Amazon S3 API, change the Amazon S3 ACL: Grant the CloudFront origin access identity the desired permissions on each file that the CloudFront distribution serves.. If another AWS account uploads files to your bucket, that account is the owner of those files. Bucket policies only apply to files that the bucket owner owns. This means that if another account uploads files to your bucket, the bucket policy that you created for your OAI will not be evaluated for those files. 
For more information, see Managing Access with ACLs in the Amazon Simple Storage Service Developer Guide. You can also change the ACLs programmatically by using one of the AWS SDKs. For an example, see the downloadable sample code in Create a URL Signature Using C# and the .NET Framework. Using an Origin Access Identity in Amazon S3 Regions that Support Only Signature Version 4 Authentication Newer Amazon S3 regions require that you use signature version 4 for authenticated requests. (For the versions of signature supported in each Amazon S3 region, see Amazon Simple Storage Service (S3) in the topic Regions and Endpoints in the Amazon Web Services General Reference.) However, when you create an origin access identity and add it to a CloudFront distribution, CloudFront typically uses signature version 4 for authentication when it requests files in your Amazon S3 bucket. If you're using an origin access identity and if your bucket is in one of the regions that requires signature version 4 for authentication, note the following: DELETE, GET, HEAD, OPTIONS, and PATCHrequests are supported without qualifications. If you want to submit PUTrequests to CloudFront to upload files to your Amazon S3 bucket, you must add an x-amz-content-sha256header to the request, and the header value must contain a SHA256 hash of the body of the request. For more information, see the documentation about the x-amz-content-sha256header on the Common Request Headers page in the Amazon Simple Storage Service API Reference. POSTrequests are not supported.
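Tying the bucket-policy discussion above together, the statement that grants an origin access identity permission to read files generally looks like the following sketch; the bucket name and the OAI ID in the principal ARN are placeholders to replace with your own values.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity EXAMPLE_OAI_ID"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::my-example-bucket/*"
        }
    ]
}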
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/private-content-restricting-access-to-s3.html
2019-04-18T13:03:32
CC-MAIN-2019-18
1555578517639.17
[]
docs.aws.amazon.com
Q: Why did EDD Bookings start at version 0.1, not 1.0? We follow SemVer - a widely adopted standard for versioning that advocates the use of version numbers that have real meaning. In short, every number of a version reflects how that version is different from the one that came before it. Every version has three numbers: a major, minor and a patch number. Although sometimes the patch number is omitted when it is zero, as is the case with 0.1 and 1.0. The versions 0.1.0 and 1.0.0 are respectively equivalent. - The patch number indicates how many times the product has undergone maintenance changes. These include bug fixes, minor text or style changes, and so on. - The minor number indicates the number of times the product has had features and significant improvements added to it. These include new features, significant UI updates, and other similar changes. - The major version indicates how many times the product has undergone major API changes. In layman's terms, how many times the product has broken backwards compatibility. This is usually due to a rewrite. or massive refactoring. We built EDD Bookings from scratch, rather than refactor the previous version. Therefore, we've essentially built a new product that has, so far, not undergone any API changes that broke backwards compatibility. For this reason, and because we adhere to SemVer, the major version of our product is zero. On the other hand, we've created a product that is being released with an initial set of features, so the minor version should be one. Therefore, we're starting our versioning at 0.1 and not at 1.0. SemVer also states that between 0.1, 0.2, 0.3 and so on, the product is allowed to break backwards incompatibility until it has matured and reached a point of stability. At that point, the product may versioned as 1.0, to signify that it has reached its first stable version. While we aren't planning to break backwards compatibility between the minor versions that lead up to 1.0, if the need arises we will be providing measures and tools to ensure that your existing data is kept intact.
https://docs.eddbookings.com/article/408-q-why-did-edd-bookings-start-at-version-01-not-10
2019-04-18T13:22:23
CC-MAIN-2019-18
1555578517639.17
[]
docs.eddbookings.com
Return data to a cassandra server New in version 2015.5.0. Required python modules: cassandra-driver To use the cassandra returner, append '--return cassandra_cql' to the salt command. ex: salt '*' test.ping --return_cql cassandra Note: if your Cassandra instance has not been tuned much you may benefit from altering some timeouts in cassandra.yaml like so: # How long the coordinator should wait for read operations to complete read_request_timeout_in_ms: 5000 # How long the coordinator should wait for seq or index scans to complete range_request_timeout_in_ms: 20000 # How long the coordinator should wait for writes to complete write_request_timeout_in_ms: 20000 # How long the coordinator should wait for counter writes to complete counter_write_request_timeout_in_ms: 10000 # How long a coordinator should continue to retry a CAS operation # that contends with other proposals for the same row cas_contention_timeout_in_ms: 5000 # How long the coordinator should wait for truncates to complete # (This can be much longer, because unless auto_snapshot is disabled # we need to flush first so we can snapshot before removing the data.) truncate_request_timeout_in_ms: 60000 # The default timeout for other, miscellaneous operations request_timeout_in_ms: 20000 As always, your mileage may vary and your Cassandra cluster may have different needs. SaltStack has seen situations where these timeouts can resolve some stacktraces that appear to come from the Datastax Python driver. salt.returners.cassandra_cql_return. event_return(events)¶ Return event to one of potentially many clustered cassandra nodes Requires that configuration be enabled via 'event_return' option in master config. Cassandra does not support an auto-increment feature due to the highly inefficient nature of creating a monotonically increasing number across all nodes in a distributed database. Each event will be assigned a uuid by the connecting client. salt.returners.cassandra_cql_return. get_fun(fun)¶ Return a dict of the last function called for all minions salt.returners.cassandra_cql_return. get_jid(jid)¶ Return the information returned when the specified job id was executed salt.returners.cassandra_cql_return. get_jids()¶ Return a list of all job ids salt.returners.cassandra_cql_return. get_load(jid)¶ Return the load data that marks a specified jid salt.returners.cassandra_cql_return. get_minions()¶ Return a list of minions salt.returners.cassandra_cql_return. prep_jid(nocache, passed_jid=None)¶ Do any work necessary to prepare a JID, including sending a custom id salt.returners.cassandra_cql_return. returner(ret)¶ Return data to one of potentially many clustered cassandra nodes salt.returners.cassandra_cql_return. save_load(jid, load, minions=None)¶ Save the load to the specified jid id
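A master configuration sketch for the pieces described above might look like the following. The connection options under cassandra: are illustrative assumptions; verify the exact option names against the returner documentation for your Salt version.

# /etc/salt/master (or a drop-in under /etc/salt/master.d/)
event_return: cassandra_cql        # send the Salt event bus to Cassandra

# Illustrative connection settings (assumed option names; check your Salt version)
cassandra:
  cluster:
    - 192.168.50.11
    - 192.168.50.12
  port: 9042
  username: salt
  password: salt

With the returner configured, job results can then be sent per command, as described above:

salt '*' test.ping --return cassandra_cql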
https://docs.saltstack.com/en/latest/ref/returners/all/salt.returners.cassandra_cql_return.html
2019-04-18T13:12:24
CC-MAIN-2019-18
1555578517639.17
[]
docs.saltstack.com
Keyword: Child development and pedagogy, about learning-Tet paper 2, TRB TET study materials in tamil for tet exam, TNPSC online exam, TET news,Tamil Quiz,Tamil Objective questions,National Affairs,Exam Notification, Applied science question, TET Science latest notification, TET Exam study materials, Tamil textbooks, Current Affairs,TNPSC Previous Question,Physics Questions,Recruitment Notification,Model Question Papers,Tamil Books,, History of India,Tamil Nadu Teachers Recruitment Board Exam, Indian Economics Question, TET syllabus, Geography Questions, TNPSC Model Question Paper, , Tamil GK,TET Model Papers, General Tamil Question and answer, Chemistry Questions,TRB exam question, TRB,,Biology quiz, General Knowledge for All competitive Exam,Important questions,TNPSC News,Tamil Notes,TET previous paper,
http://docs.tnpscquestionpapers.com/2013/08/child-development-pedagogy-learning-tamil-tet.html
2019-04-18T12:38:47
CC-MAIN-2019-18
1555578517639.17
[]
docs.tnpscquestionpapers.com
Create a VM from a specialized VHD in a storage account Create a new VM by attaching a specialized unmanaged disk as the OS disk using Powershell. A specialized disk is a copy of VHD from an existing VM that maintains the user accounts, applications and other state data from your original VM. You have two options: Note This article has been updated to use the new Azure PowerShell Az module. To learn more about the new Az module and AzureRM compatibility, see Introducing the new Azure PowerShell Az module. For installation instructions, see Install Azure PowerShell. Option 1: Upload a specialized VHD You can upload the VHD from a specialized VM created with an on-premises virtualization tool, like Hyper-V, or a VM exported from another cloud. Prepare the VM You can upload a specialized VHD that was created using an on-premises VM or a VHD exported from another cloud. A specialized VHD maintains the user accounts, applications and other state data from your original (i.e. VM image. You can either use an existing storage account or create a new one. To show the available storage accounts, type: Get-AzResourceGroup To create a resource group named myResourceGroup in the West US region, type: New-AzResourceGroup -Name myResourceGroup -Location "West US" Create a storage account named mystorageaccount in this resource group by using the New-AzStorageAccount cmdlet: New-AzStorageAccount -ResourceGroupName myResourceGroup -Name mystorageaccount -Location "West US" ` -SkuName "Standard_LRS" -Kind "Storage" Upload the VHD to your storage account Use the Add-AzVhd cmdlet to upload the image. Option 2: Copy the VHD from an existing Azure VM You can copy a VHD to another storage account to use when creating a new, duplicate VM. Before you begin Make sure that you: - Have information about the source and destination storage accounts. For the source VM, you need to have the storage account and container names. Usually, the container name will be vhds. You also need to have a destination storage account. If you don't already have one, you can create one using either the portal (All Services > Storage accounts > Add) or using the New-AzStorageAccount cmdlet. - Have downloaded and installed the AzCopy tool. Deallocate the VM Deallocate the VM, which frees up the VHD to be copied. - Portal: Click Virtual machines > myVM > Stop - Powershell: Use Stop-AzVM to stop (deallocate) the VM named myVM in resource group myResourceGroup. Stop-AzVM -ResourceGroupName myResourceGroup -Name myVM The Status for the VM in the Azure portal changes from Stopped to Stopped (deallocated). Get the storage account URLs You need the URLs of the source and destination storage accounts. The URLs look like: https://<storageaccount>.blob.core.windows.net/<containerName>/. If you already know the storage account and container name, you can just replace the information between the brackets to create your URL. You can use the Azure portal or Azure Powershell to get the URL: - Portal: Click the > for All services > Storage accounts > storage account > Blobs and your source VHD file is probably in the vhds container. Click Properties for the container, and copy the text labeled URL. You'll need the URLs of both the source and destination containers. - Powershell: Use Get-AzVM to get the information for VM named myVM in the resource group myResourceGroup. In the results, look in the Storage profile section for the Vhd Uri. The first part of the Uri is the URL to the container and the last part is the OS VHD name for the VM. 
Get-AzVM -ResourceGroupName "myResourceGroup" -Name "myVM" Get the storage access keys Find the access keys for the source and destination storage accounts. For more information about access keys, see About Azure storage accounts. - Portal: Click All services > Storage accounts > storage account > Access keys. Copy the key labeled as key1. - Powershell: Use Get-AzStorageAccountKey to get the storage key for the storage account mystorageaccount in the resource group myResourceGroup. Copy the key labeled key1. Get-AzStorageAccountKey -Name mystorageaccount -ResourceGroupName myResourceGroup Copy the VHD You can copy files between storage accounts using AzCopy. For the destination container, if the specified container doesn't exist, it will be created for you. To use AzCopy, open a command prompt on your local machine and navigate to the folder where AzCopy is installed. It will be similar to C:\Program Files (x86)\Microsoft SDKs\Azure\AzCopy. To copy all of the files within a container, you use the /S switch. This can be used to copy the OS VHD and all of the data disks if they are in the same container. This example shows how to copy all of the files in the container mysourcecontainer in storage account mysourcestorageaccount to the container mydestinationcontainer in the mydestinationstorageaccount storage account. Replace the names of the storage accounts and containers with your own. Replace <sourceStorageAccountKey1> and <destinationStorageAccountKey1> with your own keys. AzCopy /Source: ` /Dest: ` /SourceKey:<sourceStorageAccountKey1> /DestKey:<destinationStorageAccountKey1> /S If you only want to copy a specific VHD in a container with multiple files, you can also specify the file name using the /Pattern switch. In this example, only the file named myFileName.vhd will be copied. AzCopy /Source: ` /Dest: ` /SourceKey:<sourceStorageAccountKey1> /DestKey:<destinationStorageAccountKey1> ` /Pattern:myFileName.vhd When it is finished, you will get a message that looks something like: Finished 2 of total 2 file(s). [2016/10/07 17:37:41] Transfer summary: ----------------- Total files transferred: 2 Transfer successfully: 2 Transfer skipped: 0 Transfer failed: 0 Elapsed time: 00.00:13:07 Troubleshooting - When you use AZCopy, if you see the error "Server failed to authenticate the request", make sure the value of the Authorization header is formed correctly including the signature. If you are using Key 2 or the secondary storage key, try using the primary or 1st storage key. Create the new VM You need toResourceGroup, and sets the subnet address prefix to 10.0.0.0/24. $rgName = "myResourceGroup" $subnetName = "mySubNet" $singleSubnet = New-Az. $location = "West US" $vnetName = "myVnetName" $vnet = New-AzVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location ` -AddressPrefix 10.0.0.0/16 -Subnet $singleSubnet Create the network security group and an RDP rule To be able to log in to your VM using RDP, you need to have an security rule that allows RDP access on port 3389. Because the VHD for the new VM was created from an existing specialized VM, after the VM is created you can use an existing account from the source virtual machine that had permission to log on using RDP. This needs to be completed prior to creating the network interface it will be associated with. This example sets the NSG name to myNsg and the RDP rule name to myRdpPublicIpAddress -Name $ipName -ResourceGroupName $rgName -Location $location ` -AllocationMethod Dynamic Create the NIC. 
In this example, the NIC name is set to myNicName. This step also associates the Network Security Group created earlier with this NIC. $nicName = "myNicName" $nic = New-AzNetworkInterface -Name $nicName -ResourceGroupName $rgName ` VMConfig -VMName $vmName -VMSize "Standard_A2" Add the NIC $vm = Add-AzVMNetworkInterface -VM $vmConfig -Id $nic.Id Configure the OS disk Set the URI for the VHD that you uploaded or copied. In this example, the VHD file named myOsDisk.vhd is kept in a storage account named myStorageAccount in a container named myContainer. $osDiskUri = "" Add the OS disk. In this example, when the OS disk is created, the term "osDisk" is appended to the VM name to create the OS disk name. This example also specifies that this Windows-based VHD should be attached to the VM as the OS disk. $osDiskName = $vmName + "osDisk" $vm = Set-AzVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption attach -Windows Optional: If you have data disks that need to be attached to the VM, add the data disks by using the URLs of data VHDs and the appropriate Logical Unit Number (Lun). $dataDiskName = $vmName + "dataDisk" $vm = Add-AzVMDataDisk -VM $vm -Name $dataDiskName -VhdUri $dataDiskUri -Lun 1 -CreateOption attach When using a storage account, the data and operating system disk URLs look something like this:. You can find this on the portal by browsing to the target storage container, clicking the operating system or data VHD that was copied, and then copying the contents of the URL. Complete the VM Create the VM using the configurations that we just created. #Create the new VM New-AzVM -ResourceGroupName $rgName -Location $location -VM $vm If this command was successful, you'll see output like this: RequestId IsSuccessStatusCode StatusCode ReasonPhrase --------- ------------------- ---------- ------------ True OK OK Verify that the VM was created You should see the newly created VM either in the Azure portal, under All services > Virtual machines, or by using the following PowerShell commands: $vmList = Get-AzVM -ResourceGroupName $rgName $vmList.Name Next steps Sign in to your new virtual machine. For more information, see How to connect and log on to an Azure virtual machine running Windows. Feedback Send feedback about:
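To connect once the VM appears, you can look up the public IP address that was created earlier. A small sketch, assuming the $rgName and $ipName variables from the walkthrough above:

$publicIp = Get-AzPublicIpAddress -ResourceGroupName $rgName -Name $ipName
# Open an RDP session to the new VM using that address
mstsc /v:$($publicIp.IpAddress)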
https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sa-create-vm-specialized
2019-04-18T12:53:26
CC-MAIN-2019-18
1555578517639.17
[]
docs.microsoft.com
Setting up Genesis eNews Extended with SendPress To find the information needed to setup Genesis eNews Extended you will need to go to the SendPress Subscribers Tab. From here you should see all of your lists and each list should have a form button. Click the form button and copy the information into the eNews widget settings. The image below shows what fields go where.
https://docs.sendpress.com/article/16-setting-up-genesis-enews-extended-with-sendpress
2019-04-18T13:27:49
CC-MAIN-2019-18
1555578517639.17
[array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54fced86e4b034c37ea94588/images/55060587e4b034c37ea96001/file-9VS8xDxniy.png', None], dtype=object) array(['http://d33v4339jhl8k0.cloudfront.net/docs/assets/54fced86e4b034c37ea94588/images/550605a7e4b034c37ea96002/file-TrDinadvMd.png', None], dtype=object) ]
docs.sendpress.com
Direct web services extended query parameters
The following parameters, when specified as elements of input parameters to SOAP query functions such as get, getKeys, and getRecords, have the additional behavior of filtering and modifying the results that are returned. Note: Extended query element names are preceded by two underscore characters.
Table 1. Parameter — Description — Example
__encoded_query — Specify an encoded query string to be used in filtering the returned results. The encoded query string format is similar to the value that may be specified in a sysparm_query URL parameter. You may refer to the encoded query building example in the RSS feed generator examples. Example: <__encoded_query>active=true^category='hardware'</__encoded_query>
__order_by — Instruct the returned results to be ordered by the specified field. Example: <__order_by>priority</__order_by>
__order_by_desc — Instruct the returned results to be ordered by the specified field, in descending order. Example: <__order_by_desc>opened_date</__order_by_desc>
__exclude_columns — Specify a list of comma-delimited field names to exclude from the result set. Example: <__exclude_columns>sys_created_on,sys_created_by,caller_id,priority</__exclude_columns>
__limit — Limit the number of records that are returned. Example: <__limit>100</__limit>
__first_row — Instruct the results to be offset by this number of records from the beginning of the set. When used with __last_row, has the effect of querying for a window of results. The results are inclusive of the first row number. Example: <__first_row>250</__first_row>
__last_row — Used together with __first_row to bound the window of results; the results are inclusive of the last row number. Example: <__last_row>500</__last_row>
__use_view — Specify a Form view by name, to be used for limiting and expanding the results returned. When the form view contains deep referenced fields, e.g. caller_id.email, this field will be returned in the result as well. Example: <__use_view>soap_view</__use_view>
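Putting several of these elements together, a getRecords request body might look like the following sketch. The instance endpoint, incident table, and namespace shown here are assumptions for illustration; adjust them to the table you are querying.

<soapenv:Envelope xmlns:soapenv="http://schemas.xmlsoap.org/soap/envelope/"
                  xmlns:inc="http://www.service-now.com/incident">
  <soapenv:Body>
    <inc:getRecords>
      <active>true</active>
      <__encoded_query>category='hardware'</__encoded_query>
      <__order_by_desc>opened_date</__order_by_desc>
      <__exclude_columns>sys_created_on,sys_created_by</__exclude_columns>
      <__limit>100</__limit>
    </inc:getRecords>
  </soapenv:Body>
</soapenv:Envelope>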
https://docs.servicenow.com/bundle/kingston-application-development/page/integrate/inbound-soap/reference/direct-ws-extended-params.html
2019-04-18T13:14:30
CC-MAIN-2019-18
1555578517639.17
[]
docs.servicenow.com
java.lang.Object org.springframework.security.access.hierarchicalroles.UserDetailsServiceWrapperorg.springframework.security.access.hierarchicalroles.UserDetailsServiceWrapper RoleHierarchyVoterinstead of populating the user Authentication object with the additional authorities. public class UserDetailsServiceWrapper This class wraps Spring Security's UserDetailsService in a way that its loadUserByUsername() method returns wrapped UserDetails that return all hierarchically reachable authorities instead of only the directly assigned authorities. public UserDetailsServiceWrapper() public void setRoleHierarchy(RoleHierarchy roleHierarchy) public void setUserDetailsService(UserDetailsService userDetailsService) public UserDetails loadUserByUsername(String username) throws UsernameNotFoundException, DataAccessEx DataAccessException- if user could not be found for a repository-specific reason public UserDetailsService getWrappedUserDetailsService()
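A typical wiring, sketched in plain Java. The delegate service and the role names are placeholders; the equivalent can also be declared in XML bean configuration.

RoleHierarchyImpl roleHierarchy = new RoleHierarchyImpl();
roleHierarchy.setHierarchy("ROLE_ADMIN > ROLE_STAFF\nROLE_STAFF > ROLE_USER");

UserDetailsServiceWrapper wrapper = new UserDetailsServiceWrapper();
wrapper.setRoleHierarchy(roleHierarchy);
wrapper.setUserDetailsService(myUserDetailsService); // your existing UserDetailsService

// loadUserByUsername now returns UserDetails whose authorities include every role
// reachable through the hierarchy, not only the directly assigned ones.
UserDetails user = wrapper.loadUserByUsername("alice");

As the class description notes, consider using a RoleHierarchyVoter instead if you prefer to resolve the hierarchy at decision time rather than when populating the Authentication object.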
https://docs.spring.io/spring-security/site/docs/3.0.x/apidocs/org/springframework/security/access/hierarchicalroles/UserDetailsServiceWrapper.html
2019-04-18T12:37:15
CC-MAIN-2019-18
1555578517639.17
[]
docs.spring.io
Lifecycle of a VM pod This document describes the lifecycle of VM pod managed by Virtlet. This description omits the details of volume setup (using flexvolumes), handling of logs, the VM console and port forwarding (done by streaming server), or port forwarding. Assumptions Communication between kubelet and Virtlet goes through criproxy which directs requests to Virtlet only if the requests concern a pod that has Virtlet-specific annotation or an image that has Virtlet-specific prefix. Lifecycle VM Pod Startup - A pod is created in Kubernetes cluster, either directly by the user or via some other mechanism such as a higher-level Kubernetes object managed by kube-controller-manager(ReplicaSet, DaemonSet etc.). - Scheduler places the pod on a node based on the requested resources (CPU, memory, etc.) as well as pod's nodeSelector and pod/node affinity constraints, taints/tolerations and so on. kubeletrunning on the target node accepts the pod. kubeletinvokes a CRI call RunPodSandbox to create the pod sandbox which will enclose all the containers in the pod definition. Note that at this point no information about the containers within the pod is passed to the call. kubeletcan later request the information about the pod by means of PodSandboxStatuscalls. - If there's a Virtlet-specific annotation kubernetes.io/target-runtime: virtlet.cloud, CRI proxy passes the call to Virtlet. - Virtlet saves sandbox metadata in its internal database, sets up the network namespace and then uses internal tapmanagermechanism to invoke ADDoperation via the CNI plugin as specified by the CNI configuration on the node. - The CNI plugin configures the network namespace by setting up network interfaces, IP addresses, routes, iptables rules and so on, and returns the network configuration information to the caller as described in the CNI spec. - Virtlet's tapmanagermechanism adjusts the configuration of the network namespace to make it work with the VM. - After creating the sandbox, kubelet starts the containers defined in the pod sandbox. Currently, Virtlet supports just one container per VM pod. So, the VM pod startup steps after this one describe the startup of this single container. - Depending on the image pull police of the container, kubelet checks if the image needs to be pulled by means of ImageStatuscall and then uses PullImageCRI call to pull the image if it doesn't exist or if imagePullPolicy: Alwaysis used. - If PullImageis invoked, Virtlet resolves the image location based on the image name translation configuration, then downloads the file and stores it in the image store. - After the image is ready (no pull was needed or the PullImagecall completed successfully), kubelet uses CreateContainerCRI call to create the container in the pod sandbox using the specified image. - Virtlet uses the sandbox and container metadata to generate libvirt domain definition, using vmwrapperbinary as the emulator and without specifying any network configuration in the domain. - After CreateContainercall completes, kubeletinvokes StartContainercall on the newly created container. - Virtlet starts the libvirt domain. libvirt invokes vmwrapperas the emulator, passing it the necessary command line arguments as well as environment variables set by Virtlet. vmwrapperuses the environment variable values passed to Virtlet to communicate with tapmanagerover an Unix domain socket, retrieving a file descriptor for a tap device and/or pci address of SR-IOV device set up by tapmanager. 
tapmanager uses its own simple protocol to communicate with vmwrapper because it needs to send file descriptors over the socket. This is not usually supported by RPC libraries, see e.g. grpc/grpc#11417. vmwrapper then updates the command line arguments to include the network interface information and execs the actual emulator (qemu). At this point the VM is running and accessible via the network, and the pod is in Running state, as is its only container. Deleting a pod This sequence is initiated when the pod is deleted, either by means of kubectl delete or a controller manager action due to deletion or downscaling of a higher-level object. - kubelet notices the pod being deleted. - kubelet invokes StopContainer CRI calls, which are forwarded to Virtlet based on the containing pod sandbox annotations. - Virtlet stops the libvirt domain. libvirt sends a signal to qemu, which initiates the shutdown. If it doesn't quit in a reasonable time determined by the pod's termination grace period, Virtlet will forcibly terminate the domain, thus killing the qemu process. - After all the containers in the pod (the single container in the case of a Virtlet VM pod) are stopped, kubelet invokes the StopPodSandbox CRI call. - Virtlet asks its tapmanager to remove the pod from the network by means of the CNI DEL command. - After StopPodSandbox returns, the pod sandbox will eventually be GC'd by kubelet by means of the RemovePodSandbox CRI call. - Upon RemovePodSandbox, Virtlet removes the pod metadata from its internal database.
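For reference, a minimal VM pod of the kind whose lifecycle is traced above can be declared like the following sketch. The image reference and memory limit are illustrative assumptions; the annotation is the Virtlet-specific marker mentioned at the start of this page, and the virtlet.cloud/ prefix is the convention for pointing Virtlet at a QCOW2 image.

apiVersion: v1
kind: Pod
metadata:
  name: cirros-vm
  annotations:
    # This annotation is what makes CRI Proxy route the pod's CRI calls to Virtlet.
    kubernetes.io/target-runtime: virtlet.cloud
spec:
  containers:
  - name: cirros-vm
    # Prefix tells Virtlet to treat the remainder as a VM image reference (assumed example image).
    image: virtlet.cloud/cirros
    resources:
      limits:
        memory: 160Mi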
https://docs.virtlet.cloud/dev/vm-pod-lifecycle/
2019-04-18T12:19:18
CC-MAIN-2019-18
1555578517639.17
[]
docs.virtlet.cloud
All OTHER physical files matching the search criteria and their associated logical files are displayed. The file details displayed are: You can display the details of any file by entering option 27 (Display Details) to the left of the file. The IBM i DSPFD details will be shown. To load or re-load the definition, enter 7 to the left of each of the required files. A logical file cannot be loaded by itself. It must be loaded with its associated physical file. Not all logical files need to be loaded. Load only those which you wish to use with LANSA. Go to Step 3. Submit the batch job.
https://docs.lansa.com/14/en/lansa009/content/lansa/insbbc_0012.htm
2019-04-18T12:36:08
CC-MAIN-2019-18
1555578517639.17
[]
docs.lansa.com
A well-constructed landing is essential. It must meet operational needs. It also must be built to last and not pose a major risk to the environment. - The site is well prepared (stripped and benched). - It is constructed to the right size/area specs. - It uses the lie of the land to increase workable area with fewer earthworks. - Water is directed away from the fill. The ‘fall’ is towards stable ground. - Trucks can turn around and loggers have parking close by. - The landing has a visible bench and has been compacted. The fill is stable. - It has well-constructed water control e.g. berms and water draining away from the fill. - The fill is free of stumps and woody debris. - Small landings work well if carefully planned and constructed. - Stumps and woody debris are through the fill. - The fill is too steep; slippage can already be seen at the toe of the fill (see arrow). - Drainage is across the narrow access road. - During harvesting when the water control will be damaged, the fill could erode.
https://docs.nzfoa.org.nz/live/nz-forest-road-engineering-manual-operators-guide/good-construction/a-well-constructed-landing/
2019-04-18T13:15:49
CC-MAIN-2019-18
1555578517639.17
[array(['/site/assets/files/1122/p13-top-lhs.430x323.jpg', None], dtype=object) array(['/site/assets/files/1123/pic-24.430x323.jpg', None], dtype=object) array(['/site/assets/files/1124/pic-23.430x323.jpg', None], dtype=object) array(['/site/assets/files/1125/pointer-1.430x323.jpg', None], dtype=object) ]
docs.nzfoa.org.nz
Greenplum PL/R Language Extension
Installing PL/R
The PL/R extension is available as a package. Download the package from Pivotal Network and install it with the Greenplum Package Manager (gppkg), as described below.
Installing the Extension Package
Before you install the PL/R extension, make sure that your Greenplum Database is running, you have sourced greenplum_path.sh, and that the $MASTER_DATA_DIRECTORY and $GPHOME variables are set.
- Download the PL/R extension package from Pivotal Network.
- Copy the PL/R package to the Greenplum Database master host.
- Install the software extension package by running the gppkg command. This example installs the PL/R extension on a Linux system: $ gppkg -i plr_PATH
Uninstalling PL/R
When you remove PL/R language support from a database, the PL/R routines that you created in the database will no longer work.
Remove PL/R Support for a Database
For a database that no longer requires the PL/R language, remove support for PL/R with the SQL DROP command.
Greenplum Database provides a collection of data science-related R libraries that can be used with the Greenplum Database PL/R language. You can download these libraries in .gppkg format from Pivotal Network. For information about the libraries, see R Data Science Library Package.
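Once PL/R support is enabled in a database, a PL/R function simply wraps an R expression in a SQL function definition. A small sketch (the function name and arguments are only an example):

CREATE OR REPLACE FUNCTION r_norm(n integer, mean float8, std_dev float8)
RETURNS float8[] AS
$$
  -- body is plain R; rnorm draws n samples from a normal distribution
  rnorm(n, mean, std_dev)
$$ LANGUAGE 'plr';

SELECT r_norm(10, 0, 1);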
http://gpdb.docs.pivotal.io/530/ref_guide/extensions/pl_r.html
2019-04-18T12:33:59
CC-MAIN-2019-18
1555578517639.17
[array(['/images/icon_gpdb.png', None], dtype=object)]
gpdb.docs.pivotal.io
About Our vision is to provide orthopedic and sports medicine services to active adults and young athletes that are innovative and help maintain functionality and an active lifestyle. Dr. Lebeck has been trained and is board certified in primary care and fellowship trained in sports medicine. She worked for the past 4 years with the US Army at Ft. Huachuca with a focus on medical and army readiness. She has perfected nonsurgical techniques to assist the body's natural tendency to heal itself using biologic technology, including platelet-rich plasma, prolotherapy, trigger point injections, and muscle rebalancing. She also focuses on overall health and wellness and an improvement in aging, with increased strength and weight loss. Board certification: American Board of Family Medicine
https://app.uber-docs.com/Specialists/SpecialistProfile/Ann-Lebeck-MD/Kynetic-Health
2021-07-24T05:40:22
CC-MAIN-2021-31
1627046150129.50
[]
app.uber-docs.com
Networking services across AWS accounts and VPCs If you're part of an organization with multiple teams and divisions, you probably deploy services independently into separate VPCs inside a shared AWS account or into VPCs that are associated with multiple individual AWS accounts. No matter which way you deploy your services, we recommend that you supplement your networking components to help route traffic between VPCs. For this, several AWS services can be used to supplement your existing networking components. AWS Transit Gateway — You should consider this networking service first. This service serves as a central hub for routing your connections between Amazon VPCs, AWS accounts, and on-premises networks. For more information, see What is a transit gateway? in the Amazon VPC Transit Gateways Guide. Amazon VPC and VPN support — You can use this service to create site-to-site VPN connections for connecting on-premises networks to your VPC. For more information, see What is AWS Site-to-Site VPN? in the AWS Site-to-Site VPN User Guide. Amazon VPC — You can use Amazon VPC peering to help you to connect multiple VPCs, either in the same account, or across accounts. For more information, see What is VPC peering? in the Amazon VPC Peering Guide. Shared VPCs — You can use a VPC and VPC subnets across multiple AWS accounts. For more information, see Working with shared VPCs in the Amazon VPC User Guide.
https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/networking-connecting-services-crossaccount.html
2021-07-24T03:48:19
CC-MAIN-2021-31
1627046150129.50
[]
docs.aws.amazon.com
Step 2: Create an IAM User and Policy In this step, you create an AWS Identity and Access Management (IAM) user with a policy that grants access to your Amazon DynamoDB Accelerator (DAX) cluster and to DynamoDB. You can then run applications that interact with your DAX cluster. To create an IAM user and policy Open the IAM console at . In the navigation pane, choose Users. Choose Add user. On the Details page, enter the following information: User name—Enter a unique name, for example: MyDAXUser. Access type—Choose Programmatic access. Choose Next: Permissions. On the Set permissions page, choose Attach existing policies directly, and then choose Create policy. On the Create policy page, choose Create Your Own Policy. On the Review policy page, provide the following information: Policy Name—Enter a unique name, for example, MyDAXUserPolicy. Description—Enter a short description for the policy. Policy Document—Copy and paste the following document. { "Version": "2012-10-17", "Statement": [ { "Action": [ "dax:*" ], "Effect": "Allow", "Resource": [ "*" ] }, { "Action": [ "dynamodb:*" ], "Effect": "Allow", "Resource": [ "*" ] } ] } Choose Create policy. Return to the Permissions page. In the list of policies, choose Refresh. To narrow the list of policies, choose Filter, Customer managed. Choose the IAM policy that need both of these identifiers for Step 3: Configure an Amazon EC2 Instance.
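If you prefer the AWS CLI to the console steps above, the same user and policy can be created along these lines. The user and policy names match the example above; the account ID in the policy ARN and the local policy file name are placeholders.

# Create the user and generate programmatic access keys
aws iam create-user --user-name MyDAXUser
aws iam create-access-key --user-name MyDAXUser

# Create the policy from the JSON document shown above, saved locally, then attach it
aws iam create-policy --policy-name MyDAXUserPolicy --policy-document file://MyDAXUserPolicy.json
aws iam attach-user-policy --user-name MyDAXUser \
    --policy-arn arn:aws:iam::123456789012:policy/MyDAXUserPolicy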
https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/DAX.client.create-user-policy.html
2021-07-24T04:19:59
CC-MAIN-2021-31
1627046150129.50
[]
docs.aws.amazon.com
Importing files to a FHIR Data Store After your FHIR Data Store has been created, you can import files from an Amazon Simple Storage Service (Amazon S3) bucket. You can use the console to create and manage import jobs, or use the import APIs. Amazon HealthLake accepts input files in newline delimited JSON (.ndjson) format, where each line consists of a valid FHIR resource. The APIs start, describe, and list ongoing import jobs. A customer-owned or AWS-owned KMS key is required for encryption of the Amazon S3 bucket for all import jobs. To learn more about creating and using a KMS Keys, see Creating keys in the AWS Key Management Service developer guide. Only one import or export job can run at time for an active Data Store. However, users can create, read, update, or delete FHIR resources while an import job is in progress. For each import job, output files are generated for successess or failures that can be analyzed via the Manifest.json file. Users can programatically navigate to these output files, and they are organized into two folders named SUCCESS and FAILURE. An output file is generated for each file in the import job. Because the output files may contain sensitive information, users are required to provide both an output Amazon S3 bucket and a KMS key for encryption. The following is an example of the output Manifest.json file. It is recommended users look at the file as the first step of troubleshooting a failed import job because it provides details on each file and what caused the import job to fail. { "inputDataConfig": { "s3Uri": "s3://inputS3Bucket/healthlake-input/invalidInput/" }, "outputDataConfig": { "s/", "encryptionKeyID": "arn:aws:kms:us-west-2:123456789012:key/fbbbfee3-20b3-42a5-a99d-c48c655ed545" }, "successOutput": { "success/SUCCESS/" }, "failureOutput": { "failure/FAILURE/" }, "numberOfScannedFiles": 1, "numberOfFilesImported": 1, "sizeOfScannedFilesInMB": 0.023627, "sizeOfDataImportedSuccessfullyInMB": 0.011232, "numberOfResourcesScanned": 9, "numberOfResourcesImportedSuccessfully": 4, "numberOfResourcesWithCustomerError": 5, "numberOfResourcesWithServerError": 0 } Performing an import You can start an import job using either the Amazon HealthLake console or the Amazon HealthLake import API, start-fhir-import-job API. Importing files using the APIs Prerequisites When you use the Amazon HealthLake APIs, you must first create an AWS Identity Access and Management (IAM) policy and attach it to an IAM role. To learn more about IAM roles and trust policies, see IAM Policies and Permissions. Customers must also use a KMS key for encryption. To learn more about using KMS Keys, see Amazon Key Management Service. To import files (API) Upload your data into an Amazon S3 bucket. To start a new import job, use the start-FHIR-import-joboperation. When you start the job, tell HealthLake the name of the Amazon S3 bucket that contains the input files, the KMS key you wish to use for encryption, and the output data configuration. To learn more about a FHIR import job, use the describe-fhir-import-job operation to get the job's ID, ARN, name, start time, end time, and current status. Use list-fhir-import-job to show all import jobs and their statuses. Importing files using the console To import files (console) Upload your data into an Amazon S3 bucket. To start a new import job, identify the Amazon S3 bucket and either create or identify the IAM role and the KMS key you want to use. To learn more about IAM roles and trust policies, see IAM Roles. 
To learn more about using KMS Keys, see Amazon Key Management Service. Monitor the status of your job. The console shows the status of all imports that are in progress. IAM policies for import jobs The IAM role that calls the Amazon HealthLake APIs must have a policy that grants access to the Amazon S3 buckets containing the input files. It must also be assigned a trust relationship that enables HealthLake to assume the role. To learn more about IAM roles and trust policies, see IAM Roles. The role must have the following policy: { "Version": "2012-10-17", "Statement": [ { "Action": [ "s3:ListBucket", "s3:GetBucketPublicAccessBlock", "s3:GetEncryptionConfiguration" ], "Resource": [ "arn:aws:s3:::inputS3Bucket", "arn:aws:s3:::outputS3Bucket" ], "Effect": "Allow" }, { "Action": [ "s3:GetObject" ], "Resource": [ "arn:aws:s3:::inputS3Bucket/*" ], "Effect": "Allow" }, { "Action": [ "s3:PutObject" ], "Resource": [ "arn:aws:s3:::outputS3Bucket/*" ], "Effect": "Allow" }, { "Action": [ "kms:DescribeKey", "kms:GenerateDataKey*" ], "Resource": [ "arn:aws:kms:us-east-1:012345678910:key/d330e7fc-b56c-4216-a250-f4c43ef46e83" ], "Effect": "Allow" } ] } The role must have the following trust relationship. { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": [ "healthlake.amazonaws.com" ] }, "Action": "sts:AssumeRole" } ] }
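Putting the pieces above together, a start-fhir-import-job call from the AWS CLI looks roughly like the following sketch. The bucket names, Data Store ID, role ARN, and KMS key are placeholders, and the exact parameter shapes are assumptions to confirm with `aws healthlake start-fhir-import-job help`.

aws healthlake start-fhir-import-job \
    --datastore-id <datastore-id> \
    --input-data-config S3Uri=s3://inputS3Bucket/healthlake-input/ \
    --job-output-data-config '{"S3Configuration": {"S3Uri": "s3://outputS3Bucket/healthlake-output/", "KmsKeyId": "arn:aws:kms:us-east-1:012345678910:key/<key-id>"}}' \
    --data-access-role-arn arn:aws:iam::012345678910:role/HealthLakeImportRole \
    --job-name my-import-job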
https://docs.aws.amazon.com/healthlake/latest/devguide/import-datastore.html
2021-07-24T03:59:07
CC-MAIN-2021-31
1627046150129.50
[]
docs.aws.amazon.com
Refreshing visuals manually in dashboards In dashboards, CDP Data Visualization enables you to manually refresh visuals. If you have a slow back-end data connection and your dashboard is loading slowly, you can choose to load the filters first, specify their settings, and then manually load the visuals. To enable manual refresh of visuals in a dashboard, see Refreshing Visuals Manually on Dashboard. - Open any dashboard in Edit mode. We opened the World Life Expectancy dashboard. - Under Settings, navigate to the General menu, and select the Refresh visuals manually option. - Under Filters, click on the year field to add a new filter widget to the dashboard and select 1900 from the dropdown. - Click Save. - Switch to View mode. - Only the filter widget appears on the dashboard without any visuals. To view visuals, click Refresh Visuals on the top right corner of the dashboard. The visual now appears in the dashboard along with the defined filter. - Let us add year 1901 to the filter. As soon as we change the filter, notice that the filter refreshes to show both the values 1900, 1901 but the visual does not refresh. The year column continues to show only 1900 values. Also, notice that the Refresh Visuals option again appears on the top right corner of the dashboard. - To render the updated visual, click Refresh Visuals. Notice that the year column now shows the updated values, 1900 and 1901. - Click Save.
https://docs.cloudera.com/data-visualization/cdsw/howto-dashboards/topics/viz-refresh-visuals-manually-dashboard.html
2021-07-24T05:47:53
CC-MAIN-2021-31
1627046150129.50
[]
docs.cloudera.com
ModelProperty.ClearValue Method [This documentation is for preview only, and is subject to change in later releases. Blank topics are included as placeholders.] Clears the local value for the property. Namespace: System.Activities.Design.Model Assembly: System.Activities.Design.Base (in System.Activities.Design.Base.dll) Syntax 'Declaration Public MustOverride Sub ClearValue 'Usage Dim instance As ModelProperty instance.ClearValue() public abstract void ClearValue() public: virtual void ClearValue() abstract public abstract function ClearValue() abstract ClearValue : unit -> unit Remarks When there is no local value set for the property, a value may be inherited from higher in the property hierarchy. If you wish to set the value to null (a null reference (Nothing in Visual Basic)), call SetValue(Object) instead.
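As a rough illustration of the remark above — the property name, the modelItem variable, and the Properties string indexer are assumptions from the surrounding designer object model, not taken from this page:

// Hypothetical sketch: clear a locally set value so any inherited/default value applies again.
ModelProperty displayName = modelItem.Properties["DisplayName"];
displayName.SetValue("My Activity");   // sets a local value
displayName.ClearValue();              // removes the local value; it does NOT store null
// To explicitly store a null value, call SetValue(null) instead, as the remarks suggest.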
https://docs.microsoft.com/en-us/previous-versions/dd453699(v=vs.100)
2021-07-24T06:18:02
CC-MAIN-2021-31
1627046150129.50
[]
docs.microsoft.com
Cloud Images and APIs¶ When you begin writing your own software to interact with Cloud Images, you might want to learn about how Cloud Images works in the Cloud Control Panel and how APIs are documented at Rackspace.
Images API demonstration¶ Using the process suggested at Cloud Images API investigation, this section provides an example of how you can plan and then write your own software to perform one simple task: list all your cloud images. When you log in to the Cloud Control Panel, your session begins with information about your servers. To see your Cloud Images information, click Servers and then click Saved Images. By default, the list is focused on your account’s home region, showing all images in that region; you can select a different region and you can search for a specific image. If your list of images is not empty, then for each image you can see: Its name The server from which it was created Its creation date
In the API documentation, operations are grouped by the service they belong to (for example, Cloud Images or Cloud Files) and the scope they act on (for example, images or image schemas). You can see all Cloud Images operations in the Cloud Images API Reference. In the group of Images operations, you can see that: Sending a GET request to the images URI requests a basic list of information about public images Sending a GET request to the same URI and appending an image ID requests an expanded list of information about a single image The request parameters and sample response shown here can help you formulate a basic List images request to the API and understand the API’s response. In the sample response, name, created_at, size, and id correspond to the information available on the Cloud Control Panel. In the Getting Started Guide for the Cloud Images, you can see an example with the cURL command-line interface (CLI) for Listing images.
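As a sketch of that List images call, the region in the endpoint and the AUTH_TOKEN environment variable below are assumptions; substitute the Cloud Images endpoint and authentication token for your own account.

curl -s https://iad.images.api.rackspacecloud.com/v2/images \
     -H "X-Auth-Token: $AUTH_TOKEN" \
     -H "Accept: application/json" | python -m json.tool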
https://docs.rackspace.com/docs/user-guides/infrastructure/cloud-interfaces/api/cloudimages-api
2021-07-24T05:37:42
CC-MAIN-2021-31
1627046150129.50
[]
docs.rackspace.com
This topic contains information on tracing and within s-Server including the following: SQLstream uses the standard Java logging interface (defined in the package java.util.logging) to trace execution for debugging purposes. Trace files can let you troubleshoot stream functioning during development. Errors logged to the trace log are also available in the global error stream. See the Streaming SQL Reference Guide for more details.. Trace files and ALL_TRACE / SAVED_TRACE also include Periodic Parser Statistics. Trace files are useful in troubleshooting streams, because they allow you to see when a stream failed and why. s-Server has two trace logs. The first is the most useful in troubleshooting streams. To control the depth of the logging and the formatting of the output, Java logging uses external configuration files in the standard Java properties format. You determine the location of these files during installation. By default, SQLstream uses the following logging configuration files: The SQLstream s-Server configuration controls tracing to three different log files: You can modify the properties files to change the default logging. Before doing so, you should save a copy of the original, unmodified file. Note: The installed files are made read-only by default so as to avoid accidental modification. Because trace files are intended for debugging, by default, very little is logged to these files in normal conditions. For debugging you can change the default tracing level by editing the Trace.properties or ClientTrace.properties file. To view more logging, open one of the properties files in a text editor and uncomment (remove the “#") lines such as the following (both from ClientTrace.properties): # com.sqlstream.aspen.jdbc.level=FINE # com.sqlstream.aspen.sdp.level=FINE You can also change the tracing level for a particular package or class. For example, suppose you created a set of plugins that you want to debug at a high level of tracing. If the plugins are all part of a package called com.example.plugin, you can add the following to your Trace.properties: com.example.plugin.level=FINEST You may also change the logging level from the SQL command line (or as part of a SQL script) by using the SETTRACELEVEL procedure. This can be more convenient than updating the Trace.properties file and waiting for the new levels to take effect. To repeat the previous example: CALL SETTRACELEVEL('com.example.plugin.level','FINEST'); See also the topic Error Logging in the Streaming SQL Reference Guide. The default tracing configurations are intentionally set for production mode, with limits placed on the maximum size of a trace file, and the number of times the file will be rotated, controlled by these properties in Trace.properties:: java.util.logging.FileHandler.limit=10000000 java.util.logging.FileHandler.count=20 For testing, it is can be helpful to change these settings to allow more information to be retained in trace files. Just be careful that your tracefiles do not use up all available disk space. To control the maximum file size and rollover count, change the following properties in the file /var/log/sqlstream/Trace.properties: For SQLstream s-Studio, use the following properties in /var/log/sqlstream/ClientTrace.properties: If you set file size limit to zero, the file will never roll over, even if a rollover count is set. You can override which logging configuration file SQLstream uses by overriding the value for the property java.util.logging.config.file. 
To do so, set the property in aspen.custom.properties. $SQLSTREAM_HOME/aspen.custom.properties. If this file is not present, you may need to create it or copy it from $SQLSTREAM_HOME/support. To change the user interface logging configuration file, set the property in $HOME/.SQLstreamrc. For example, to use the file /usr/local/MyTrace.properties for user interface logging configuration, add the following line to the .SQLstreamrc file in your home directory: java.util.logging.config.file=/usr/local/MyTrace.properties The trace log contains information on how parsers are reading data from external sources such as log files, Kafka, Amazon Kinesis, and AMQP. This is known as Periodic Parser Statistics Logging. These sources are read through the Extensible Common Data Adapter. For more information on parsing data, see the topic Reading Data from Other Sources in the Integrating Guavus SQLstream with Other Systems guide. You can control how often s-Server logs parser statistics with the following entry in Trace.properties: com.sqlstream.aspen.namespace.common.input.logPeriod=10 This value determines how often in minutes that s-Server logs parser statistics. By default it is set to every 10 minutes. For each Extensible Common Data reader open, the trace log lists information on the following: Statistics on runnable and blocked parsers are approximated based on sampling log parser statistics. You can control sampling and reporting by changing the following parameters in Trace.properties: INFO: Parser for LOCALDB.TEST_READER.P_Pump: throughput=12598532bps, runnable 20%, blocked waiting for input 80%, blocked waiting to write downstream 0%
https://docs.sqlstream.com/administration-guide/using-trace-files/
2021-07-24T03:43:11
CC-MAIN-2021-31
1627046150129.50
[]
docs.sqlstream.com
iOS Components integration
Include MOLPay in the list of available payment methods.
- Specify in your /paymentMethods request: - countryCode: MY or TH. - amount.currency: MYR or THB. - channel: Specify iOS.
- Decode the /paymentMethods response with the PaymentMethods structure: let paymentMethods = try JSONDecoder().decode(PaymentMethods.self, from: paymentMethodsResponse)
- Find the payment method whose type is molpay_ebanking_fpx_MY, molpay_ebanking_TH, or molpay_ebanking_VN and put it into an object. For example, molpayPaymentMethod.
- Create an instance of APIContext with the following parameters: // When you're ready to go live, change environment to Environment.live // You can also use other environment values described in let apiContext = APIContext(environment: Environment.test, clientKey: clientKey)
- Initialize the MOLPay Component: let molpayComponent = MolPayComponent(paymentMethod: molpayPaymentMethod, apiContext: apiContext) molpayComponent.delegate = self // In this example, the Pay button will display 10 EUR. // The value is in minor units. Change the currencyCode to the currency for the MOLPay Component. molpayComponent.payment = Payment(amount: Amount(value: 1000, currencyCode: "EUR")) present(molpayComponent.viewController, animated: true)
You need this to initialize the Redirect Component.
Handle the redirect
- Use the Redirect Component to redirect the shopper to the issuing bank's app or website.
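The component reports the shopper's selection through the delegate assigned above. A minimal sketch of those callbacks, assuming the usual PaymentComponentDelegate shape of the Adyen iOS SDK; the method signatures and the sendPaymentsRequest helper are assumptions to verify against your SDK version.

extension CheckoutViewController: PaymentComponentDelegate {
    func didSubmit(_ data: PaymentComponentData, from component: PaymentComponent) {
        // Pass data.paymentMethod (plus your returnUrl) to your server, which makes the
        // /payments request and returns the redirect action used by the Redirect Component.
        sendPaymentsRequest(with: data)   // hypothetical helper calling your own backend
    }

    func didFail(with error: Error, from component: PaymentComponent) {
        // Dismiss the component and show an error message to the shopper.
    }
}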
https://docs.adyen.com/pt/payment-methods/molpay/ios-component
2021-07-24T05:22:41
CC-MAIN-2021-31
1627046150129.50
[]
docs.adyen.com
HP-UX views
The HP-UX views enable you to manage the capacity of a virtualized infrastructure running in an HP-UX environment. They provide an overview of the available hosts in the HP-UX virtualization environment, including details about CPU and memory utilization, pool entitlement utilization, unused resources, and more.
Prerequisites
Ensure that the administrator has completed the following tasks: For information about views in the TrueSight console, see Accessing and using capacity views in the TrueSight console.
Certified data sources
The BMC - TrueSight Capacity Optimization Gateway VIS files parser is the certified data source for these views. You must configure and run this ETL module to import data into the view. For more information, see BMC - TrueSight Capacity Optimization Gateway VIS files parser.
The HP-UX view displays data for all systems of the following types: - Virtual Machine - HP Integrity - Virtual Host - HP Integrity - HPnPartition - HPvPartition - Virtual Host - HPnPar/vPar
Video
The following video (3:39) provides a brief introduction to the HP-UX views.
Using the views
https://docs.bmc.com/docs/capacityoptimization/btco115/hp-ux-views-830157069.html
2021-07-24T05:11:07
CC-MAIN-2021-31
1627046150129.50
[]
docs.bmc.com
You can initiate creation of a new program such as Vulnerability Disclosure program, On-demand program, or Bug Bounty program. To initiate a self-service program, on the Dashboard page, click Start Now. The Select an engagement to launch window is displayed. To proceed, see the following sections based on the type of program you want to create:
https://docs.bugcrowd.com/customers/program-management/adding-new-engagements/
2021-07-24T04:11:16
CC-MAIN-2021-31
1627046150129.50
[array(['/assets/images/customer/add-new-engagement/start-now.png', 'start-now'], dtype=object) array(['/assets/images/customer/add-new-engagement/select-engagement.png', 'start-now'], dtype=object) ]
docs.bugcrowd.com
To render N consecutive pages of a document:
- Create an HtmlViewOptions (or JpgViewOptions, or PngViewOptions, or PdfViewOptions) object;
- Create an array of the desired page numbers, for example by passing the start/end page numbers to the Enumerable.Range function;
- Pass the page numbers array to the View method.
The following code sample shows how to render N consecutive pages of a document. The source file name and the embedded-resources option used to complete the truncated sample are illustrative assumptions.
int[] pageNumbers = new int[] { 1, 2, 3 };
using (Viewer viewer = new Viewer("sample.docx"))
{
    HtmlViewOptions viewOptions = HtmlViewOptions.ForEmbeddedResources();
    viewer.View(viewOptions, pageNumbers);
}
https://docs.groupdocs.com/viewer/net/view-n-consecutive-pages/
2021-07-24T03:38:29
CC-MAIN-2021-31
1627046150129.50
[]
docs.groupdocs.com
New in 0.12.0 (2021-07-15)¶ This release adds features for tighter integration with Pyro for model development, fixes for SOLO, and other enhancements. Users of SOLO are strongly encouraged to upgrade as previous bugs will affect performance. Enchancements¶ Add scvi.model.base.PyroSampleMixinfor easier posterior sampling with Pyro (#1059). Add scvi.model.base.PyroSviTrainMixinfor automated training of Pyro models (#1059). Ability to pass kwargs to Classifierwhen using SOLO(#1078). Ability to get doublet predictions for simulated doublets in SOLO(#1076). Add “comparison” column to differential expression results (#1074). Clarify CellAssignsize factor usage. See class docstring. Changes¶ Update minimum Python version to 3.7.2 (#1082). Slight interface changes to PyroTrainingPlan. “elbo_train” and “elbo_test” are now the average over minibatches as ELBO should be on scale of full data and optim_kwargs can be set on initialization of training plan (#1059, #1101). Use pandas read pickle function for pbmc dataset metadata loading (#1099). Adds n_samples_overall parameter to functions for denoised expression/accesibility/etc. This is used in during differential expression (#1090). Ignore configure optimizers warning when training Pyro-based models (#1064). Bug fixes¶ Fix scale of library size for simulated doublets and expression in SOLOwhen using observed library size to train original SCVImodel (#1078, #1085). Currently, library sizes in this case are not appropriately put on the log scale. Fix issue where anndata setup with a layer led to errors in SOLO(#1098). Fix adata parameter of scvi.external.SOLO.from_scvi_model(), which previously did nothing (#1078). Fix default max_epochs of SCANVIwhen initializing using pre-trained model of SCVI(#1079). Fix bug in predict() function of SCANVI, which only occurred for soft predictions (#1100).
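A small usage sketch of the items called out above; the AnnData object and training settings are placeholders, and argument support may vary slightly between releases.

import scvi

scvi.data.setup_anndata(adata)            # adata: your AnnData object
model = scvi.model.SCVI(adata)
model.train()

# SOLO doublet detection from a trained SCVI model (see the fixes above)
solo = scvi.external.SOLO.from_scvi_model(model)
solo.train()
doublet_scores = solo.predict(soft=True)

# New n_samples_overall argument for denoised expression (used during DE)
expr = model.get_normalized_expression(adata, n_samples_overall=1000)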
https://docs.scvi-tools.org/en/stable/release_notes/v0.12.0.html
2021-07-24T04:41:50
CC-MAIN-2021-31
1627046150129.50
[]
docs.scvi-tools.org
Both varnishlog and varnishncsa support JSON output. This is done via the line-delimited JSON format, LDJSON, which uses the newline character to separate valid JSON objects. To output LDJSON from varnishlog you add the -S argument to the varnishlog command. For example: varnishlog -g request -S
varnishncsa uses the -j argument to make sure all variables are JSON safe. For proper JSON support the format string should be a valid JSON object. For example: varnishncsa -j -F '{ "received_at": "%t", "response_bytes": %b, "request_bytes": %I, "time_taken": %D, "first_line": "%r", "status": %s }'
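Because the output is one JSON object per line, it pipes cleanly into line-oriented JSON tools. For example, a quick sketch with jq (assuming jq is installed) to pull out slow requests from a varnishncsa format like the one above, where %D is in microseconds:

varnishncsa -j -F '{ "status": %s, "time_taken": %D, "first_line": "%r" }' \
  | jq 'select(.time_taken > 1000000)'   # requests slower than one second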
https://docs.varnish-software.com/varnish-cache-plus/features/json-logging/
2021-07-24T05:12:36
CC-MAIN-2021-31
1627046150129.50
[]
docs.varnish-software.com
Installation¶ You can use pip to install wheel: pip install wheel If you do not have pip installed, see its documentation for installation instructions. If you prefer using your system package manager to install Python packages, you can typically find the wheel package under one of the following package names: - python-wheel - python2-wheel - python3-wheel
https://wheel.readthedocs.io/en/latest/installing.html
2020-07-02T21:10:43
CC-MAIN-2020-29
1593655880243.25
[]
wheel.readthedocs.io
Situation

A small renewable energy company had been receiving Small Business Innovation Research (SBIR) funding for work with the US Marine Corps (USMC). The funding stream was coming to a close, and the company was seeking assistance securing new DOD revenue sources through the FY 2020 Appropriations process. The company was promoting much-needed technology to improve renewable energy options benefiting small operating bases in forward-deployed areas.

Approach

Winning Strategies Washington (WSW) developed a strategic advocacy plan for securing a "plus up" in the FY 2020 Defense Appropriations bill. The firm worked with the client to finalize key arguments and messaging, including drafting white papers and filing appropriations request forms. The firm put the plan into action, meeting with Members of the Senate Armed Services and Intelligence Committees and with House and Senate Defense Appropriations Committee staff working for the Committee Chairs and Ranking Members. WSW facilitated an early site visit by a senior Member of the House Appropriations Committee to view the technology and meet with senior company leadership and outside advisors. A home-state Senator and Member of the Armed Services Committee also sent two staff delegations to the company's facility. WSW was successful in engaging with two Senators who represent a military base where the research and development was being conducted. Both Senate offices weighed in with support for the funding increase.

Impact

The FY 2020 House Defense Appropriations bill included a $3 million plus-up for the USMC program. The Senate FY 2020 National Defense Authorization Act (NDAA) included a $5 million new authorization for the same USMC program.
https://docs.202works.com/en/articles/3308556-case-study-innovative-defense-company
2020-07-02T22:14:40
CC-MAIN-2020-29
1593655880243.25
[array(['https://downloads.intercomcdn.com/i/o/103657918/67ae1f01a8c40bf4ebd058fc/wsw-logo.png', None], dtype=object) ]
docs.202works.com
Login and session information

Almost every BMC Remedy AR System C API function has a control parameter as its first input argument. This parameter contains the login and session information required to connect to a BMC Remedy AR System server and is, thus, required for almost every operation.

Note: The control parameter was optional in BMC Remedy AR System version 3.x and earlier. Therefore, you must add the control parameter to recompiled pre-4.x API programs if you use these programs with later versions of the BMC Remedy AR System API.

The control parameter is a pointer to an ARControlStruct structure (see the following figure).

Structure used to provide required login information

This structure has the following elements:

Nearly all function calls require the login and session information that ARControlStruct contains (stored in both single- and multiple-server environments) because the API does not always maintain a server connection between calls. When a program calls ARInitialization at the beginning of its execution, the BMC Remedy AR System C API returns the data in an ARControlStruct. This is the structure that the program passes as an input parameter in subsequent API calls.
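As a rough illustration of the pattern described above (a hedged sketch: the header names, field names, and constants follow common AR System C API samples and should be verified against ar.h for your API version):

#include <string.h>
#include "ar.h"
#include "arextern.h"

int main(void)
{
    ARControlStruct control;   /* login and session information */
    ARStatusList    status;

    memset(&control, 0, sizeof(control));
    /* Field names here are assumptions based on common AR System samples. */
    strncpy(control.user, "Demo", AR_MAX_NAME_SIZE);
    strncpy(control.password, "", AR_MAX_PASSWORD_SIZE);
    strncpy(control.server, "arserver.example.com", AR_MAX_SERVER_SIZE);

    /* ARInitialization fills in session data that later calls reuse. */
    if (ARInitialization(&control, &status) >= AR_RETURN_ERROR)
        return 1;

    /* ... pass &control as the first argument to subsequent API calls ... */

    ARTermination(&control, &status);
    return 0;
}

Subsequent calls (for example ARGetEntry or ARSetEntry) then take &control as their first argument, which is exactly the control parameter described above.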
https://docs.bmc.com/docs/ars1805/login-and-session-information-804716467.html
2020-07-02T23:11:11
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
The PATROL for Linux contains monitor types (application classes) that are segregated based on the components.

Note: Monitor types in BMC ProactiveNet Central Monitoring Administration are known as application classes in BMC PATROL. Likewise, parameters and attributes mean the same: parameters in BMC PATROL for Oracle WebLogic are known as attributes in BMC ProactiveNet Central Monitoring Administration.

The following table explains the default attribute (parameter) properties available in BMC PATROL and BMC ProactiveNet Performance Management.

Default parameter properties

The following application classes are available in BMC PATROL for Linux:
https://docs.bmc.com/docs/exportword?pageId=595559836
2020-07-02T23:02:54
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
Crate yaserde

YaSerDe is a framework for serializing and deserializing Rust data structures efficiently and generically from and into XML.

- Generic data structure deserialization framework.
- Generic data structure serialization framework.
- A visitor that can be implemented to retrieve information from a source file.
- A data structure that can be deserialized from any data format supported by YaSerDe.
- A data structure that can be serialized into any data format supported by YaSerDe.
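A minimal sketch of what using the crate typically looks like (the struct and element names are made up for illustration; the derive macros come from the companion yaserde_derive crate, and helper function names can differ slightly between yaserde versions):

use yaserde_derive::{YaDeserialize, YaSerialize};

#[derive(Debug, Default, PartialEq, YaSerialize, YaDeserialize)]
#[yaserde(rename = "greeting")]
struct Greeting {
    lang: String,
    body: String,
}

fn main() {
    let value = Greeting { lang: "en".into(), body: "hello".into() };

    // Serialize the struct into an XML string...
    let xml = yaserde::ser::to_string(&value).unwrap();

    // ...and deserialize it back into the same data structure.
    let parsed: Greeting = yaserde::de::from_str(&xml).unwrap();
    assert_eq!(value, parsed);
}

Serializing and then deserializing should round-trip the struct, which is a quick way to sanity-check the derive attributes.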
https://docs.rs/yaserde/0.3.16/x86_64-apple-darwin/yaserde/index.html
2020-07-02T22:37:14
CC-MAIN-2020-29
1593655880243.25
[]
docs.rs
Citrix Workspace TOTP

Introduction

Multi Factor Authentication (MFA) is an extra layer of security used when logging into websites or apps. Individuals are authenticated through more than one required security and validation procedure that only you know or have access to. Citrix Workspace™ offers a complete and integrated digital workspace that's streamlined for IT control and easily accessible for users. Acceptto, as a Citrix Ready Partner, offers a simple solution for adding Multi Factor Authentication (MFA) to Citrix Workspace via its TOTP solution.

Pre-Requisites

- The user population that is going to be authenticated via TOTP must be enrolled in the It'sMe™ Application.
- A user with administrative privileges for Citrix Cloud Login.

Connect your Active Directory to Citrix Cloud

By default, Citrix Cloud uses the Citrix Identity provider to manage the identity information for all users in your Citrix Cloud account. You can change this to use Active Directory (AD) instead. Connecting your Active Directory to Citrix Cloud involves installing Cloud Connectors in your domain. Citrix recommends installing two Cloud Connectors for high availability.

- After Citrix Cloud verifies the connection with your Active Directory, click Next. The Configure Token page appears and the Single device option is selected by default.
- Click Save and Finish to complete the configuration. On the Authentication tab, the Active Directory + Token entry is marked as Enabled.

Configure Citrix Workspace to use Active Directory + Token as its authentication source

- From the Citrix Cloud menu, select Workspace Configuration.
- From the Authentication tab, select the Active Directory + Token button.

After enabling Active Directory plus token authentication, Workspace subscribers can register their device and use an Acceptto It'sMe app to generate tokens.

Test your setup

- Go to your Workspace URL and enter your credentials.
- A verification code is sent to your email. Enter the verification code and your password and click Next.
- Open your It'sMe app. Go to Offline TOTP, select Scan QR Code and click Finish and Sign in.
- The new TOTP is shown on your It'sMe mobile app.
- You will be redirected to the Workspace login page with a password Token. Enter your credentials and the Acceptto Offline TOTP, then click Sign In to go to your Citrix Workspace landing page.
- You're now logged in with Acceptto's MFA to your Citrix Workspace.
https://docs.acceptto.com/docs/citrix/workspace-totp
2020-07-02T21:51:31
CC-MAIN-2020-29
1593655880243.25
[array(['/docs/assets/citrix/workspace-totp/citrix-identity-management.png', 'citrix identity and access management'], dtype=object) array(['/docs/assets/citrix/workspace-totp/citrix-install-connector.png', 'citrix active directory'], dtype=object) array(['/docs/assets/citrix/workspace-totp/citrix-configure-token.png', 'citrix configure token'], dtype=object) array(['/docs/assets/citrix/workspace-totp/citrix-app-list.png', 'Citrix Active Directory list'], dtype=object) array(['/docs/assets/citrix/workspace-totp/citrix-cloud-config.png', 'Citrix cloud menu list'], dtype=object) array(['/docs/assets/citrix/workspace-totp/workspace-ad-token.png', 'Workspace configuration'], dtype=object) array(['/docs/assets/citrix/workspace-totp/workspace-login-form.png', 'Citrix Workspace login form'], dtype=object) array(['/docs/assets/citrix/workspace-totp/workspace-verification-code.png', 'Citrix Workspace login form'], dtype=object) array(['/docs/assets/citrix/workspace-totp/workspace-qr.png', 'Citrix Workspace qr code'], dtype=object) array(['/docs/assets/citrix/workspace-totp/itsme-totp.png', "It'sMe totp list"], dtype=object) array(['/docs/assets/citrix/workspace-totp/workspace-domain.png', 'Citrix Workspace sign in'], dtype=object) ]
docs.acceptto.com
TOPICS× Domain The 'Domain' dimension reports which organizations or Internet service providers that visitors use to access the internet. Populate this dimension with data This dimension uses information around the path that the image request took to reach Adobe data collection servers. It does not require any configuration, and does not have a variable to populate. It works out of the box with all AppMeasurement implementations. Dimension values Example dimension values include comcast.net , rr.com , sbcglobal.net , and amazonaws.com . Note that these are domains that ISP's use to direct traffic, and not necessarily the domain representing the ISP organization.
https://docs.adobe.com/content/help/en/analytics/components/dimensions/domain.html
2020-07-02T21:57:12
CC-MAIN-2020-29
1593655880243.25
[]
docs.adobe.com
Catel.Properties

Type | Description
https://docs.catelproject.com/5.0/reference/catel.mvvm/catel/properties/
2020-07-02T21:01:33
CC-MAIN-2020-29
1593655880243.25
[]
docs.catelproject.com
GeoCoordinateWatcher tips, part 1

A few tips for those using GeoCoordinateWatcher in Windows Phone 7.

The data:
- GeoCoordinateWatcher events fire on the UI thread. Yes, StatusChanged and PositionChanged fire on the UI thread. This is not due to thread affinity or anything similar. Even if I start the GeoCoordinateWatcher in a background thread, it still fires events on the UI thread. They wanted to make it easy for developers.
- GeoCoordinateWatcher PositionChanged will fire as often as every second if you do not give it a MovementThreshold. Yes, you can not move at all and it is still firing (on your UI thread). Note, that is what I see on my phone; I do know the phone is optimized for battery life, so maybe it throttles on low battery, but when fully charged I see a ~1 Hz rate.
- Getting a GPS fix out of GeoCoordinateWatcher can sometimes take a second or two. GPS is never a trivial operation (if you have ever turned it on in your car, I am sure you would know). This can be a pain if you have an app that needs a fix in milliseconds. For example, if you are in search and you want to do a local search, you want it to have location immediately; you don't want to wait a second.

The Tips:
- Avoid doing a lot of work in the GeoCoordinateWatcher events. They are on the UI thread. Avoid as much work as you can (send it to a background thread if needed).
- Absolutely set a MovementThreshold on the GeoCoordinateWatcher. 20 meters is probably the lowest you should use. I most often use 250 m. Most of the services I use (Twitter, Facebook, search, etc.) are doing searches in a radius greater than a mile, so 250 m is a great threshold for these services.
- If you do set MovementThreshold, you should know that GeoCoordinateWatcher fires events in a weird order, and the order can trip you up when you use MovementThreshold.
- GeoCoordinateWatcher has a StatusChanged event, which you would expect fires when the GCW is ready. My experience is that when subscribing to GCW I see this sequence:
  - StatusChanged to Initializing
  - PositionChanged -- with a valid location
  - StatusChanged to Ready
- If you have a high threshold, PositionChanged might not fire again... so if you were a cautious developer and checked that Status is Ready before accepting the location, you might not get a PositionChanged.
- The answer (from the devs) is that it is fair to assume that if PositionChanged fires, the status is ready. It is OK to not check status in this event.
- If you read above, when there is no threshold it fires every second. I am told the threshold does affect the sampling rate (we are preserving battery as much as we can), but I am also thinking they are sampling more often than most of my apps need. I am not writing something that requires a position every few seconds, so what I do for most of my apps is Start() and Stop() the GCW every 2 minutes (or longer, depending on the app). Start and Stop are not expensive operations. Just have your GCW wrapped by a singleton, and start it and stop it often. Avoid calling Dispose() on GCW. I have very rarely (when stress testing) seen a race condition on Dispose() that could crash my app. Again, it happens rarely, but I still don't like the chances; since it is a singleton, and you can start and stop it, I don't see a need to Dispose().

That should be all you need to really optimize your location-aware apps. Don't forget to prompt the user before you use their location.

Feel free to beat me to the next part... you should know what that is. Happy Windows Phone coding!
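To make the singleton + MovementThreshold + Start()/Stop() advice concrete, here is a rough sketch (the wrapper class and its member names are mine, not from the post; GeoCoordinateWatcher itself is the System.Device.Location API discussed above):

using System.Device.Location;

// Hypothetical singleton wrapper following the tips above.
public sealed class LocationService
{
    public static readonly LocationService Instance = new LocationService();

    private readonly GeoCoordinateWatcher _watcher;

    private LocationService()
    {
        _watcher = new GeoCoordinateWatcher(GeoPositionAccuracy.Default)
        {
            // ~250 m keeps PositionChanged from firing every second.
            MovementThreshold = 250
        };

        _watcher.PositionChanged += (sender, e) =>
        {
            // This fires on the UI thread -- keep the handler light and
            // push any heavy work to a background thread.
            GeoCoordinate location = e.Position.Location;
            // ... hand 'location' off to the rest of the app ...
        };
    }

    // Start/Stop are cheap; call them periodically instead of Dispose().
    public void Start() { _watcher.Start(); }
    public void Stop()  { _watcher.Stop(); }
}

Calling Instance.Start() when a page needs location and Instance.Stop() a couple of minutes later keeps the GPS from running continuously.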
https://docs.microsoft.com/en-us/archive/blogs/jaimer/geocoordinatewatcher-tips-part1
2020-07-02T22:46:35
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
To reference a variable in this manner, you use the ${var_name} notation: a dollar sign with the variable's name wrapped in curly braces.

Static Variables

Static variables represent values that remain constant during the execution of a tool. This includes:
- Environment variables: values that come from the environment that is currently enabled for the active suite.
- IDE variables: Eclipse variables, such as project_loc, as well as Parasoft variables, such as test_suite_loc (test suite location).

For example:

${project_loc:MyProject}/DataSource/${soa_env:CVS_DIR}/my_csv_file.csv
${test_suite_loc}/../${soa_env:XLS_DIR}/my_excel_file.xls

Dynamic Variables

Dynamic variables represent values that may change during tool execution—for example, data source columns, data bank columns, and suite-level variables. Data source columns depend on the current data source row.

Additional Information
https://docs.parasoft.com/display/SOAVIRT9106CTP312/Parameterizing+Tools+with+Variables+1
2020-07-02T22:47:03
CC-MAIN-2020-29
1593655880243.25
[]
docs.parasoft.com
Set maximum upload size for avatars and cover images.

// Set the avatar image max size
define( 'WPUM_MAX_AVATAR_SIZE', 10240 );

// Set the cover image max size
define( 'WPUM_MAX_COVER_SIZE', 5242880 );

The above example will set the maximum file size for avatars to 10kb and cover images to 5MB. The WPUM_MAX_AVATAR_SIZE and WPUM_MAX_COVER_SIZE constants only accept bytes as a value.

Size examples:
50kb = 51200
100kb = 102400
1mb = 1048576

You can use a website like this to help you calculate any size you want.
https://docs.wpusermanager.com/article/146-set-maximum-upload-size-for-avatars
2020-07-02T22:34:55
CC-MAIN-2020-29
1593655880243.25
[]
docs.wpusermanager.com
This topic presents procedures to apply feature packs or fix packs to an App Visibility agent for Java version 10.7. You can apply the feature pack or fix pack to each agent using the interactive mode or the silent procedure.

After you apply the feature pack or fix pack to an agent, it runs using the same log files and configuration, and the same agent policy and recording configuration apply.

This topic contains the following sections:

Download the following files from the Electronic Product Distribution site to a temporary directory on the agent computer:

You must have execute privileges on the scripts and write access to the location.

At the prompt, enter the full path to the existing App Visibility agent installation directory, which must include the ADOPsInstall directory. You can also use a relative path. If the path includes a space, enclose the path in quotation marks.

Examples:
(Windows) "c:\BMC Software\ADOPs\ADOPsInstall"
(Linux) /opt/ADOPs/ADOPsInstall

To start the upgrade, enter yes.

When the message App Visibility Agent for Java upgraded successfully is displayed, the feature pack or fix pack is successful and you can close the command shell. Otherwise, see Verifying the App Visibility agent for Java upgrade.

Start the JVM process.

From the directory with the agent installation files, run the following command:
(Windows) adops-agent-upgrade.bat -s -d <installationDirectory>
(Linux) ./adops-agent-upgrade.sh -s -d <installationDirectory>

The silent installation uses the following parameters:

Example:
(Windows) adops-agent-upgrade.bat -s -d "c:\BMC Software\ADOPs\ADOPsInstall"
(Linux) ./adops-agent-upgrade.sh -s -d /opt/ADOPs/ADOPsInstall

See Silent command options (below) for additional options. Applying the feature pack or fix pack silently requires only the -s (silent) and -d (installation directory) options. Additional options are listed in the following table.

You can verify that applying the feature pack or fix pack to the App Visibility agent for Java was successful, and that the agent for Java was invoked, by reviewing the log files created and updated during the process. For assistance with any issues that you cannot solve, contact Customer Support. Look for [ERROR] messages for issues that you can address.

Applying the feature pack or fix pack to the App Visibility agent does not change the agent policy files. Agent policy files are changed with the App Visibility server, and they contain the new features for the agents. The agent for Java continues to use the same agent policy file as before you applied the feature pack or fix pack.

If you have a single App Visibility agent installation directory that is used by multiple JVM processes, give the App Visibility agents unique names.

After you update the agent, perform the following procedures:
https://docs.bmc.com/docs/exportword?pageId=772582956
2020-07-02T23:23:47
CC-MAIN-2020-29
1593655880243.25
[]
docs.bmc.com
To add or modify your taxes, go to SETUP | SETTINGS | TAX CONFIGURATION. In this section, you can modify your primary currency (the currency used across the system), add new taxes, and edit the current tax settings.

There are two main taxes, the primary and the secondary tax. The primary and secondary taxes can be enabled to automatically calculate tax on bookings and room rent posted to guest folios. You can enable one or both of the primary and secondary taxes. See the instructions below. To add more taxes, see Add Additional Taxes.

To change the name of your taxes, go to Custom System Labeling.

To change the Tax Settings, click Edit. Make any changes and click Save.

To change the Tax Rate, go to the next step.
https://docs.bookingcenter.com/pages/viewpage.action?pageId=6652759
2020-07-02T21:33:20
CC-MAIN-2020-29
1593655880243.25
[]
docs.bookingcenter.com
Running the example CorDapp

Building the example CorDapp

Running the deployNodes Gradle task builds the nodes under workflows-kotlin/build/nodes; the build output includes, for example, corda-finance-workflows-4.2.jar. For each node, the runnodes script creates a node tab/window showing the Corda banner ("Top tip: never say "oops", instead always say "Ah, Interesting!"), the version line "--- Corda Open Source corda-4.2", and the loaded CorDapps: cordapp-example-0.1, corda-core-corda-4.2, and the finance workflows CorDapp.

To ensure that all the nodes are running, you can query the 'status' end-point located at http://localhost:[port]/api/status (e.g. for PartyA).

Running the example CorDapp from IntelliJ

Select the Run Example CorDapp - Kotlin run configuration. You can then call the /api/example/create-iou endpoint directly, or use the web form served from the home directory.

To create an IOU between PartyA and PartyB, run the following command from the command line:

curl -X PUT ''

To create an IOU between PartyA and PartyB via the web form, navigate to the home directory for the node.

Flows:
net.corda.finance.flows.CashExitFlow
net.corda.finance.flows.CashIssueAndPaymentFlow
net.corda.finance.flows.CashIssueFlow
net.corda.finance.flows.CashPaymentFlow
net.corda.finance.internal.CashConfigDataFlow

After building the nodes (workflows-kotlin, workflows-kotlin/build/nodes), move the node folders to their individual machines (e.g. ...).

Testing your CorDapp

Corda provides several frameworks for writing unit and integration tests for CorDapps.

Contract tests: You can run the CorDapp's contract tests by running the Run Contract Tests - Kotlin run configuration.

Flow tests: You can run the CorDapp's flow tests by running the Run Flow Tests - Kotlin run configuration.

Integration tests: You can run the CorDapp's integration tests by running the Run Integration Tests - Kotlin run configuration. You will also need to specify -javaagent:lib/quasar.jar and set the run directory to the project root directory for each test. Add the following to your build.gradle file - ideally to a build.gradle that ...

Debugging your CorDapp

See Debugging a CorDapp.
https://docs.corda.net/docs/corda-enterprise/4.2/tutorial-cordapp.html
2020-07-02T23:38:29
CC-MAIN-2020-29
1593655880243.25
[]
docs.corda.net
Access on-premises resources from an Azure AD-joined device in Microsoft 365 Business Premium

This article applies to Microsoft 365 Business Premium.

Any Windows 10 device that is Azure Active Directory joined has access to all cloud-based resources, such as your Microsoft 365 apps, and can be protected by Microsoft 365 Business Premium. You can also allow access to on-premises resources like line of business (LOB) apps, file shares, and printers. To allow access, use Azure AD Connect to synchronize your on-premises Active Directory with Azure Active Directory.

To learn more, see Introduction to device management in Azure Active Directory. The steps are also summarized in the following sections.

Important: This procedure is only applicable to OAuth and NTLM. Kerberos is not supported.

Considerations when you join Windows devices to Azure AD

If the Windows device that you Azure AD-joined was previously domain-joined or in a workgroup, consider the following limitations:

- When a device joins Azure AD, it creates a new user without referencing an existing profile. Profiles must be manually migrated. A user profile contains information like favorites, local files, browser settings, and Start menu settings.
- Users won't be able to authenticate to applications that depend on Active Directory authentication. Evaluate the legacy app and consider updating to an app that uses modern Auth, if possible.
- Active Directory printer discovery won't work. You can provide direct printer paths for all users or use Hybrid Cloud Print.
https://docs.microsoft.com/en-us/microsoft-365/business/access-resources?redirectSourcePath=%252fzh-hk%252farticle%252f%2525E5%2525AD%252598%2525E5%25258F%252596%2525E5%252585%2525A7%2525E9%252583%2525A8%2525E9%252583%2525A8%2525E7%2525BD%2525B2%2525E8%2525B3%252587%2525E6%2525BA%252590-365-%2525E4%2525BC%252581%2525E6%2525A5%2525AD%2525E7%252589%252588-microsoft-azure-ad-%2525E5%25258A%2525A0%2525E5%252585%2525A5%2525E8%2525A3%25259D%2525E7%2525BD%2525AE-b0f4d010-9fd1-44d0-9d20-fabad2cdbab5&view=o365-worldwide
2020-07-02T22:40:59
CC-MAIN-2020-29
1593655880243.25
[]
docs.microsoft.com
Configure Syslog Event Forwarding

To forward event data to a Syslog server:

- From the Settings module of the Sysdig Secure UI, navigate to the Events Forwarding tab.
- Click the Add Integration button.
- Select Syslog from the drop-down menu.
- Toggle the Enabled switch as necessary. By default, the new integration is enabled.
- UDP/TCP: Define the transport layer protocol, UDP or TCP. Use TCP for security incidents, as it is far more reliable than UDP for handling network congestion and preventing packet loss. NOTE: RFC 5425 (TLS) only supports TCP.
- Data to Send: Currently, Sysdig only supports sending policy events (events from Sysdig Secure).
- Allow insecure connections: Toggle on if you want to allow insecure connections (i.e. an invalid or self-signed certificate on the receiving side).
- Click the Save button to save the integration.
https://docs.sysdig.com/en/forwarding-to-syslog.html
2020-07-02T22:53:36
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
Go

Golang expvar is the standard interface designed to instrument and expose custom metrics from a Go program via HTTP. In addition to custom metrics, it also exports some metrics out-of-the-box, such as command line arguments, allocation stats, heap stats, and garbage collection metrics.

This page describes the default configuration settings, how to edit the configuration to collect additional information, the metrics available for integration, and a sample result in the Sysdig Monitor UI.

Go_expvar Setup

You will need to create a custom entry in the user settings config file for your Go application, due to the difficulty in determining if an application is written in Go by looking at process names or arguments. Be sure your app has expvars enabled, which means importing the expvar module and having an HTTP server started from inside your app, as follows:

import (
    ...
    "net/http"
    "expvar"
    ...
)

// If your application has no http server running for the DefaultServeMux,
// you'll have to have an http server running for expvar to use, for example
// by adding the following to your init function
func init() {
    go http.ListenAndServe(":8080", nil)
}

// You can also expose variables that are specific to your application
// See for more information
var (
    exp_points_processed = expvar.NewInt("points_processed")
)

func processPoints(p RawPoints) {
    points_processed, err := parsePoints(p)
    exp_points_processed.Add(points_processed)
    ...
}

See also the following blog entry: How to instrument Go code with custom expvar metrics.

Sysdig Agent Configuration

Review how to Edit dragent.yaml to Integrate or Modify Application Checks.

Default Configuration

No default configuration for Go is provided in the Sysdig agent dragent.default.yaml file. You must edit the agent config file as described in Example 1.

Warning: Remember! Never edit dragent.default.yaml directly; always edit only dragent.yaml.

Example

Add the following code sample to dragent.yaml to collect Go metrics.

app_checks:
  - name: go-expvar
    check_module: go_expvar
    pattern:
      comm: go-expvar
    conf:
      expvar_url: "" # automatically match url using the listening port
      # Add custom metrics if you want
      metrics:
        - path: system.numberOfSeconds
          type: gauge # gauge or rate
          alias: go_expvar.system.numberOfSeconds
        - path: system.lastLoad
          type: gauge
          alias: go_expvar.system.lastLoad
        - path: system.numberOfLoginsPerUser/.* # You can use / to get inside the map and use .* to match any record inside
          type: gauge
        - path: system.allLoad/.*
          type: gauge

Metrics Available

See Go Metrics.
https://docs.sysdig.com/en/go.html
2020-07-02T23:02:32
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
CorDapp samples There are two distinct sets of samples provided with Corda, one introducing new developers to how to write CorDapps, and more complex worked examples of how solutions to a number of common designs could be implemented in a CorDapp. The former can be found on the Corda website. In particular, new developers should start with the example CorDapp. The advanced samples are contained within the samples/ folder of the Corda repository. The most generally useful of these samples are: - The trader-demo, which shows a delivery-vs-payment atomic swap of commercial paper for cash - The attachment-demo, which demonstrates uploading attachments to nodes - The bank-of-corda-demo, which shows a node acting as an issuer of assets (the Bank of Corda) while remote client applications request issuance of some cash on behalf of a node called Big Corporation Documentation on running the samples can be found inside the sample directories themselves, in the README.md file.
https://docs.corda.net/docs/corda-enterprise/4.2/building-a-cordapp-samples.html
2020-07-02T21:40:26
CC-MAIN-2020-29
1593655880243.25
[]
docs.corda.net
Frequently Asked Questions

My schema specifies format validation. Why do invalid instances seem valid?

The format validator can be a bit of a stumbling block for new users working with JSON Schema. In a schema such as:

{"type": "string", "format": "date"}

JSON Schema specifications have historically differentiated between the format validator and other validators. In particular, the format validator was specified to be informational as much as it may be used for validation. In other words, for many use cases, schema authors may wish to use values for the format validator but have no expectation they be validated alongside other required assertions in a schema.

Of course this does not represent all or even most use cases – many schema authors do wish to assert that instances conform fully, even to the specific format mentioned.

In drafts prior to draft2019-09, the decision on whether to automatically enable format validation was left up to validation implementations such as this one. This library made the choice to leave it off by default, for two reasons:

- for forward compatibility and implementation complexity reasons – if format validation were on by default, and a future draft of JSON Schema introduced a hard-to-implement format, either the implementation of that format would block releases of this library until it were implemented, or the behavior surrounding format would need to be even more complex than simply defaulting to be on. It therefore was safer to start with it off, and defend against the expectation that a given format would always automatically work.
- given that a common use of JSON Schema is for portability across languages (and therefore implementations of JSON Schema), it seemed better that users be aware of this point regarding format validation, and therefore remember to check any other implementations they were using to ensure they too were explicitly enabled for format validation.

As of draft2019-09 however, the opt-out by default behavior mentioned here is now required for all validators. Difficult as this may sound for new users, at this point it at least means they should expect the same behavior that has always been implemented here, across any other implementation they encounter.

See also:
- Draft 2019-09's release notes on format, for upstream details on the behavior of format and how it has changed in draft2019-09
- for details on how to enable format validation
- the object which implements format validation

Why doesn't my schema's default property set the default on my instance?

The basic answer is that the specification does not require that default actually do anything.

For an inkling as to why it doesn't actually do anything, consider that none of the other validators modify the instance either. More importantly, having default modify the instance can produce quite peculiar things. It's perfectly valid (and perhaps even useful) to have a default that is not valid under the schema it lives in! So an instance modified by the default would pass validation the first time, but fail the second!

Still, filling in defaults is a thing that is useful. jsonschema allows you to define your own validator classes and callables, so you can easily create a jsonschema.IValidator that does do default setting. Here's some code to get you started. (In this code, we add the default properties to each object before the properties are validated, so the default values themselves will need to be valid under the schema.)
from jsonschema import Draft7Validator, validators


def extend_with_default(validator_class):
    validate_properties = validator_class.VALIDATORS["properties"]

    def set_defaults(validator, properties, instance, schema):
        for property, subschema in properties.items():
            if "default" in subschema:
                instance.setdefault(property, subschema["default"])

        for error in validate_properties(
            validator, properties, instance, schema,
        ):
            yield error

    return validators.extend(
        validator_class, {"properties" : set_defaults},
    )


DefaultValidatingDraft7Validator = extend_with_default(Draft7Validator)


# Example usage:
obj = {}
schema = {'properties': {'foo': {'default': 'bar'}}}
# Note jsonschema.validate(obj, schema, cls=DefaultValidatingDraft7Validator)
# will not work because the metaschema contains `default` directives.
DefaultValidatingDraft7Validator(schema).validate(obj)
assert obj == {'foo': 'bar'}

See the above-linked document for more info on how this works, but basically, it just extends the properties validator on a jsonschema.Draft7Validator to then go ahead and update all the defaults.

Note: If you're interested in a more interesting solution to a larger class of these types of transformations, keep an eye on Seep, which is an experimental data transformation and extraction library written on top of jsonschema.

Hint: The above code can provide default values for an entire object and all of its properties, but only if your schema provides a default value for the object itself, like so:

schema = {
    "type": "object",
    "properties": {
        "outer-object": {
            "type": "object",
            "properties" : {
                "inner-object": {
                    "type": "string",
                    "default": "INNER-DEFAULT"
                }
            },
            "default": {}  # <-- MUST PROVIDE DEFAULT OBJECT
        }
    }
}

obj = {}
DefaultValidatingDraft7Validator(schema).validate(obj)
assert obj == {'outer-object': {'inner-object': 'INNER-DEFAULT'}}

…but if you don't provide a default value for your object, then it won't be instantiated at all, much less populated with default properties.

del schema["properties"]["outer-object"]["default"]
obj2 = {}
DefaultValidatingDraft7Validator(schema).validate(obj2)
assert obj2 == {}  # whoops

How do jsonschema version numbers work?

jsonschema tries to follow the Semantic Versioning specification. This means broadly that no backwards-incompatible changes should be made in minor releases (and certainly not in dot releases). The full picture requires defining what constitutes a backwards-incompatible change. The following are simple examples of things considered public API, and therefore should not be changed without bumping a major version number:

- module names and contents, when not marked private by Python convention (a single leading underscore)
- function and object signature (parameter order and name)

The following are not considered public API and may change without notice:

- the exact wording and contents of error messages; typical reasons to rely on this seem to involve downstream tests in packages using jsonschema. These use cases are encouraged to use the extensive introspection provided in jsonschema.exceptions.ValidationErrors instead to make meaningful assertions about what failed rather than relying on how what failed is explained to a human.
- the order in which validation errors are returned or raised
- the contents of the jsonschema.tests package
- the contents of the jsonschema.benchmarks package
- the jsonschema.compat module, which is for internal compatibility use
- the specific non-zero error codes presented by the command line interface
- the exact representation of errors presented by the command line interface, other than that errors represented by the plain outputter will be reported one per line
- anything marked private

With the exception of the last two of those, flippant changes are avoided, but changes can and will be made if there is improvement to be had. Feel free to open an issue ticket if there is a specific issue or question worth raising.
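Returning to the first question above (format validation), a minimal sketch of opting in looks like the following; note that whether a particular format is actually checked can also depend on optional dependencies being installed:

from jsonschema import Draft7Validator, FormatChecker
from jsonschema.exceptions import ValidationError

schema = {"type": "string", "format": "date"}

# Without a format checker, format is informational only and this passes.
Draft7Validator(schema).validate("not-a-date")

# With a FormatChecker attached, the same instance is rejected.
checking_validator = Draft7Validator(schema, format_checker=FormatChecker())
try:
    checking_validator.validate("not-a-date")
except ValidationError as error:
    print(error.message)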
https://python-jsonschema.readthedocs.io/en/latest/faq/
2020-07-02T22:40:19
CC-MAIN-2020-29
1593655880243.25
[]
python-jsonschema.readthedocs.io
Benchmarks

Note: Earlier versions of Sysdig Secure referred to this module as Compliance.

The Center for Internet Security (CIS) issues standardized benchmarks, guidelines, and best practices for securing IT systems and environments. Use the Benchmarks module of Sysdig Secure to run Kubernetes and Docker CIS benchmarks against your environment.

How Sysdig Benchmark Tests Work

CIS benchmarks are best practices for the secure configuration of a target system. Sysdig has implemented these standardized controls for different versions of Kubernetes and Docker.

Setting Up a Task

Using a new Task, configure the type of test, the environment scope, and the scheduled frequency of the compliance check. You can also filter how you'd like to view the Results report. See also Configure Benchmark Tasks.

Running a Test

Once a task is configured, Sysdig Secure will:
- Kick off a check in the agent to analyze your system configuration against CIS best practices
- Store the results of this task

Reviewing Report Results

When a task has run, it is listed on the Results page and can be viewed as a report.

Reviewing Benchmark Metrics

Consolidated Benchmark metrics can also be viewed in Sysdig Monitor, from default or customized Benchmark Dashboards.

Understanding Report Filters

Customize your view of the test report, e.g., to see only high-priority results or the results from selected controls. (The entire test suite will still run; only the report contents are filtered.)

Setting up a Report filter is simple. Under Report on the Benchmark Task page:
- Choose Custom Selection
- Choose a Benchmark version and apply a Profile filter, and/or select/deselect individual controls.

Use the information in this section to understand the effect of your selections.

About Custom Selections

Filtering rules apply to the report, not the test itself.

Filtering Rules
- Filtering the Report view does not change the scope of the test run. The full test will run but the result view will be edited.
- If you apply a filter to an existing task that has already run, the filter view will be retroactively applied to the historical reports. If you deselect the filter, the full results will again be visible.

About Benchmark Versions

CIS issues benchmark versions that correspond to (but are not identical with) the Kubernetes or Docker software version. See the mapping tables below.

Version Rules
- If you do not customize/filter your report, the Sysdig agent will auto-detect your environment version and will run the corresponding version of the benchmark controls.
- If you specify a benchmark version, you can then apply a report filter. If the test version doesn't match the environment version, the filter will be ignored and all the tests will be displayed.

Kubernetes Version Mapping

Sysdig also supports Kubernetes benchmark tests for the following distributions:
- EKS: Amazon Elastic Container Service for Kubernetes, default cluster version
- GKE: Google Kubernetes Engine (GKE), default cluster version
- IKS: IBM Kubernetes Service
- OpenShift versions 3.10, 3.11
- Rancher

Docker Version Mapping

About Profile Levels

CIS defines two levels of tests, as described below. In Sysdig Secure, full benchmarks are always run, but you can filter your view of the report to see only top-priority (Level 1 Profile) or only the secondary (Level 2 Profile) results.

Level 1 Profile: Limited to major issues

Considered a base recommendation that can be implemented fairly promptly and is designed to not have an extensive performance impact. The intent of the Level 1 profile benchmark is to lower the attack surface of your organization while keeping machines usable and not hindering business functionality.

Level 2 Profile: Extensive checks, more complete

Considered to be "defense in depth" and intended for environments where security is paramount. The recommendations associated with the Level 2 profile can have an adverse effect on your organization if not implemented appropriately or without due care.

Note: In the Sysdig Secure interface, select:
- All to view an in-depth report that includes both Level 1 and Level 2 controls.
- Level 1 to view a report that includes only high-priority controls.
- Level 2 to view a report that includes only the lower-priority controls that are excluded from Level 1.

See also: Configure Benchmark Tasks.
https://docs.sysdig.com/en/benchmarks.html
2020-07-02T22:56:27
CC-MAIN-2020-29
1593655880243.25
[]
docs.sysdig.com
Create a quick report summary of your data

Important: To create a report, you must first create a dashboard on which to add it.

Step 1: To create a "Quick report summary", click "Dashboards".

Step 2: Select the "Analysis" option to view all the reports.

Step 3: Select the "Quick report summary" option from the analysis options.

Note: You can see the default quick report in your work area. Drag it to wherever you want it to be placed. Also, you can resize it by dragging the bottom right corner.

Step 4: Click the "Edit" menu on the report header to edit your report's data, appearance and social settings.

Step 5: To link the data to the report, select the desired file from the left file tree and select columns from the data.

Step 6: In "Settings", click the reporting properties which you want to analyse in your report. There are three groups of properties:

- Key Values
  - Maximum: The maximum value in the collection.
  - Minimum: The minimum value in the collection.
  - Percentile: The value below which a given percentage of observations in a group of observations fall. For example, the 20th percentile is the value (or score) below which 20% of the observations may be found.
- Central tendency
  - Arithmetic mean: The sum of all values divided by the number of values. This is usually referred to as the 'mean' or 'average'.
  - Geometric mean: The nth root of the product of n values. The geometric mean multiplies all the values together and takes the nth root of the product. For example, the geometric mean of 2 and 4 is (2×4)^(1/2) ≈ 2.83. The application of the geometric mean is to be able to normalise the range of values being averaged, particularly if values from different ranges are being included in the mean score. For example, consider calculating an overall geometric mean of a person's biometrics by combining height (mm), weight (kg) and foot size (cm). To get an integrated score with the geometric mean, the relative size of each measurement range is normalised, such that the same percentage increase in any of the underlying scores has the same effect on the resulting score. For example, a 20% increase in height will have the same effect as a 20% increase in weight.
  - Harmonic mean: The reciprocal of the arithmetic mean of the reciprocals of the values. The harmonic mean is influenced by lower values, whereas the arithmetic mean is heavily influenced by large values (a single large value will heavily skew the arithmetic mean and a single very low value will influence the harmonic mean).
  - Median: The middle value when all the values are sorted. This is also the 50th percentile.
  - Mode: The value that appears most often in the collection.
- Distribution
  - Standard deviation: A measure of the amount of variation or dispersion of a data collection.
  - CV: The ratio of the standard deviation to the arithmetic mean. A high coefficient of variation indicates a large standard deviation (i.e. variation) relative to the size of the mean.
  - Mean/Median: A simple measure of skewness. If the data is symmetrically distributed, then the mean and median will be identical. This simple measure indicates the direction of skew (<1 - to the left, >1 to the right).
  - Pearson's skew: The mean/median/standard deviation.

Step 7: After selecting the properties, format your report with the "Formatting" tab, which allows:
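For reference, the central tendency and spread measures described above follow the standard formulas (a short recap in notation, not something taken from the Truii interface):

\bar{x} = \frac{1}{n}\sum_{i=1}^{n} x_i
\qquad
G = \left(\prod_{i=1}^{n} x_i\right)^{1/n}
\qquad
H = \frac{n}{\sum_{i=1}^{n} \frac{1}{x_i}}
\qquad
\mathrm{CV} = \frac{\sigma}{\bar{x}}

Here \bar{x} is the arithmetic mean, G the geometric mean, H the harmonic mean, \sigma the standard deviation, and CV the coefficient of variation reported by the quick summary.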
https://docs.truii.com/analyse-data/quick-summary-report/
2020-07-02T21:02:06
CC-MAIN-2020-29
1593655880243.25
[array(['https://docs.truii.com/wp-content/uploads/2017/12/quick-report-960w-1.jpg', None], dtype=object) array(['https://docs.truii.com/wp-content/uploads/2017/12/quick-data-960w.jpg', None], dtype=object) array(['https://docs.truii.com/wp-content/uploads/2017/12/quick-report-settings-960w.jpg', None], dtype=object) array(['https://docs.truii.com/wp-content/uploads/2017/12/quick-report-format-960w.jpg', None], dtype=object) array(['https://docs.truii.com/wp-content/uploads/2017/12/quickreport-social-960w.jpg', None], dtype=object) ]
docs.truii.com
CursorLocation Property (ADO) Indicates the location of the cursor service. Settings And Return Values Sets or returns a Long value that can be set to one of the CursorLocationEnum values. Remarks This property allows you to choose between various cursor libraries accessible to the provider. Usually, you can choose between using a client-side cursor library or one that is located on the server. This property setting affects connections established only after the property has been set. Changing the CursorLocation property has no effect on existing connections. Cursors returned by the Execute method inherit this setting. Recordset objects will automatically inherit this setting from their associated connections. This property is read/write on a Connection or a closed Recordset, and read-only on an open Recordset. Note Remote Data Service Usage When used on a client-side Recordset or Connection object, the CursorLocation property can only be set to adUseClient.
https://docs.microsoft.com/en-us/sql/ado/reference/ado-api/cursorlocation-property-ado
2018-02-17T21:50:20
CC-MAIN-2018-09
1518891807825.38
[]
docs.microsoft.com
This document is intended for data czars, researchers, and administrative teams at edX partner institutions who use the edX data exports to gain insight into their courses and students. The edX Research Guide is created using RST files and Sphinx. You, as a member of the user community, can help update and revise this documentation project on GitHub, using the edx-documentation repository at The edX documentation team welcomes contributions from Open edX community members. You can find guidelines for how to contribute to edX Documentation in the GitHub edx/edx-documentation repository.
http://edx.readthedocs.io/projects/devdata/en/latest/
2018-02-17T21:17:59
CC-MAIN-2018-09
1518891807825.38
[]
edx.readthedocs.io
BlackBerry ID

A BlackBerry ID is a single sign-on identity service that gives you convenient access to multiple BlackBerry products, sites, services, and apps. After you create a BlackBerry ID, you can use a single email address and password to log in to any BlackBerry product that supports BlackBerry ID. A BlackBerry ID is designed to protect your account information from unauthorized access. When you create or log in to your BlackBerry ID, encryption helps protect your information.
http://docs.blackberry.com/en/smartphone_users/deliverables/39933/1457390.jsp
2015-08-28T05:30:10
CC-MAIN-2015-35
1440644060413.1
[]
docs.blackberry.com
JCacheStorageWincache::clean

Description

Clean cache for a group given a mode.

public function clean ( $group, $mode = null )

- Returns
- Defined on line 126 of libraries/joomla/cache/storage/wincache.php

See also
- JCacheStorageWincache::clean source code on BitBucket
- Class JCacheStorageWincache
- Subpackage Cache
- Other versions of JCacheStorageWincache::clean
https://docs.joomla.org/index.php?title=API17:JCacheStorageWincache::clean&oldid=56065
2015-08-28T06:25:06
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
When shipping a device internationally, you must include the customs manifest file option with its required subfields in your manifest file. AWS Import/Export uses these values to validate your inbound shipment and prepare your outbound customs paperwork. If you exclude these fields from your manifest and the country that we are shipping your storage device to is outside the country of the data-loading facility, your job request will fail.

Note: This requirement does not apply to shipments in the European Union. When shipping your device between the EU (Ireland) region data-loading facility and a European Union member nation, you do not need to include the customs options. To access Amazon S3 buckets that are in the EU (Ireland) region, the shipping device must originate from and be shipped back to a location in the European Union.

All of the options in the following table are subfields of the customs option, for example:

customs:
  dataDescription: This device contains medical test results.
  encryptedData: yes
  encryptionClassification: 5D992
  exportCertifierName: John Doe
  requiresExportLicense: yes
  deviceValue: 250.00
  deviceCountryOfOrigin: Ghana
  deviceType: externalStorageDevice

Example Import to Amazon S3 Manifest with Customs Option

The following example is the manifest file content for an import job to Amazon S3, including the customs option.

manifestVersion: 2.0
deviceId: ABCDE
eraseDevice: yes
bucket: my-amz-bucket
acl: authenticated-read
cacheControl: max-age=3600
contentDisposition: attachment
contentLanguage: en
contentTypes:
  csv: application/vnd.ms-excel
customs:
  dataDescription: This device contains medical test results.
  encryptedData: yes
  encryptionClassification: 5D992
  exportCertifierName: John Doe
  requiresExportLicense: yes
  deviceValue: 250.00
  deviceCountryOfOrigin: Brazil
  deviceType: externalStorageDevice
diskTimestampMetadataKey: disk-timestamp
generator: AWS Import Export Docs
ignore:
  - \.psd$
  - \.PSD$
logPrefix: logs
logBucket: iemanifest-bucket
notificationEmail: [email protected];[email protected]
prefix: imported/
setContentEncodingForGzFiles: yes
staticMetadata:
  import-timestamp: Wed, 23 Feb 2011 01:55:59 GMT
  application: AppImportedData
returnAddress:
  name: John Doe
  company: Example Corp
  street1: UG 8 Golf Course Road
  city: Anycity
  stateOrProvince: Haryana
  postalCode: 122002
  phoneNumber: +91 431 555 1000
  country: India
serviceLevel: standard
http://docs.aws.amazon.com/AWSImportExport/latest/DG/ManifestFileRef_international.html
2015-08-28T05:05:01
CC-MAIN-2015-35
1440644060413.1
[]
docs.aws.amazon.com
Securing Joomla extensions

From Joomla! Documentation

Contents
- Make sure your software does not need register_globals
- 7 Check access privileges of users
- 8 How to achieve raw component output (for pictures, RSS-feeds etc.)
- 9 Various things to be aware of
- 10 Resources
  - 10.1 Secure your software against direct access
  - 10.2 Secure your software against remote file inclusion
  - 10.3 Secure your software against SQL injections
  - 10.4 Secure your software against XSS scripting
  - 10.5 Make sure your software does not need register_globals
  - 10.6 Check access privileges of users
  - 10.7 How to achieve raw component output (for pictures, RSS-feeds etc.)
  - 10.8 Various things to be aware of

Secure your software against remote file inclusion

... well on the web. Please take a look at the resources listed at the bottom of this post.

- Warning: mosGetParam (and quotes escaping) is not protecting against some injections for numbers. You must convert the variable to an int with intval($var) or (int) $var in order to prevent those injections.

Secure your software against XSS scripting

...

echo $GLOBALS['varname'];

You should rather use this:

global $varname;
echo $varname;

Resources

Secure your software against direct access
- No resources so far.

Secure your software against remote file inclusion
- For Joomla! 1.5

Secure your software against SQL injections

Secure your software against XSS scripting

Make sure your software does not need register_globals
- Make sure your level for error_reporting includes E_NOTICE

Check access privileges of users
- No resources so far.

How to achieve raw component output (for pictures, RSS-feeds etc.)
- No resources so far.
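The "Secure your software against direct access" section lists no resources above. A minimal sketch of the usual approach is the standard Joomla guard placed at the top of every PHP file in an extension:

<?php
// _JEXEC is only defined when the file is loaded through a Joomla entry
// point (such as index.php), so a direct HTTP request to this file stops
// here instead of executing the extension code outside the framework.
defined('_JEXEC') or die('Restricted access');

// ... the rest of the extension file ...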
https://docs.joomla.org/index.php?title=Securing_Joomla_extensions&diff=65319&oldid=29814
2015-08-28T05:53:35
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
JLanguage::getIgnoredSearchWordsCallback

From Joomla! Documentation

Description

Getter for the ignoredSearchWordsCallback function.

public function getIgnoredSearchWordsCallback ()

- Returns string|function: Function name or the actual function for PHP 5.3
- Defined on line 440 of libraries/joomla/language/language.php

See also
- JLanguage::getIgnoredSearchWordsCallback source code on BitBucket
- Class JLanguage
- Subpackage Language
- Other versions of JLanguage::getIgnoredSearchWordsCallback
https://docs.joomla.org/index.php?title=API17:JLanguage::getIgnoredSearchWordsCallback&direction=next&oldid=57236
2015-08-28T05:13:09
CC-MAIN-2015-35
1440644060413.1
[]
docs.joomla.org
BlackBerry basics shortcuts

Depending on the typing input language that you are using, some shortcuts might not be available.

- To move the cursor, slide your finger on the trackpad.
- To move back a screen, press the Escape key.
- To return to the Home screen, when you are not on a call, press the End key.
- To view more applications on the Home screen, press the Menu key.
- To lock the keyboard, on the Home screen, press and hold the asterisk (*) key. To unlock the keyboard, press the asterisk (*) key and the Send key.
- To switch between the active notification profile and the Vibrate notification profile, press and hold the Q key.
- To delete a highlighted item, press the Backspace/Delete key.
http://docs.blackberry.com/en/smartphone_users/deliverables/14928/BlackBerry_basics_shortcuts_full_keyboard_50_648474_11.jsp
2015-08-28T05:22:13
CC-MAIN-2015-35
1440644060413.1
[]
docs.blackberry.com