content
stringlengths
0
557k
url
stringlengths
16
1.78k
timestamp
timestamp[ms]
dump
stringlengths
9
15
segment
stringlengths
13
17
image_urls
stringlengths
2
55.5k
netloc
stringlengths
7
77
The ObjectBox data browser (object browser) allows you to view the entities and schema of your database inside a regular web browser, and download entities in JSON format. The object browser runs directly on your device or on your development machine. Behind the scenes this works by bundling a simple HTTP browser into ObjectBox when building your app. If triggered, it will then provide a basic web interface to the data and schema in your box store. We strongly recommend to use the object browser only for debug builds. Add the objectbox-android-objectbrowser library to your debug configuration, and the objectbox-android library to your release configuration. Make sure to also apply the “io.objectbox” plugin after the dependencies block: dependencies {debugImplementation "io.objectbox:objectbox-android-objectbrowser:$objectboxVersion"releaseImplementation "io.objectbox:objectbox-android:$objectboxVersion"}// apply the plugin after the dependencies blockapply plugin: 'io.objectbox' Otherwise the build will fail with a duplicate files error (like Duplicate files copied in APK lib/armeabi-v7a/libobjectbox.so) because the ObjectBox plugin will add the objectbox-android library again. The objectbox-android-objectbrowser artifact adds required permissions to your AndroidManifest.xml (since version 2.2.0). The permissions added are: <!-- Required to provide the web interface --><uses-permission android:<!-- Required to run keep-alive service when targeting API 28 or higher --><uses-permission android: If you only use the browser library for debug builds as recommended above, they will not be added to your release build. After creating your BoxStore, start the object browser using an AndroidObjectBrowser instance. Typically in the onCreate() method of your Application class: boxStore = MyObjectBox.builder().androidContext(this).build();if (BuildConfig.DEBUG) {boolean started = new AndroidObjectBrowser(boxStore).start(this);Log.i("ObjectBrowser", "Started: " + started);} When the app starts an object browser notification should appear. Tapping it will launch a service to keep the app alive and opens the data browser interface in the web browser on the device. To stop the service keeping your app alive, tap the ‘Stop’ button in the notification. To open the browser website on your development machine check the Logcat output when launching the app. It will print the port and the ADB command needed to forward the port to your machine: I/ObjectBrowser: ObjectBrowser started:: Command to forward ObjectBrowser to connected host: adb forward tcp:8090 tcp:8090 If available, port 8090 is used by default. So in most cases just run this command on your dev machine: adb forward tcp:8090 tcp:8090 Once the port is forwarded you can open a browser and go to. When viewing entities tap the download button at the very bottom. This will download entities formatted as JSON.
https://docs.objectbox.io/data-browser
2020-07-02T22:15:15
CC-MAIN-2020-29
1593655880243.25
[]
docs.objectbox.io
Red Hat OpenShift Container Engine is a new product offering from Red Hat that lets you use OpenShift Container Platform as a production platform for launching containers. You download and install OpenShift Container Engine in the same way as OpenShift Container Platform, but OpenShift Container Engine offers a subset of the features that OpenShift Container Platform does. Specifically, the similarities and difference between Red Hat OpenShift Container Engine and OpenShift Container Platform are the following: We will now offer a deeper explanation of the categories in order to more fully explain the feature entitlement of OpenShift Container Engine. OpenShift Container Engine offers full access to an enterprise ready Kubernetes. You will receive a packaged, easy to install Kubernetes that has undergone an extensive compatibility test matrix with many of the software elements you will find in your datacenter. Engine offers the same service level agreements, bug fixes, and CVE protection as OpenShift Container Platform. Engine includes a Red Hat Enterprise Linux Virtual Datacenter and Red Hat CoreOS entitlement that allows you to use an integrated Linux operating system with container runtime all from the same provider. OpenShift Container Engine is compatible with Windows Containers from Microsoft. OpenShift Container Engine comes with the same security options and default settings as the OpenShift Container Platform. Default security context constraints, pod security policies, best practice network and storage settings, service account configuration, SELinux integration, HAproxy edge routing configuration, and all other out of the box protections OpenShift Container Platform offers are available in OpenShift Container Engine. Engine offers full access to our integrated monitoring solution, based on Prometheus, that offer deep coverage and alerting of common Kubernetes’ issues. Red Hat entitles OpenShift Container Engine to the same installation and upgrade automations as OpenShift Container Platform. With an OpenShift Container Engine Subscription users receive support for all storage plugins found in OpenShift Container Platform. In terms of networking, OpenShift Container Engine offers full and supported access to the Kubernetes Container Network Interface (CNI) and therefore allows you to leverage any 3rd party SDN that supports OpenShift Container Platform. It also allows for the use of the included OVS to establish a flat, non-project (Kubernetes namespace) segmented network for the Kubernetes cluster. Engine allows customers to use Kubernetes Network Policy to create microsegmentation between deployed application services on the cluster. OpenShift Container Engine also grants full use of the route API object found in OpenShift including its sophisticated integration with the HAproxy edge routing layer. Core User Experience A user of the OpenShift Container Engine will be most interested in experiencing a native Kubernetes experience. They have full access to Kubernetes Operators, pod deployment strategies, Helm, and OpenShift templates. Engine users will leverage the oc and kubectl command lines. Engine also offers an administrator web based console that shows all aspects of their deployed container services and is more in line with a container as a service experience. OpenShift Container Engine grants access to the Operator Life Cycle Manager that helps cluster administrators and users control access to content on the cluster as well as life cycle operator enabled services that might be in use. With an Engine Subscription users receive access to both the Kubernetes namespace and the OpenShift Project API object that layers on top as well as both cluster level and tenant level Prometheus monitoring metrics and events. If you purchase an OpenShift Container Engine Subscription you receive access to the OpenShift Container Platform entitled content from the Red Hat Container Catalog and Red Hat Connect ISV marketplace. With an Engine Subscription you receive access to all maintained and curated content that the OpenShift eco-system offers either for free, such as the Red Hat Software Collections, or through additional purchases. The Kubernetes service broker, service catalog, and all OpenShift Container Platform service broker offerings are supported with an OpenShift Container Platform Engine Subscription. OpenShift Container Engine is compatible and supported with OpenShift Container Storage should users want to buy this Red Hat product addon. OpenShift Container Engine is compatible and supported with Red Hat Middleware product solutions should users want to buy these Red Hat product addons. OpenShift Container Engine is compatible and supported with future Red Hat Cloud Functions deliverables should users want to experience function based or serverless container services. OpenShift Container Engine is compatible and supported with Red Hat Quay should users want to buy this Red Hat product solution. CNV VM Virtualization Compatible OpenShift Container Engine is compatible and supported with Red Hat product offerings derived from the kubevirt.io open source project should users want to buy such solutions. OpenShift Container Engine can be used in multi-cluster deployment configuration to the same extent OpenShift Container Platform. Consolidated views of clusters and the use of Kubernetes technologies to offer an agnostic layer across public and on premises clouds is allowed with Engine. A OpenShift Container Engine Subscription does not come with support for cluster wide log aggregation solution or for ElasticSearch, Fluentd, Kibana based logging solution. Similarly the chargeback features found in OpenShift Container Platform and Red Hat Service Mesh capabilities derived from the open source Istio.io and kiali.io project that offers OpenTracing observability for containerized services on the platform are not supported. The networking solutions that come out of the box with OpenShift Container Platform are not supported with an OCPE subscription . OpenShift’s Kubernetes CNI plugin for automation of multi-tenant network segmentation between OpenShift projects is not entitled for use with Engine. OpenShift Container Platform offers more granular control of source IPaddress used by application services on the cluster. Those egress IPaddress controls are not entitled for use with Engine. OpenShift Container Platform offers ingress routing to on cluster services that use non-standard ports when no public cloud provider is in use via the VIP pods found in the OpenShift product. That ingress solution is not supported in Engine. Engine users are supported for the Kubernetes ingress control object which offers integrations with public cloud providers to solve this use case. Red Hat Service Mesh that is derived from the istio.io open source project is not supported in the OpenShift Container Engine offering. With an OpenShift Container Engine the following capabilities are not supported: developer experience utilities and tools, OpenShift’s pipeline feature that integrates a streamlined, Kubernetes enabled Jenkins experience in the user’s project space, the OpenShift Container Platform’s source to image feature that allows end users to easily deploy source code, dockerfiles, or container images across the cluster in a manner that automates the segmentation between gold standard container images and line of business code additions, while automating and ability for the cluster to autonomously update the deployed application should either layer change, build strategies, builder pods, or imagestreams for end user container deployments, or the odo developer command line or the developer persona in the OpenShift web console. The following table is a summary of the above explanations to help provide a more clarity around what features are supported for use in OpenShift Container Engine. OpenShift Container Engine is a subscription offering that provides OpenShift Container Platform but with a limited set of supported features at a lower list price. OpenShift Container Engine and OpenShift Container Platform are the same product and therefore all software and features are delivered in both. There is only one download, OpenShift Container Platform. Engine uses the OpenShift Container Platform documentation and support services and bug errata for this reason. If you purchase the OpenShift Container Platform Engine Subscription you need to understand that support for all the features are limited as set forth above.
https://docs.openshift.com/container-platform/3.11/welcome/oce_about.html
2020-07-02T23:12:28
CC-MAIN-2020-29
1593655880243.25
[]
docs.openshift.com
Let’s think Spring! Our Grand Opening and Spring-Ford Chamber of Commerce Mixer event will be on March 30, 2017 at 5:00 PM, at our Limerick location. We have TWO showroom buildings at this location. It is open to anyone and everyone! The party begins with the ribbon cutting ceremony at 5:00 PM with the Chamber of Commerce. There will be food, drink, games, music, and more! Special Grand Opening deals will be available at both of our locations on our Bullfrog Spas, Hydropool Swim Spas, and Finnleo Saunas. Anyone who would like to try out our hot tubs, swim spa, and saunas are more than welcome to do so! Bring your swim suit, no skinny dipping! Come join in the fun, we hope to see you soon!
https://www.aqua-docs.com/grand-opening-celebration/
2020-07-02T22:30:54
CC-MAIN-2020-29
1593655880243.25
[array(['https://www.aqua-docs.com/wp-content/uploads/2017/02/Aqua-Docs-Limerick.jpg', None], dtype=object) ]
www.aqua-docs.com
Plugins Overview The goal of the Sensu Plugins project is to provide a set of community-driven, high-quality plugins, handlers and other code to maximize the effective use of Sensu in various types of auto-scaling and traditional environments. Much of the code is written in Ruby and uses the sensu-plugin framework; some also depend on additional gems or packages(e.g. mysql2 or libopenssl-devel). Some are shell scripts! All languages are welcome but the preference is for pure Ruby when possible to ensure the broadest range of compatibility. Note: Plugins are maintained by volunteers and do not have an SLA or similar. What is a Sensu plugin? A Sensu plugin is a bundle of Sensu artifacts typically service specific. These artifacts typically include: - Check scripts - Metric scripts - Sensu handlers - Sensu mutators How do I use a plugin? Depending on the type of artifact you wish you wish to use they have different setup/configuration. The most common are check/metric scripts. Each plugin has self contained documentation that you should refer to for more in depth information. To install a ruby plugin see here for more details and refer to the plugins self contained documentation for any external dependencies such as os libraries, compilers, etc. To setup: How do I contribute to plugins? Visit the contributing guide.
https://docs.sensu.io/plugins/latest/
2020-07-02T21:59:17
CC-MAIN-2020-29
1593655880243.25
[]
docs.sensu.io
Metrics Metrics are measurements of network behavior. Metrics help you to gain visibility into what is happening in your network in real-time. In the ExtraHop system, metrics are calculated from wire data, and then associated with devices and protocols. The ExtraHop system provides a large number of metrics, which you can explore from protocol pages in the Metrics section of the ExtraHop Web UI. You can also search for metrics in the Metric Catalog, in the Metric Explorer, and by searching for metrics by source and then protocol. Types of metrics Each metric in the ExtraHop system is classified into a metric type. Understanding the distinctions between metric types can help you configure charts or write triggers to capture custom metrics. For example, a heatmap chart can only display dataset metrics. - Count - The number of events that occurred over a specific time period. You can view count metrics as a rate or a total count. For example, a byte is recorded as a count, and can either represent a throughput rate (as seen in a time series chart) or total traffic volume (as seen in a table). Rates are helpful for comparing counts over different time periods. A count metric can be calculated as a per-second average over time. When viewing high-precision, or 1-second, bytes and packet metrics, you can also view a maximum rate and minimum rate. Count metrics include errors, packets, and responses. - Distinct count - The number of unique events that occurred during a selected time interval. The distinct count metric provides an estimate of the number of unique items placed into a HyperLogLog set during the selected time interval. - Dataset - A distribution of data that can be calculated into percentiles values. Dataset metrics include processing time and round trip time. - Maximum - A single data point that represents the maximum value from a specified time period. - Sampleset - A summary of data about a detail metric. Selecting a sampleset metric in a chart enables you to display a mean (average) and standard deviation over a specified time period. - Snapshot - A data point that represents a single point in time. Metric sources In the ExtraHop system, a metric is a measurement of observed network behavior. Metrics are generated from network traffic, and then each metric is associated with a source, such as an application, device, or network. When you select a source from the Metrics section of the Web UI, or in the Metric Explorer when building a chart, you can view metrics associated with that source. Each source provides access to a different collection of metrics. Select from the following sources and groups as you configure dashboard widgets or navigate across protocol pages. Applications An application is a user-defined container for metrics that are associated with multiple devices and protocols. These containers can represent distributed applications on your network environment. For information about creating applications with triggers, see Triggers. Networks A network capture is the entry point into network devices and virtual LANs (VLANs) that are detected from wire data by the ExtraHop system. A flow network is a network device, such as a router or switch, that sends information about flows seen across the device. A flow network can have multiple interfaces. Devices Devices are objects on your network with a MAC address and IP address that have been automatically discovered and classified by the ExtraHop system. Metrics are available for every discovered device on your network. An L2 device has a MAC address only; an L3 device has an IP address and MAC address. Device groups A device group is a user-defined collection of devices that can help you track metrics across multiple devices. You can create a dynamic device group by automatically adding all devices that meet matching criteria, or you can create a static device group by manually selecting individual devices. Matching criteria for dynamic device groups can be a hostname, IP address, MAC address, or any of the filter criteria listed for the device on the Devices page. For example, you can create a dynamic group and then configure a rule to add all devices within a certain IP address range to that group automatically. Activity groups An activity group is a collection of devices automatically grouped together by the ExtraHop system based on network traffic. A device with multiple types of traffic might appear in more than one activity group; for example, if a CIFS client is authenticating through LDAP, the device will appear in both the CIFS Clients and the LDAP Clients activity groups. Activity groups make it easy to identify all the devices associated with a protocol, or determine which devices were associated with protocol activity during a specific time interval. Thank you for your feedback. Can we contact you to ask follow up questions?
https://docs.extrahop.com/6.2/metrics-overview/
2018-03-17T06:30:13
CC-MAIN-2018-13
1521257644701.7
[]
docs.extrahop.com
The Path Fill Types Discover the different effects possible with SkiaSharp path fill types Two contours in a path can overlap, and the lines that make up a single contour can overlap. Any enclosed area can potentially be filled, but you might not want to fill all the enclosed areas. Here's an example: You have a little control over this. The filling algorithm is governed by the SKFillType property of SKPath, which you set to a member of the SKPathFillType enumeration: Winding, the default EvenOdd InverseWinding InverseEvenOdd Both the winding and even-odd algorithms determine if any enclosed area is filled or not filled based on a hypothetical line drawn from that area to infinity. That line crosses one or more boundary lines that make up the path. With the winding mode, if the number of boundary lines drawn in one direction balance out the number of lines drawn in the other direction, then the area is not filled. Otherwise the area is filled. The even-odd algorithm fills an area if the number of boundary lines is odd. With many routine paths, the winding algorithm often fills all the enclosed areas of a path. The even-odd algorithm generally produces more interesting results. The classic example is a five-pointed star, as demonstrated in the Five-Pointed Star page. The FivePointedStarPage.xaml file instantiates two Picker views to select the path fill type and whether the path is stroked or filled or both, and in what order: <ContentPage xmlns="" xmlns: <Grid> <Grid.RowDefinitions> <RowDefinition Height="Auto" /> <RowDefinition Height="*" /> </Grid.RowDefinitions> <Grid.ColumnDefinitions> <ColumnDefinition Width="*" /> <ColumnDefinition Width="*" /> </Grid.ColumnDefinitions> <Picker x: <Picker.Items> <x:String>Winding</x:String> <x:String>EvenOdd</x:String> <x:String>InverseWinding</x:String> <x:String>InverseEvenOdd</x:String> </Picker.Items> <Picker.SelectedIndex> 0 </Picker.SelectedIndex> </Picker> <Picker x: <Picker.Items> <x:String>Fill only</x:String> <x:String>Stroke only</x:String> <x:String>Stroke then Fill</x:String> <x:String>Fill then Stroke</x:String> </Picker.Items> <Picker.SelectedIndex> 0 </Picker.SelectedIndex> </Picker> <skia:SKCanvasView x: </Grid> </ContentPage> The code-behind file uses both Picker values to draw a five-pointed star: = 0.45f * Math.Min(info.Width, info.Height); SKPath path = new SKPath { FillType = (SKPathFillType)Enum.Parse(typeof(SKPathFillType), fillTypePicker.Items[fillTypePicker.SelectedIndex]) }; path.MoveTo(info.Width / 2, info.Height / 2 - radius); for (int i = 1; i < 5; i++) { // angle from vertical double angle = i * 4 * Math.PI / 5; path.LineTo(center + new SKPoint(radius * (float)Math.Sin(angle), -radius * (float)Math.Cos(angle))); } path.Close(); SKPaint strokePaint = new SKPaint { Style = SKPaintStyle.Stroke, Color = SKColors.Red, StrokeWidth = 50, StrokeJoin = SKStrokeJoin.Round }; SKPaint fillPaint = new SKPaint { Style = SKPaintStyle.Fill, Color = SKColors.Blue }; switch (drawingModePicker.SelectedIndex) { case 0: canvas.DrawPath(path, fillPaint); break; case 1: canvas.DrawPath(path, strokePaint); break; case 2: canvas.DrawPath(path, strokePaint); canvas.DrawPath(path, fillPaint); break; case 3: canvas.DrawPath(path, fillPaint); canvas.DrawPath(path, strokePaint); break; } } Normally, the path fill type should affect only fills and not strokes, but the two Inverse modes affect both fills and strokes. For fills, the two Inverse types fill areas oppositely so that the area outside the star is filled. For strokes, the two Inverse types color everything except the stroke. Using these inverse fill types can produce some odd effects, as the iOS screenshot demonstrates: The Android and Windows mobile screenshots show the typical even-odd and winding effects, but the order of the stroke and fill also affects the results. The winding algorithm is dependent on the direction that lines are drawn. Usually when you're creating a path, you can control that direction as you specify that lines are drawn from one point to another. However, the SKPath class also defines methods like AddRect and AddCircle that draw entire contours. To control how these objects are drawn, the methods include a parameter of type SKPathDirection, which has two members: The methods in SKPath that include an SKPathDirection parameter give it a default value of Clockwise. The Overlapping Circles page creates a path with four overlapping circles with an even-odd path fill type: = Math.Min(info.Width, info.Height) / 4; SKPath path = new SKPath { FillType = SKPathFillType.EvenOdd }; path.AddCircle(center.X - radius / 2, center.Y - radius / 2, radius); path.AddCircle(center.X - radius / 2, center.Y + radius / 2, radius); path.AddCircle(center.X + radius / 2, center.Y - radius / 2, radius); path.AddCircle(center.X + radius / 2, center.Y + radius / 2, radius); SKPaint paint = new SKPaint() { Style = SKPaintStyle.Fill, Color = SKColors.Cyan }; canvas.DrawPath(path, paint); paint.Style = SKPaintStyle.Stroke; paint.StrokeWidth = 10; paint.Color = SKColors.Magenta; canvas.DrawPath(path, paint); } It's an interesting image created with a minimum of code:
https://docs.microsoft.com/en-us/xamarin/xamarin-forms/user-interface/graphics/skiasharp/paths/fill-types
2018-03-17T06:15:39
CC-MAIN-2018-13
1521257644701.7
[array(['fill-types-images/filltypeexample.png', 'Five-pointed star partially filles'], dtype=object)]
docs.microsoft.com
WooCommerce: How to Change the “Add to Cart” button text WooCommerce must hard code the wording of certain elements in the plugin, including the Add to Cart button. You may prefer the button say something else or use your language. You can change this one of two ways, by using gettext in a plugin (recommended) or replacing the text through custom functions or templates. The Safe Way Use this method if you do not have WordPress theme development skills or coding knowledge, or if you want your clients to have future control. Note that WooCommerce comes in several languages, so you may find switching the language in the settings is far easier! - Use Poor Guy’s Swiss Knife, a free plugin with just about every WooCommerce interface customization you need. - OR Download and install a translation plugin such as Loco Translate and change the text in the default English file under WooCommerce. For texts found in StoreKit, add a new language to the StoreKit entry (select English for the default) then sync the theme and the file before making your changes. See Loco Translate help here. Via Child Theme - Create a child theme - Add the custom function you need to define the new text. See WooCommerce documentation for code snippets. To learn how best to setup WooCommerce take part in this online course on Treehouse.
http://docs.layerswp.com/doc/woocommerce-how-to-change-the-add-to-cart-button-text/
2018-03-17T06:20:40
CC-MAIN-2018-13
1521257644701.7
[]
docs.layerswp.com
An Act to repeal 441.50; to amend 49.498 (1) (L), 50.01 (1w), 50.01 (5r), 115.001 (11), 118.29 (4), 146.40 (1) (c), 146.40 (1) (f), 250.01 (7), 255.06 (1) (d), 440.03 (11m) (c) 1., 440.03 (13) (b) (intro.), 440.14 (5) (b), 440.15, 441.06 (4), 441.10 (7), 441.115 (2) (a), 441.15 (3) (a) (intro.), subchapter II (title) of chapter 441 [precedes 441.50], 655.001 (9), 905.04 (1) (f), 990.01 (19g), 990.01 (23q) and 990.01 (36m); to repeal and recreate 440.03 (13) (b) (intro.) and 440.15; and to create 14.87, 111.335 (1) (e), 441.06 (1c), 441.10 (1c) and 441.51 of the statutes; Relating to: ratification of the Enhanced Nurse Licensure Compact, extending the time limit for emergency rule procedures, and providing an exemption from emergency rule procedures. (FE)
https://docs.legis.wisconsin.gov/2017/proposals/sb417
2018-03-17T06:21:03
CC-MAIN-2018-13
1521257644701.7
[]
docs.legis.wisconsin.gov
Cprofile Updated: April 17, 2012 Applies To: Windows Server 2003, Windows Server 2003 with SP1, Windows Server 2003 with SP2 Cprofile - Cprofile is deprecated, and is not guaranteed to be supported in future releases of Windows. Cprofile.exe: Clean profile. This tool is included in all Windows Server 2003 operating systems except Windows Server 2003, Web edition. For more information see Terminal Services Tools and Settings.
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff934599(v=ws.10)
2018-03-17T07:16:57
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
async Task Handle(MyMessage message, IMessageHandlerContext context) { //. Enabling callbacks The requesting endpoint has to enable the callbacks via configuration: endpointConfiguration.EnableCallbacks();(); var response = await endpoint.Request<int>(message) .ConfigureAwait(false); log.Info($"Callback received with response:{response}"); Response public class Handler : IHandleMessages<Message> { public Task Handle(Message message, IMessageHandlerContext context) { return context.Reply(10); } } intvalue, the replying endpoint also needs to reference the NServiceBus.package. Callbacks If the endpoint only replies to callbacks enable the callbacks as shown below: endpointConfiguration.EnableCallbacks(makesRequests: false); Enum The enum response scenario allows any enum value to be returned in a strongly-typed manner. Send and Callback var message = new Message(); var response = await endpoint.Request<Status>(message) .ConfigureAwait(false); log.Info($"Callback received with response:{response}"); NServiceBus.package. Callbacks If the endpoint only replies to callbacks enable the callbacks as shown below: endpointConfiguration.EnableCallbacks(makesRequests: false); Response public class Handler : IHandleMessages<Message> { public Task Handle(Message message, IMessageHandlerContext context) { return context.Reply(); var response = await endpoint.Request<ResponseMessage>(message) .ConfigureAwait(false); log.Info($"Callback received with response:{response.Property}"); Response public class Handler : IHandleMessages<Message> { public Task Handle(Message message, IMessageHandlerContext context) { var responseMessage = new ResponseMessage { Property = "PropertyValue" }; return context.Reply(responseMessage); } } Cancellation This API was added in the externalized Callbacks feature. The asynchronous callback can be canceled by registering a CancellationToken provided by a CancellationTokenSource. The token needs to be passed into the Request method as shown below. var cancellationTokenSource = new CancellationTokenSource(); cancellationTokenSource.CancelAfter(TimeSpan.FromSeconds(5)); var message = new Message(); try { var response = await endpoint.Request<int>(message, cancellationTokenSource.Token) .ConfigureAwait(false); } catch (OperationCanceledException) { // Exception that is raised when the CancellationTokenSource is canceled } tied to the endpoint instance making the request, so all responses need to be routed back to the specific instance making the request. This means that callbacks require the endpoint to configure a unique instance Id: endpointConfiguration.MakeInstanceUniquelyAddressable("uniqueId"); This will make each instance of the endpoint uniquely addressable by creating an additional queue that includes the instance Id in the name. Selecting an appropriate Id depends on the transport being used and whether or not the endpoint is scaled out: - For broker transports like Azure ServiceBus, RabbitMQ, etc., the instance Id needs to be unique for each instance, otherwise the instances will end up sharing a single callback queue and a reply could be received by the wrong instance. - For federated transports like MSMQ, where every instance is running on a separate machine and can never share queues, then it is OK to use a single Id like replies, callbacks, etc. Uniquely addressable endpoints will consume messages from their dedicated, instance-specific queues in addition to the main queue that all instances share. Replies will automatically be routed to the correct instance-specific queue.
https://docs.particular.net/nservicebus/messaging/callbacks?version=callbacks_3
2018-03-17T06:38:41
CC-MAIN-2018-13
1521257644701.7
[]
docs.particular.net
Progress® Telerik® Reporting R1 2018 ReportsHostBaseCreateCache Method Note: This API is now obsolete ICache CreateCache() <ObsoleteAttribute("CreateReportResolver method is now obsolete. Please provide service setup using the Telerik.Reporting.Services.ServiceStack.ReportsHostBase.ReportServiceConfiguration property.")> Protected Overridable Function CreateCache As ICache Return ValueType: ICache An instance of cache that will be used from the controller in order to preserve its cache/state. Remarks Override this method in order to create the cache instance. May be one of the built-in caching implementations or a custom implementation. To use one of the built-in caching implementations use the class.
https://docs.telerik.com/reporting/m-telerik-reporting-services-servicestack-reportshostbase-createcache
2018-03-17T06:44:20
CC-MAIN-2018-13
1521257644701.7
[]
docs.telerik.com
Contributing¶ Wanna help out? Great! Here’s how. Spreading the word¶ Even if you are not keen on working on the server code yourself, just spreading the word is a big help - it will help attract more people which leads to more feedback, motivation and interest. Consider writing about Evennia on your blog or in your favorite (relevant) forum. Write a review somewhere (good or bad, we like feedback either way). Rate it on places like ohloh. Talk about it to your friends … that kind of thing. Donations¶ The best way to support Evennia is to become an Evennia patron. Evennia is a free, open-source project and any monetary donations you want to offer are completely voluntary. See it as a way of announcing that you appreciate the work done - a tip of the hat! A patron donates a (usually small) sum every month to show continued support. If this is not your thing you can also show your appreciation via a one-time donation (this is a PayPal link but you don’t need PayPal yourself). Help with Documentation¶ Evennia depends heavily on good documentation and we are always looking for extra eyes and hands to improve it. Even small things such as fixing typos are a great help! The documentation is a wiki and as long as you have a GitHub account you can edit it. It can be a good idea to discuss in the chat or forums if you want to add new pages/tutorials. Otherwise, it goes a long way just pointing out wiki errors so we can fix them (in an Issue or just over chat/forum). Contributing through a forked repository¶ We always need more eyes and hands on the code. Even if you don’t feel confident with tackling a bug or feature, just correcting typos, adjusting formatting or simply using the thing and reporting when stuff doesn’t make sense helps us a lot. The most elegant way to contribute code to Evennia is to use GitHub to create a fork of the Evennia repository and make your changes to that. Refer to the Forking Evennia version control instructions for detailed instructions. Once you have a fork set up, you can not only work on your own game in a separate branch, you can also commit your fixes to Evennia itself. Make separate branches for all Evennia additions you do - don’t edit your local master branch directly. It will make your life a lot easier. If you have a change that you think is suitable for the main Evennia repository, you issue a Pull Request. This will let Evennia devs know you have stuff to share. Contributing with Patches¶ To help with Evennia development it’s recommended to do so using a fork repository as described above. But for small, well isolated fixes you are also welcome to submit your suggested Evennia fixes/addendums as a patch. You can include your patch in an Issue or a Mailing list post. Please avoid pasting the full patch text directly in your post though, best is to use a site like Pastebin and just supply the link. Contributing with Contribs¶ While Evennia’s core is pretty much game-agnostic, it also has a contrib/ directory. The contrib directory contains game systems that are specialized or useful only to certain types of games. Users are welcome to contribute to the contrib/ directory. Such contributions should always happen via a Forked repository as described above. - If you are unsure if your idea/code is suitable as a contrib, ask the devs before putting any work into it. This can also be a good idea in order to not duplicate efforts. This can also act as a check that your implementation idea is sound. We are, for example, unlikely to accept contribs that require large modifications of the game directory structure. - If your code is intended primarily as an example or shows a concept/principle rather than a working system, it is probably not suitable for contrib/. You are instead welcome to use it as part of a new tutorial! - The code should ideally be contained within a single Python module. But if the contribution is large this may not be practical and it should instead be grouped in its own subdirectory (not as loose modules). - The contribution should preferably be isolated (only make use of core Evennia) so it can easily be dropped into use. If it does depend on other contribs or third-party modules, these must be clearly documented and part of the installation instructions. - The code itself should follow Evennia’s Code style guidelines. - The code must be well documented as described in our documentation style guide. Expect that your code will be read and should be possible to understand by others. Include comments as well as a header in all modules. If a single file, the header should include info about how to include the contrib in a game (installation instructions). If stored in a subdirectory, this info should go into a new README.mdfile within that directory. - Within reason, your contribution should be designed as genre-agnostic as possible. Limit the amount of game-style-specific code. Assume your code will be applied to a very different game than you had in mind when creating it. - To make the licensing situation clear we assume all contributions are released with the same license as Evennia. If this is not possible for some reason, talk to us and we’ll handle it on a case-by-case basis. - Your contribution must be covered by unit tests. Having unit tests will both help make your code more stable and make sure small changes does not break it without it being noticed, it will also help us test its functionality and merge it quicker. If your contribution is a single module, you can add your unit tests to evennia/contribs/tests.py. If your contribution is bigger and in its own sub-directory you could just put the tests in your own tests.pyfile (Evennia will find it automatically). - Merging of your code into Evennia is not guaranteed. Be ready to receive feedback and to be asked to make corrections or fix bugs. Furthermore, merging a contrib means the Evennia project takes on the responsibility of maintaining and supporting it. For various reasons this may be deemed to be beyond our manpower. However, if your code were to not be accepted for merger for some reason, we will instead add a link to your online repository so people can still find and use your work if they want.
http://evennia.readthedocs.io/en/latest/Contributing.html
2018-03-17T06:05:47
CC-MAIN-2018-13
1521257644701.7
[]
evennia.readthedocs.io
Deploy the ExtraHop Trace Appliance with VMware This guide explains how to deploy the virtual ExtraHop Trace appliance (EDA 1150v) on the VMware ESXi/ESX platform. Virtual machine requirements Your environment must meet the following requirements to deploy a virtual Trace appliance: - An existing installation of VMware ESX or ESXi server version 6.0 or later capable of hosting the Trace virtual appliance. The virtual Trace appliance is available in the following configuration: - 2 CPUs - 16 GB RAM - 4 GB boot disk - 1 TB packetstore disk. You can reconfigure the disk size between 50 GB and 4 TB before deploying, if necessary. - choose one of the following options: - . - If you do not want to resize the packetstore disk, select the Power on after deployment checkbox and then click Finish to begin the deployment. - Select the virtual Trace appliance in the ESX Inventory and then select Open Console from the Actions menu. - Click the console window and then press ENTER to display the IP address. DHCP is enabled by default on the ExtraHop virtual appliance. To configure a static IP address, see the Configure a static IP address through the CLI section. - In VMware ESXi, configure the virtual switch to receive traffic and restart to see the changes. Configure a static IP address through the CLI The ExtraHop appliance is delivered with DHCP enabled. If your network does not support DHCP, no IP address is acquired, and you must configure a static address manually. -_0<< Connected to Discover and Command Appliance >>IMAGE?
https://docs.extrahop.com/6.2/deploy-eta-vmware/
2018-03-17T06:30:23
CC-MAIN-2018-13
1521257644701.7
[array(['/images/6.2/eda-eta-diagram.png', None], dtype=object) array(['/images/6.2/eda-eta-eca-diagram.png', None], dtype=object)]
docs.extrahop.com
Deploying Microsoft RemoteFX for Virtual Desktop Pools Step-by-Step Guide Updated: February 16, 2011 Applies To: Windows Server 2008 R2 with SP1 About this guide Microsoft® RemoteFX™ is included as part of the RD Virtualization Host role service, and it. This step-by-step guide walks you through the process of setting up a working virtual desktop pool that uses RemoteFX. During this process, you will deploy the following hardware in a test environment:® Domain Services domain controller This guide includes the following topics: Step 1: Setting Up the CONTOSO Domain Step 2: Setting Up the Virtual computer. Prerequisites When implementing RemoteFX, consider the following: The RemoteFX server and the RemoteFX-enabled virtual desktop must meet the RemoteFX hardware requirements. For more information about the hardware requirements for deploying RemoteFX, see Hardware Considerations for RemoteFX (). Ensure that the hyper-threading technology is enabled in the BIOS of the RD Virtualization Host server. Configure the proper RAM as required. Per the Windows® 7 requirements, if you are using an x86-based virtual machine, you must configure at least 1024 megabytes (MB) of RAM. If you are using an x64-based virtual machine, you must configure at least 2048 MB of RAM. Ensure that you are running the matching build of Windows Server® 2008 R2 with Service Pack 1 (SP1) on the RemoteFX server, Windows 7 with SP1 on the virtual machine, and Windows 7 with SP1 on the client computer. Ensure that there is a LAN connection between the client and the RD Virtualization Host server. Ensure that the Windows Aero® desktop experience is enabled on the RemoteFX-enabled virtual desktops. Scenario: Deploying RemoteFX for a virtual desktop pool We recommend that you first use the procedures provided in this guide in a test lab environment. Step-by-step guides are not necessarily meant to be used to deploy Windows Server features without supporting deployment documentation, and they should be used with discretion as stand-alone documents. Upon completion of this step-by-step guide, you will have a eight computers that are connected to a private network by using the following operating systems, applications, and services: The computers form a private network, and they are connected through a common hub or Layer 2 switch. This step-by-step guide uses private addresses throughout the test lab configuration. The private network ID 10.0.0.0/24 is used for the network. The domain controller is named CONTOSO-DC for the domain named contoso.com. The following figure shows the configuration of the test environment. ()
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff817591(v=ws.10)
2018-03-17T07:21:29
CC-MAIN-2018-13
1521257644701.7
[array(['images/dd772706.078f7d16-b54e-47a2-9598-5a51f3164c45%28ws.10%29.gif', None], dtype=object) ]
docs.microsoft.com
PutKeyPolicy Attaches a key policy to the specified customer master key (CMK). You cannot perform this operation on a CMK in a different AWS account. For more information about key policies, see Key Policies in the AWS Key Management Service Developer Guide. Request Syntax { "BypassPolicyLockoutSafetyCheck": boolean, "KeyId": " string", "Policy": " string", "PolicyN - Policy because the new principal might not be immediately visible to AWS KMS. For more information, see Changes that I make are not always immediately visible in the AWS Identity and Access Management User Guide. The key policy size limit is 32 kilobytes (32768 bytes). Type: String Length Constraints: Minimum length of 1. Maximum length of 32768. Pattern: [\u0009\u000A\u000D\u0020-\u00FF]+ Required: Yes - PolicyName The name of the key policy. The only valid value is default. Type: String Length Constraints: Minimum length of 1. Maximum length of 128. Pattern: [\w]+ Required: Yes - intend to prevent the principal that is making the request from making a subsequent PutKeyPolicyrequest on the CMK. The default value is false. Type: Boolean Required: No: 2396 X-Amz-Target: TrentService.PutKeyPolicy X-Amz-Date: 20161207T2030=e58ea91db06afc1bc7a1f204769cf6bc4d003ee090095a13caef361c69739ede { "Policy": "{ \"Version\": \"2012-10-17\", \"Id\": \"custom-policy-2016-12-07\", \/ExampleAdminUser\", \"arn:aws:iam::111122223333:role/ExampleAdminRole\" ] }, \::111122223333:role/ExamplePowerUserRole\" }, \"Action\": [ \"kms:Encrypt\", \"kms:Decrypt\", \"kms:ReEncrypt*\", \"kms:GenerateDataKey*\", \"kms:DescribeKey\" ], \"Resource\": \"*\" }, { \"Sid\": \"Allow attachment of persistent resources\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::111122223333:role/ExamplePowerUserRole\" }, \"Action\": [ \"kms:CreateGrant\", \"kms:ListGrants\", \"kms:RevokeGrant\" ], \"Resource\": \"*\", \"Condition\": { \"Bool\": { \"kms:GrantIsForAWSResource\": \"true\" } } } ] }", "PolicyName": "default", "KeyId": "1234abcd-12ab-34cd-56ef-1234567890ab" } Example Response HTTP/1.1 200 OK Server: Server Date: Wed, 07 Dec 2016 20:30:23 GMT Content-Type: application/x-amz-json-1.1 Content-Length: 0 Connection: keep-alive x-amzn-RequestId: fb114d4c-bcbb-11e6-82b3-e9e4af764a06 See Also For more information about using this API in one of the language-specific AWS SDKs, see the following:
https://docs.aws.amazon.com/kms/latest/APIReference/API_PutKeyPolicy.html
2018-03-17T06:55:40
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
React Native and Over the Air Electrode Over the Air Server Electrode Over the Air (OTA) is a Microsoft™ Code Push compatible server that provides automatic updates for mobile applications over-the-air. Prerequisites - Node v6 or greater - Apache Cassandra - Apache Cassandra Getting Started Documentation - GitHub Account (if using GitHub as auth provider) Running Cassandra Befor running Cassandra, we recommend that you research basic information about running and configuring Cassandra. sh $ curl | tar -xvzf - $ cd apache-cassandra-3.9/ $ ./bin/cassandra Installation At a minimum, install and run electrode-ota-server using the following commands. For most scenarios this is not complete. You will need to be sure to setup SSL, load balancing, and all other requirements for your environment. $ mkdir your_ota $ cd your_ota $ npm init $ npm install electrode-ota-server --save $ mkdir config In the package.json file, add the following code so that the server, by default, will start with the production configuration. This can be overridden with NODE_ENV. "scripts":{ "start":"NODE_ENV=production node node_modules/.bin/electrode-ota-server", "development":"NODE_ENV=development node node_modules/.bin/electrode-ota-server" } Setting up OAuth To use GitHub as an OAuth provider, you need to login to GitHub and add an OTA Application. Step 1. Login to github and select Settings. Step 2. Go to "Developer Settings" and select "OAuth applications". Step 3. Setup your application. Keep in mind protocols and URLs are important. Also you can set up a key for development also (localhost.yourdomain.com). Make sure that resolves for your machine, try adding it to your hosts file. If you don't have a public server running you can add use for the homepage and authorization callback URL (for development only). Step 4. Wild celebration, or double check that everything is correct! Check your clientId and clientSecret. Keep your clientSecret secret (avoid checking it into public github for example). Configure the OTA Server Inside the configuration, create a config/production.json file. You must configure at least one environment. You can create different settings for production, testing, and development by creating separate JSON files for each environment. In production, use TLS or HTTPS for the server. Setting TLS is outside the scope of this document. The configurations are loaded using electrode-confippet. For additional information, see Confippet. Note The variables that should be configured are <%= variable %>. The comments must be removed if using JSON. { "plugins": { "electrode-ota-server-dao-cassandra": { "options": { //You can enter an array of cassandra "contactPoints" but you need at least one. // If you are running cassandra locally you can use "localhost". "contactPoints": [ "<%=cassandra.hosts%>" ], //Optional - If your cassandra server does not require a password "username": "<%=cassandra.username%>", //Optional - If your cassandra server does not require a password "password": "<%=cassandra.password%>" //Optional the keyspace you want to run the server with, by default the keyspace is "wm_ota". "keyspace":"<%=cassandra.keyspace%>" } }, //This allows for other fileservice mechanisms to be plugged in. Currently the files are stored // in the cassandra db, but the could be stored in anything really, including the filesystem. "electrode-ota-server-fileservice-upload":{ "options": { //this needs to be the url of your acquistion server. It can be the same as your current // management server. "downloadUrl": "http://<%=your_ota_server%>/storagev2/" } }, "electrode-ota-server-auth": { "options": { "strategy": { //Authentication Strategy. The OTA uses [bell]() for //OAuth. You can see the vendors and options there. We test with github oAuth. "github-oauth": { "options": { //A Cookie password otherwise a raondom one (Optional) "password":"<%= another cookie password%>", //This is true by default if not running https change to false. You should run over https though "isSecure":true, //The callback hostname of your server. If you are running behind a proxy, //it may be different than what the server thinks it is. (Optional) "location":"<%= the address of your server %>", //Get the Oauth info from github. "clientId": "<%=github oauth clientId%>", "clientSecret": "<%=github oauth clientSecret%>" } }, "session": { "options": { //A Cookie password otherwise a raondom one (Optional) "password":"<%= another cookie password%>", //This is true by default if not running https change to false. You should run over https though "isSecure":true } } } } } } } OTA uses bell for oAuth you can look there for more configuration options. Running OTA There are multiple ways to start OTA, however the easiest is to issue the following command. $ npm start Logging into your OTA Server. To use the server, make the following modifications to your client (iOS/Android(tm)) app, along with setting up Apps with the OTA server. Your server can host multiple applications from multiple developers, and to manage these you can use Microsoft's code-push CLI. Using the Command Line If want to manage your OTA Server using the CLI, follow these steps. Or you can use the Electrode OTA Desktop to do the same thing. Installing the code-push-cli $ npm install code-push-cli -g You can only register once per GitHub account. So the first time each user would need to issue the following command: sh $ code-push register https://<%=your_ota_server%> After registering, if you need to log back in or if your acccess-key is lost or expired, you can log back in using the following command: sh $ code-push login https://<%=your_ota_server%> Server Token Your server token page should look like this. Creating a CodePushDeploymentKey $ code-push app add <%=YourAppName%> Should result in something like sh Successfully added the "YourAppName" app, along with the following default deployments: ┌────────────┬───────────────────────────────────────┐ │ Name │ Deployment Key │ ├────────────┼───────────────────────────────────────┤ │ Production │ 4ANCANCASDASDKASASDASDASDASDASDASDAS- │ ├────────────┼───────────────────────────────────────┤ │ Staging │ ASDASDASDASDASDASDASDASDASDASDASDASD- │ └────────────┴───────────────────────────────────────┘ These are your deployment keys. You will need them in the next step. Changes to Your Application. If you haven't setup your app for code-push, follow Microsoft's guide to setting up the client SDK for React or Cordova. If needed, check out the Code Push Demo App. For IOS Then add the following to the ios/<%=your_app_name%>/Info.plist file. You can open this in sh open ios/<%=your_app_name%>.xcodeprogto edit. or using your favorite text editor. xml <key>CodePushDeploymentKey</key><string><%=your_deployment_key%></string><key>CodePushServerURL</key><string>http://<%=your_ota_server%></string> If your OTA server is not running over HTTPS you will need to add an exception to it in the ios/<%=your_app_name%>/Info.plist file, or use Xcode to update the file. xml <dict><key>NSAllowsArbitraryLoads</key><true/><key>NSExceptionDomains</key><dict><key><%=your_ota_server%></key><dict><key>NSTemporaryExceptionAllowsInsecureHTTPLoads</key><true/></dict></dict></dict> For Android Modify android/app/src/main/java/com/<%=your_app_name%>/MainApplication.java /** * A list of packages used by the app. If the app uses additional views * or modules besides the default ones, add more packages here. **/ @Override protected List<ReactPackage> getPackages() { return Arrays.<ReactPackage>asList( new MainReactPackage(), new CodePush("<%=your_ota_deployment_key%>", this, BuildConfig.DEBUG, "<%=your_ota_server%>") ); } Publishing (React) To update the app over-the-air, see the Microsoft code-push docs for more information on how to add CodePush to your application. sh $ cd your_client_app_dir $ code-push release-react <%=YourAppName%> ios --deploymentName <%=Staging|Production|Etc.%> Electrode Over the Air Desktop You can use either the Microsoft code-push-cli or Electrode OTA Desktop to manage your deployments. You can get the installation file from GitHub. Installation Copy of the Electrode OTA icon to the Applications folder. Logging in Use the token you viewed on the server token screen. The host would be your OTA Server. Creating a New App to Manage You will need an app to get the deployment keys. Creating a New Deployment You can use Staging and Development or create your own, for your workflow. Adding a New Release To Upload a new release select the deployment and click release. Adding Collaborators If you need to share responsibility you can add a collaborator. However, they will need to register as a user first to be able to add them to your App. New Key If you loose a token key, or want one for you CI server you can manage them here.
https://docs.electrode.io/chapter1/over-the-air/react-native-and-over-the-air.html
2018-03-17T06:15:09
CC-MAIN-2018-13
1521257644701.7
[array(['http://www.electrode.io/img/electrode-ota/1-Profile.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/2-Register OAuth.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/3-Register OAuth.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/4-Review OAuth.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewToken.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/DMG.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewToken.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/Login.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/GettingStarted.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewAppSuccess.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewDeployment.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewDeployment1.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/Releases.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/Collaborate.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/AddKey.png', None], dtype=object) array(['http://www.electrode.io/img/electrode-ota/NewKey.png', None], dtype=object) ]
docs.electrode.io
Terminal Services in Windows Server 2008 Updated: November 11, 2011 Applies To: Windows Server 2008 Tip Terminal Services is enhanced in Windows Server 2012. Explore the Evaluation Guide and download the Windows Server 2012 Trial. Terminal Services in Windows Server 2008 provides technologies that enable users to access Windows-based programs that are installed on a terminal server, or to access the full Windows desktop. With Terminal Services, users can access a terminal server from within a corporate network or from the Internet. Note In Windows Server 2008 R2, Terminal Services was renamed Remote Desktop Services. To find out what's new in this version and to find the most up-to-date resources, visit the Remote Desktop Services page on the Windows Server TechCenter. Product Evaluation Getting Started Deployment Troubleshooting Technical Reference Group Policy Settings for Terminal Services in Windows Server 2008 RDP Settings for Terminal Services in Windows Server 2008 Remote Desktop Services (Terminal Services) Command Reference Terminal Services Events in Windows Server 2008 Installed Help - Terminal Services: Help installed with Windows Server 2008, now available in the Windows Server 2008 Technical Library. Other Resources
https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/cc754746(v=ws.10)
2018-03-17T06:53:02
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
Language: Resources Included in Puppet Enterprise 2016.2. A newer version is available; see the version menu above for details. Resources are the fundamental unit for modeling system configurations. Each resource describes some aspect of a system, like a specific service or package. A resource declaration is an expression that describes the desired state for a resource and tells Puppet to add it to the catalog. When Puppet applies that catalog to a target system, it manages every resource it contains, ensuring that the actual state matches the desired state. This page describes the basics of using resource declarations. For more advanced syntax, see Resources (Advanced). Resource types Every resource is associated with a resource type, which determines the kind of configuration it manages. Puppet has many built-in resource types, like files, cron jobs, services, etc. See the resource type reference for information about the built-in resource types. You can also add new resource types to Puppet: - Defined types are lightweight resource types written in the Puppet language. - Custom resource types are written in Ruby, and have access to the same capabilities as Puppet’s built-in types. Simplified syntax Resource declarations have a lot of features, but beginners can accomplish a lot with just a subset of these. For more advanced syntax (including expressions that declare multiple resources at once), see Resources (Advanced). # A resource declaration: file { '/etc/passwd': ensure => file, owner => 'root', group => 'root', mode => '0600', } Every resource has a resource type, a title, and a set of attributes: <TYPE> { '<TITLE>': <ATTRIBUTE> => <VALUE>, } The form of a resource declaration is: - The resource type, which is a word with no quotes. - An opening curly brace ( {). - The title, which is a string. - A colon ( :). - Optionally, any number of attribute and value pairs, each of which consists of: - A closing curly brace ( }). Note that you can use any amount of whitespace in the Puppet language. Title The title is a string that identifies a resource to Puppet’s compiler. A title doesn’t have to match the name of what you’re managing on the target system, but you’ll often want it to: the value of the “namevar” attribute defaults to the title, so using the name in the title can save you some typing. Titles must be unique per resource type. You can have a package and a service both titled “ntp,” but you can only have one service titled “ntp.” Duplicate titles will cause a compilation failure. Note: If a resource type has multiple namevars, the type gets to specify how (and if) the title will map to those namevars. For example, the package type uses the provider attribute to help determine uniqueness, but that attribute has no special relationship with the title. See a type’s documentation for details about how it maps title to namevars. Attributes Attributes describe the desired state of the resource; each attribute handles some aspect of the resource. Each resource type has its own set of available attributes; see the resource type reference for a complete list. Most resource types have a handful of crucial attributes and a larger number of optional ones. Every attribute you declare must have a value; the data type of the value depends on what the attribute accepts. Synonym Note: Parameters and properties When discussing resources and types, parameter is a synonym for attribute. You might also hear property, which has a slightly different meaning when discussing the Ruby implementation of a resource type or provider. (Properties always represent concrete state on the target system. A provider can check the current state of a property, and switch it to new states.) When talking about resource declarations in the Puppet language, you should use either “attribute” or “parameter.” We suggest “attribute.” Behavior A resource declaration adds a resource to the catalog, and tells Puppet to manage that resource’s state. When Puppet applies the compiled catalog, it will: - Read the actual state of the resource on the target system - Compare the actual state to the desired state - If necessary, change the system to enforce the desired state Unmanaged resources If the catalog doesn’t contain a resource, Puppet will do nothing with whatever that resource might have described. This means that ceasing to manage something isn’t the same as deleting it. If you remove a package resource from your manifests, this won’t cause Puppet to uninstall the package; it will just cause Puppet to stop caring about the package. To make sure a package is removed, you would have to manage it as a resource and set ensure => absent. Uniqueness Puppet does not allow you to declare the same resource twice. This is to prevent multiple conflicting values from being declared for the same attribute. Puppet uses the title and name/namevar to identify duplicate resources — if either of these is duplicated within a given resource type, the compilation will fail. If multiple classes require the same resource, you can use a class or a virtual resource to add it to the catalog in multiple places without duplicating it. Relationships and ordering By default, Puppet applies unrelated resources in the order in which they’re written in the manifest. You can disable this with the ordering setting. However,. Changes, events, and reporting If Puppet makes any changes to a resource, it will log those changes as events. These events will appear in Puppet agent’s log and in the run report, which is sent to the Puppet master and forwarded to any number of report processors. Scope independence Resources are not subject to scope — a resource in any scope can be referenced from any other scope, and local scopes do not introduce local namespaces for resource titles. Containment Resources can be contained by classes and defined types — when something forms a relationship with the container, the contained resources are also affected. See Containment for more details. Delaying resource evaluation The Puppet language includes some constructs that let you describe a resource but delay adding it to the catalog. For example: - Classes and defined types can contain groups of resources. These resources will only be managed if you add that class (or defined resource) to the catalog. - Virtual resources are only added to the catalog once they are realized. Special resource attributes Name/namevar Most resource types have an attribute which identifies a resource on the target system. This special attribute is called the “namevar,” and the attribute itself is often (but not always) just name. For example, the name of a service or package is the name by which the system’s service or package tools will recognize it. On the other hand, the file type’s namevar is path, the file’s location on disk. This is different from the title, which identifies a resource to Puppet’s compiler. However, they often have the same value, since the namevar’s value will usually default to the title if it isn’t specified. Thus, the path of the file example above is /etc/passwd, even though we didn’t include the path attribute in the resource declaration. The separation between title and namevar lets you use a consistently-titled resource to manage something whose name differs by platform. For example, the NTP service might be ntpd on Red Hat-derived systems, but ntp on Debian and Ubuntu; to accommodate that, you could title the service “ntp,” but set its name according to the OS. Other resources could then form relationships to it without worrying that its title will change. The resource type reference lists the namevars for all of the core resource types. For custom resource types, check the documentation for the module that provides that resource type. Simple namevars Most resource types only have one namevar. With a single namevar, the value must be unique per resource type, with only rare exceptions (such as exec). If a value for the namevar isn’t specified, it will default to the resource’s title. Multiple namevars Sometimes, a single value isn’t sufficient to identify a resource on the target system. For example, consider a system that has multiple package providers available: the yum provider has a package called mysql, and the gem provider also has a package called mysql that installs completely different (and non-conflicting) software. In this case, the name of both packages would be mysql. Thus, some resource types have more than one namevar, and Puppet combines their values to determine whether a resource is uniquely identified. If two resources have the same values for all of their namevars, Puppet will raise an error. A resource type can define its own behavior for how to map a title to its namevars, if one or more of them is unspecified. For example, the package type has two namevars ( name and provider), but only name will default to the title. For info about other resource types, see that type’s documentation. Ensure Many resource types have an ensure attribute. This generally manages the most important aspect of the resource on the target system — does the file exist, is the service running or stopped, is the package installed or uninstalled, etc. Allowed values for ensure vary by resource type. Most accept present and absent, but there might be additional variations. Be sure to check the reference for each resource type you are working with. Metaparameters Some attributes in Puppet can be used with every resource type. These are called metaparameters. They don’t map directly to system state; instead, they specify how Puppet should act toward the resource. The most commonly used metaparameters are for specifying order relationships between resources. You can see the full list of all metaparameters in the Metaparameter Reference.
https://docs.puppet.com/puppet/4.5/lang_resources.html
2018-03-17T06:27:47
CC-MAIN-2018-13
1521257644701.7
[]
docs.puppet.com
Helsinki Patch 1 Hot Fix 2 Helsinki Patch 1 Hot Fix 2 provides fixes for the Helsinki release. For Helsinki Patch 1 Hot Fix 2: Build date: 08-07-2016_0841 Build tag: glide-helsinki-03-16-2016__patch1-hotfix2-08-05-2016. Fixed problems in Helsinki Patch 1 Hot Fix 2 Problem Short description Description Project ManagementPRB702989 Performance delay while loading a top project (which has 8000+ project tasks) via planning console There is a performance delay while loading a top project via the planning console. This issue occurs in a Helsinki Patch 1 instance when trying to load a top project with 8000 tasks. The browser (e.g. Chrome, Firefox) is having a hard time keeping up. Fixes included with Helsinki Patch 1 1 Hot Fix 1 The PRBs in HP1HF1 were also fixed in * HP2, HP3: PRB687962 HP3: PRB686241 Helsinki Patch
https://docs.servicenow.com/bundle/helsinki-release-notes/page/release-notes/r_Helsinki-Patch-1-HF-2-PO.html
2018-03-17T06:43:25
CC-MAIN-2018-13
1521257644701.7
[]
docs.servicenow.com
Step 2: Check the Environment Topics Check for Service Outages Amazon EMR uses several Amazon Web Services internally. It runs virtual servers on Amazon EC2, stores data and scripts on Amazon S3, indexes log files in Amazon SimpleDB, and reports metrics to CloudWatch. Events that disrupt these services are rare — but when they occur — can cause issues in Amazon EMR. Before you go further, check the Service Health Dashboard. Check the region where you launched your cluster to see whether there are disruption events in any of these services. Check Usage Limits If you are launching a large cluster, have launched many clusters simultaneously, or you are an IAM user sharing an AWS account with other users, the cluster may have failed because you exceeded an AWS service limit. Amazon EC2 limits the number of virtual server instances running on a single AWS region to 20 on-demand or reserved instances. If you launch a cluster with more than 20 nodes, or launch a cluster that causes the total number of EC2 instances active on your AWS account to exceed 20, the cluster will not be able to launch all of the EC2 instances it requires and may fail. When this happens, Amazon EMR returns an EC2 QUOTA EXCEEDED error. You can request that AWS increase the number of EC2 instances that you can run on your account by submitting a Request to Increase Amazon EC2 Instance Limit application. Another thing that may cause you to exceed your usage limits is the delay between when a cluster is terminated and when it releases all of its resources. Depending on its configuration, it may take up to 5-20 minutes for a cluster to fully terminate and release allocated resources. If you are getting an EC2 QUOTA EXCEEDED error when you attempt to launch a cluster, it may be because resources from a recently terminated cluster may not yet have been released. In this case, you can either request that your Amazon EC2 quota be increased, or you can wait twenty minutes and re-launch the cluster. Amazon S3 limits the number of buckets created on an account to 100. If your cluster creates a new bucket that exceeds this limit, the bucket creation will fail and may cause the cluster to fail. Check the Amazon VPC Subnet Configuration If your cluster was launched in a Amazon VPC subnet, the subnet needs to be configured as described in Plan and Configure Networking. In addition, check that the subnet you launch the cluster into has enough free elastic IP addresses to assign one to each node in the cluster. Restart the Cluster The slow down in processing may be caused by a transient condition. Consider terminating and restarting the cluster to see if performance improves.
https://docs.aws.amazon.com/emr/latest/ManagementGuide/emr-troubleshoot-slow-2.html
2018-03-17T06:49:52
CC-MAIN-2018-13
1521257644701.7
[]
docs.aws.amazon.com
Electrode React Component Archetype The Electrode Component Archetype helps developers quickly build React components. Recently, the WalmartLabs Electrode team made a significant restructure to the component directory using Lerna to manage the component structure. With the new structure, you can have multiple components inside the package directory and a single demo app to test all the components. Generally, you keep one component in one repo, but you can also have multiple smaller components that complement each other. We also moved demo-server to a directory outside of the packages, imported components from packages, and presented components using the demo app. Let's go through the most important sections to understand the new Component Archetype together and we will cover how to generate Electrode components by using the latest Electrode generators in the Create an Electrode Component section.
https://docs.electrode.io/chapter1/intermediate/component-archetype/
2018-03-17T06:15:41
CC-MAIN-2018-13
1521257644701.7
[]
docs.electrode.io
Microsoft Security Bulletin MS15-113 - Critical Cumulative Security Update for Microsoft Edge (3104519) Published: November 10, 2015 Version: 1.0 Executive Summary. This security update is rated Critical for Microsoft Edge on Windows 10. For more information, see the Affected Software section. The update addresses the vulnerabilities by modifying how Microsoft Edge handles objects in memory and by helping to ensure that Microsoft Edge properly implements the ASLR security feature. For more information about the vulnerabilities, see the Vulnerability Information section. For more information about this update, see Microsoft Knowledge Base Article 3104519. Affected Software The following software versions or editions are affected. Versions or editions that are not listed are either past their support life cycle or are not affected. To determine the support life cycle for your software version or edition, see Microsoft Support Lifecycle. Affected Software . Where specified in the Severity Ratings and Impact table, Critical, Important, and Moderate values indicate severity ratings. For more information, see Security Bulletin Severity Rating System. Refer to the following key for the abbreviations used in the table to indicate maximum impact: Vulnerability Information Multiple Microsoft Edge Memory Corruption Vulnerabilities Multiple remote code execution vulnerabilities exist when Microsoft Edge improperly accesses objects in memory. The vulnerabilities could corrupt memory in such a way that an attacker could execute arbitrary code in the context of the current user. An attacker could host a specially crafted website that is designed to exploit the vulnerabilities through Microsoft Edge, and then convince a user to view the website. The attacker could also take advantage of compromised websites and websites that accept or host user-provided content or advertisements by adding specially crafted content that could exploit the vulnerabilities.. Systems where Microsoft Edge is used frequently, such as workstations or terminal servers, are at the most risk from the vulnerabilities. The update addresses the vulnerabilities by modifying how Microsoft Edge. Microsoft Browser ASLR Bypass – CVE-2015-6088 A security feature bypass exists when Microsoft Edge Edge. Edge properly implement the ASLR security feature. Microsoft received information about this bypass through coordinated bypass-10 08:13-08:00.
https://docs.microsoft.com/en-us/security-updates/SecurityBulletins/2015/ms15-113
2018-03-17T07:40:29
CC-MAIN-2018-13
1521257644701.7
[]
docs.microsoft.com
You can contact us at [email protected] To get your support related question answered in the fastest timing, please head over to our support page and open Support ticket. To open a support ticket you should have an active support license associated with your account. We recommend you to check out our one-stop resources – FAQ section and knowledge base to find quick answers to your questions. Moreover, the knowledge base contains information for developers on how they can integrate Aheto Plugin in their theme or even create unique add-ons.
https://docs.foxthemes.me/knowledge-base/2483-manual-support/
2021-11-27T08:15:02
CC-MAIN-2021-49
1637964358153.33
[]
docs.foxthemes.me
GroupDocs.Merger for .NET 21.6 Release Notes This page contains release notes for GroupDocs.Merger for .NET 21.6 Major Features There are a few fixed bugs and improvements in this regular monthly release. The most notable are: - Expanded IJoinOptions with FileType property and involved it to Join method; - GeneratePreview fails under linux. Full List of Issues Covering all Changes in this Release Public API and Backward Incompatible Changes This section lists public API changes that were introduced in GroupDocs.Merger for .NET 21.
https://docs.groupdocs.com/merger/net/groupdocs-merger-for-net-21-6-release-notes/
2021-11-27T08:30:51
CC-MAIN-2021-49
1637964358153.33
[]
docs.groupdocs.com
Opsgenie Python Account API To use the Account functionalities of Opsgenie's API via the Python SDK you will first have to import the SDK library and (configure)(doc:opsgenie-python-api-v2-1#section-client-initialization) it with the API Key that you procured from your Opsgenie Integrations. For additional configurations, you may refer to the Python SDK Configurations page present in the Opsgenie documentation. # Importing relevant Opsgenie SDK libraries including the Alert API client import opsgenie_sdk class Example: def __init__(self, opsgenie_api_key): self.conf = self.conf = opsgenie_sdk.configuration.Configuration() self.conf.api_key['Authorization'] = '<Your-API-Key>' self.api_client = opsgenie_sdk.api_client.ApiClient(configuration=self.conf) self.account_api = opsgenie_sdk.AccountApi(api_client=self.api_client) Get Account Info You can get Account information pertinent to your account using the Alert client that is initialized prior to performing any other Opsgenie API functionality. To get information related to your account, use the get_info() function which returns a GetAccountInfoResponse. Check the Account API documentation for more. def get_info(self): try: info_response = self.account_api.get_info() print(info_response) return info_response except ApiException as err: print("Exception when calling AccountApi->get_info: %s\n" % err) Updated 8 months ago Did this page help you?
https://docs.opsgenie.com/docs/opsgenie-python-account-api
2021-11-27T08:05:58
CC-MAIN-2021-49
1637964358153.33
[]
docs.opsgenie.com
improved Revamped time series charts 2 months ago by Zac Yang We have revamped the time series charts in the results user interface. The new charts load & render significantly faster, particularly for results with thousands of data points - such as hourly foot traffic over several years. In addition, the new charts provide a sleeker experience for zooming & panning, and smarter date axes ticks.
https://docs.orbitalinsight.com/changelog/revamped-time-series-charts
2021-11-27T08:35:26
CC-MAIN-2021-49
1637964358153.33
[]
docs.orbitalinsight.com
OVM_BridgeDepositBox OVM_BridgeDepositBox# OVM specific bridge deposit box. Uses OVM cross-domain-enabled logic for access control. #Functions constructor(address _crossDomainAdmin, uint64 _minimumBridgingDelay, uint256 _chainId, address _l1Weth, address timerAddress) (public) Construct the Optimism Bridge Deposit Box #Parameters: - _crossDomainAdmin: Address of the L1 contract that can call admin functions on this contract from L1. - _minimumBridgingDelay: Minimum second that must elapse between L2->L1 token transfer to prevent dos. - _chainId: L2 Chain identifier this deposit box is deployed on. - _l1Weth: Address of Weth on L1. Used to inform if the deposit should wrap ETH to WETH, if deposit is ETH. - timerAddress: Timer used to synchronize contract time in testing. Set to 0x000... in production. setCrossDomainAdmin(address newCrossDomainAdmin) (public) Changes the L1 contract that can trigger admin functions on this L2 deposit deposit box. This should be set to the address of the L1 contract that ultimately relays a cross-domain message, which is expected to be the Optimism_Messenger. Only callable by the existing admin via the Optimism cross domain messenger. #Parameters: - newCrossDomainAdmin: address of the new L1 admin contract. setMinimumBridgingDelay(uint64 newMinimumBridgingDelay) (public) Changes the minimum time in seconds that must elapse between withdraws from L2->L1. Only callable by the existing crossDomainAdmin via the optimism cross domain messenger. #Parameters: - newMinimumBridgingDelay: the new minimum delay. whitelistToken(address l1Token, address l2Token, address l1BridgePool) (public) Enables L1 owner to whitelist a L1 Token <-> L2 Token pair for bridging. Only callable by the existing crossDomainAdmin via the optimism cross domain messenger. ) (public) L1 owner can enable/disable deposits for a whitelisted token. Only callable by the existing crossDomainAdmin via the optimism cross domain messenger. #Parameters: - l2Token: address of L2 token to enable/disable deposits for. - depositsEnabled: bool to set if the deposit box should accept/reject deposits. bridgeTokens(address l2Token, uint32 l1Gas) (public) Called by relayer (or any other EOA) to move a batch of funds from the deposit box, through the canonical token bridge, to the L1 Withdraw box. The frequency that this function can be called is rate limited by the minimumBridgingDelay to prevent spam on L1 as the finalization of a L2->L1 tx is quite expensive. #Parameters: - l2Token: L2 token to relay over the canonical bridge. - l1Gas: Unused by optimism, but included for potential forward compatibility considerations. _setCrossDomainAdmin(address newCrossDomainAdmin) (internal)XDomainAdmin(address newAdmin)FromCrossDomainAccount(address _sourceDomainAccount) Enforces that the modified function is only callable by a specific cross-domain account. Parameters: - _sourceDomainAccount: The only account on the originating domain which is authenticated to call this function..
https://docs.umaproject.org/contracts/insured-bridge/ovm/OVM_BridgeDepositBox
2021-11-27T08:21:43
CC-MAIN-2021-49
1637964358153.33
[]
docs.umaproject.org
a persistent data directory .
https://docs.unity3d.com/2019.3/Documentation/ScriptReference/Application-persistentDataPath.html
2021-11-27T08:06:20
CC-MAIN-2021-49
1637964358153.33
[]
docs.unity3d.com
The dashboard shows an overview of the overall site data with a statistic chart and table. A quick view of the topmost selling event with the total number of bookings. Overall sales reports of all the events with organizer earnings, admin commission, payout status, and more. You can also filter sales reports by Events.
https://eventmie-pro-docs.classiebit.com/docs/1.5/admin/dashboard
2021-11-27T07:43:01
CC-MAIN-2021-49
1637964358153.33
[]
eventmie-pro-docs.classiebit.com
Backtrace (show call-stack)¶ backtrace [options] [count] Print backtrace of all stack frames, or innermost count frames. With a negative argument, print outermost -count frames. An arrow indicates the ‘current frame’. The current frame determines the context used for many debugger commands such as expression evaluation or source-line listing. opts are: -d | –deparse - show deparsed call position -s | –source - show source code line -f | –full - locals of each frame -h | –help - give this help Examples:¶ backtrace # Print a full stack trace backtrace 2 # Print only the top two entries backtrace -1 # Print a stack trace except the initial (least recent) call. backtrace -s # show source lines in listing backtrace -d # show deparsed source lines in listing backtrace -f # show with locals backtrace -df # show with deparsed calls and locals backtrace --deparse --full # same as above See also frame, info locals, deparse and list.
https://python2-trepan.readthedocs.io/en/latest/commands/stack/backtrace.html
2021-11-27T09:23:17
CC-MAIN-2021-49
1637964358153.33
[]
python2-trepan.readthedocs.io
Important You are viewing documentation for an older version of Confluent Platform. For the latest, click here. Developer Guide¶ Table of Contents - Code examples - Configuring a Kafka Streams application - Writing a Kafka Streams application - Overview - Libraries and maven artifacts - Using Kafka Streams within your application code - Processor API - Kafka Streams DSL - Interactive Queries - Overview - Demo applications - Your application and interactive queries - Querying local state stores (for an application instance) - Querying remote state stores (for the entire application) - Memory management - Running a Kafka Streams application - Managing topics of a Kafka Streams application - Data types and serialization - Security - Application Reset Tool Code examples¶ Before we begin the deep-dive into Kafka Streams in the subsequent sections, you might want to take a look at a few examples first. Application examples for Kafka Streams. Application examples for Kafka Streams Security examples¶ - Java programming language - Without lambda expressions for Java 7+: Interactive Queries examples¶ As of Kafka 0.10.1.0 it is possible to query state stores created via the Kafka Streams DSL and the Processor API. Please refer to Interactive Queries for further information. - Java programming language - With lambda expressions for Java 8+: - WordCountInteractiveQueriesExample - KafkaMusicExample". Note ZooKeeper dependency of Kafka Streams and zookeeper.connect: This configuration option is temporary and will be removed post 0.10.1.0 release.); Some consumer and producer configuration parameters do use the same parameter name. For example, send.buffer.bytes or receive.buffer.bytes which are used to configure TCP buffers; request.timeout.ms and retry.backoff.ms which control retries for client request (and some more). If you want to set different values for consumer and producer for such a parameter, you can prefix the parameter name with consumer. or producer.: Properties streamsSettings = new Properties(); // same value for consumer and producer streamsSettings.put("PARAMETER_NAME", "value"); // different values for consumer and producer streamsSettings.put("consumer.PARAMETER_NAME", "consumer-value"); streamsSettings.put("producer.PARAMETER_NAME", "producer-value"); // alternatively, you can use streamsSettings.put(StreamsConfig.consumerPrefix("PARAMETER_NAME"), "consumer-value"); streamsSettings.put(StreamsConfig.producerPrefix("PARAMETER_NAME"), "producer-value"); In addition, Kafka Streams uses different default values for some of the underlying client configs, which are summarized below. For detailed descriptions of these configs, please read Producer Configs and Consumer Configs in Apache Kafka web documentation respectively. RocksDB Configuration: Kafka Streams uses RocksDB as the default storage engine for persistent stores. In order to change the default configuration values for RocksDB, you need to implement RocksDBConfigSetter and provide your custom class via rocksdb.config.setter. public class CustomRocksDBConfig implements RocksDBConfigSetter { // put your code here } Properties streamsSettings = new Properties(); streamsConfig.put(StreamsConfig.ROCKSDB_CONFIG_SETTER_CLASS_CONFIG, CustomRocksDBConfig.class); Consumer Auto Commit (enable.auto.commit): To guarantee at-least-once processing semantics, Kafka Streams will always override this consumer config value to false in order to turn off auto committing. Instead, consumers will only commit explicitly via commitSync calls when Kafka Streams library or users decide to commit the current processing state..1.0-cp2</version> </dependency> <dependency> <groupId>org.apache.kafka</groupId> <artifactId>kafka-clients</artifactId> <version>0.10.1.0-cp2</version> </dependency> <!-- Dependencies below are required/recommended only when using Apache Avro. --> <dependency> <groupId>io.confluent</groupId> <artifactId>kafka-avro-serializer</artifactId> <version>3.1. elapsed stream-time (by default, stream-time is configured to represent event-time). Thus, punctuate()is purely data-driven and not related to wall-clock time (even if you use WallclockTim) { Long oldValue = kvStore.get(word); if (oldValue == null) { kvStore.put(word, 1L); } else { kvStore.put(word, oldValue + 1L); } } } @Override public void punctuate(long timestamp) { KeyValueIterator<String, Long> iter = this.kvStore.all(); while (iter.hasNext()) { KeyValue<String, Long>. Enable / Disable State Store Changelogs¶ By default, persistent key-value stores are backed by a compacted topic, which we sometimes refer to as the state store’s associated changelog topic or simply changelog. The purpose of compacting this topic is to prevent the topic from growing out of bounds, to reduce the storage consumed in the associated Kafka cluster, and to minimize recovery time if a state store needs to be reconstructed from its changelog topic. Similarly, persistent window stores are backed by a topic that uses both compaction and deletion. Using deletion in addition to compaction is required for the changelog topics of window stores because of the structure of the message keys that are being sent to the changelog topics: for window stores, the message keys are composite keys that include not only the “normal” key but also window timestamps. For such composite keys it would not be sufficient to enable just compaction in order to prevent a changelog topic from growing out of bounds. With deletion enabled old windows that have expired will be cleaned up by the log cleaner as the log segments expire. The default retention setting is Windows#maintainMs() + 1 day. This setting can be overriden by setting StreamsConfig.WINDOW_STORE_CHANGE_LOG_ADDITIONAL_RETENTION_MS_CONFIG in the StreamsConfig. Disable logging example: Attention If the changelog is disabled then the attached state store is no longer fault tolerant and it can’t have any standby replicas. StateStoreSupplier countStore = Stores.create("Counts") .withKeys(Serdes.String()) .withValues(Serdes.Long()) .persistent() .disableLogging() .build(); Enable logging with configuration example: You can add any log config from kafka.log.LogConfig. Unrecognized configs will be ignored. Map<String, String> logConfig = new HashMap(); // override min.insync.replicas logConfig.put("min.insync.replicas", "1"); StateStoreSupplier countStore = Stores.create("Counts") .withKeys(Serdes.String()) .withValues(Serdes.Long()) .persistent() .enableLogging(logConfig) .build(); with the defined processing state stores. a record stream, the resulting aggregate is no longer/KTable): groupBy, groupByKey(KStream) plus count, reduce, aggregate(via KGroupedStreamand+"))) // Group the stream by word to ensure the key of the record is the word. .groupBy((key, word) -> word) // Count the occurrences of each word (record key). // // This will change the stream type from `KGroupedStream<String, String>` to // `KTable<String, Long>` (word -> count). We must provide a name for // the resulting KTable, which will be used to name e.g. its associated // state store and changelog topic. .count(+")); } }) .groupBy(new KeyValueMapper<String, String, String>>() { @Override public String apply(String key, String word) { return word; } }) .count( guarantees to keep a window for at least this specified time. The default value is one day and can be changed.. Thus, sliding windows are not aligned to the epoch, but on the data record timestamps...groupByKey() .count(.groupByKey(). .count( via, you can also apply a custom processor as a stream sink at the end of the processing (cf. above) to write to external databases and systems. If you do, please be aware that it is now your responsibility to guarantee message delivery semantics when talking to such external systems (e.g., to retry on delivery failure or to prevent message duplication). Interactive Queries¶ Overview¶ Introduced in Kafka 0.10.1,. The following diagrams. Demo applications¶ Before we explain interactive queries in detail, let us point out that we provide. Once you have familiarized yourself with the concept of interactive queries by reading the following sections, you may want to get back to the examples above and use them as a starting point for your own applications. Your application and interactive queries¶ Interactive queries allow you to tap into the state of your application, and notably to do that from outside your application. However, an application is not interactively queryable out of the box: you make it queryable by leveraging the API of Kafka Streams. It is important to understand that the state of your application – to be extra clear, we might call it “the full state of the entire application” – is typically split across many distributed instances of your application, and thus across many state stores that are managed locally by these application instances. Accordingly, the API to let you interactively query your application’s state has two parts, a local and a remote one: - Querying local state stores (for an application instance): You can query that (part of the full) state that is managed locally by an instance of your application. Here, an application instance can directly query its own local state stores. You can thus use the corresponding (local) data in other parts of your application code that are not related to calling the Kafka Streams API. Querying state stores is always read-only to guarantee that the underlying state stores will never be mutated out-of-band, e.g. you cannot add new entries; state stores should only ever be mutated by the corresponding processor topology and the input data it operates on. - Querying remote state stores (for the entire application): To query the full state of your entire application we must be able to piece together the various local fragments of the state. In addition to being able to (a) query local state stores as described in the previous bullet point, we also need to (b) discover all the running instances of your application in the network, including their respective state stores and (c) have a way to communicate with these instances over the network, i.e. an RPC layer. Collectively, these building blocks enable intra-app communcation (between instances of the same app) as well as inter-app communication (from other applications) for interactive queries. Kafka Streams provides all the required functionality for interactively querying your application’s state out of the box, with but one exception: if you want to expose your application’s full state via interactive queries, then – for reasons we explain further down below – it is your responsibility to add an appropriate RPC layer to your application that allows application instances to communicate over the network. If, however, you only need to let your application instances access their own local state, then you do not need to add such an RPC layer at all. Querying local state stores (for an application instance)¶ Important A Kafka Streams application is typically running on many instances. The state that is locally available on any given instance is only a subset of the application’s entire state. Querying the local stores on an instance will, by definition, only return data locally available on that particular instance. We explain how to access data in state stores that are not locally available in section Querying remote state stores (for the entire application). The method KafkaStreams#store(...) finds an application instance’s local state stores by name and by type. The name of a state store is defined when you are creating the store, either when creating the store explicitly (e.g. when using the Processor API) or when creating the store implicitly (e.g. when using stateful operations in the DSL). We show examples of how to name a state store further down below. The type of a state store is defined by QueryableStoreType, and. Both store types return read-only versions of the underlying state stores. This read-only constraint is important to guarantee that the underlying state stores will never be mutated (e.g. new entries added) out-of-band, i.e. only the corresponding processing topology of Kafka Streams is allowed to mutate and update the state stores in order to ensure data consistency. You can also implement your own QueryableStoreType as described in section Querying local custom state stores. Note Kafka Streams materializes one state store per stream partition, which means your application will potentially manage many underlying state stores. The API to query local state stores enables you to query all of the underlying stores without having to know which partition the data is in. The objects returned from KafkaStreams#store(...) are therefore wrapping potentially many underlying state stores. Querying local key-value stores¶ To query a local key-value store, you must first create a topology with a key-value key-value store named "CountsKeyValueStore" for the all-time word counts groupedByWord.count("CountsKeyValueStore"); // Start an instance of the topology KafkaStreams streams = new KafkaStreams(builder, config); streams.start(); Above we created a key-value store named “CountsKeyValueStore”. This store will hold the latest count for any word that is found on the topic “word-count-input”. Once the application has started); } // Get the values for all of the keys available in this application instance KeyValueIterator<String, Long> range = keyValueStore.all(); while (range.hasNext()) { KeyValue<String, Long> next = range.next(); System.out.println("count for " + next.key + ": " + value); } Querying local window stores¶ A window store differs from a key-value store in that you will potentially have many results for any given key because the key can be present in multiple windows. However, there will ever be at most one result per window for a given key. To query a local window store, you must first create a topology with a window window state store named "CountsWindowStore" that contains the word counts for every minute groupedByWord.count(TimeWindows.of(60000), "CountsWindowStore"); Above we created a window store named “CountsWindowStore” that contains the counts for words in 1-minute windows. Once the application has started); } Querying local custom state stores¶ Note Custom state stores can only be used through the Processor API. They are not currently supported by the DSL. Any custom state stores you use in your Kafka Streams applications can also be queried. However there are some interfaces that will need to be implemented first: - Your custom state store must implement StateStore. - You should have an interface to represent the operations available on the store. - It is recommended that you also provide an interface that restricts access to read-only operations so users of this API can’t mutate the state of your running Kafka Streams application out-of-band. - You also need to provide an implementation of StateStoreSupplierfor creating instances of your store.Supplier implements StateStoreSupplier { // implementation of the supplier for MyCustomStore } To make this store queryable you need to: - Provide an implementation of QueryableStoreType. - Provide a wrapper class that will have access to all of the underlying instances of the store and will be used for querying. Implementing QueryableStoreType is straight forward: even a single instance of a Kafka Streams application may run multiple stream tasks and, by doing so, manage multiple local instances of a particular state store. The wrapper class hides this complexity and lets you query a “logical” state store with a particular name without having to know about all of the underlying local instances of that state store. When implementing your wrapper class you will need to make use of. An example implemention); } } Putting it all together you can now find and query your custom store: StreamsConfig config = ...; TopologyBuilder builder = ...; ProcessorSupplier processorSuppler = ...; // Create CustomStoreSupplier for store name the-custom-store MyCustomStoreSuppler customStoreSupplier = new MyCustomStoreSupplier("the-custom-store"); // Add the source topic builder.addSource("input", "inputTopic"); // Add a custom processor that reads from the source topic builder.addProcessor("the-processor", processorSupplier, "input"); // Connect your custom state store to the custom processor above builder.addStateStore(customStoreSupplier, "the-processor"); KafkaStreams streams = new KafkaStreams(builder, config); streams.start(); // Get access to the custom store MyReadableCustomStore<String,String> store = streams.store("the-custom-store", new MyCustomStoreType<String,String>()); // Query the store String value = store.read("key"); Querying remote state stores (for the entire application)¶ Typically, the ultimate goal for interactive queries is not to just query locally available state stores from within an instance of a Kafka Streams application as described in the previous section. Rather, you want to expose the application’s full state (i.e. the state across all its instances) to other applications that might be running on different machines. For example, you might have a Kafka Streams application that processes the user events in a multi-player video game, and you want to retrieve the latest status of each user directly from this application so that you can display it in a mobile companion app. Three steps are needed to make the full state of your application queryable: - You must add an RPC layer to your application so that the instances of your application may be interacted with via the network – notably to respond to interactive queries. By design Kafka Streams does not provide any such RPC functionality out of the box so that you can freely pick your favorite approach: a REST API, Thrift, a custom protocol, and so on. You can follow the reference examples we provide to get started with this (details further down below). - You need to expose the respective RPC endpoints of your application’s instances via the application.serverconfiguration setting of Kafka Streams. Because RPC endpoints must be unique within a network, each instance will have its own value for this configuration setting. This makes an application instance discoverable by other instances. - In the RPC layer, you can then discover remote application instances and their respective state stores (e.g. for forwarding queries to other app instances if an instance lacks the local data to respond to a query) as well as query locally available state stores (in order to directly respond to queries) in order to make the full state of your application queryable. Discover any running instances of the same application as well as the respective RPC endpoints they expose for interactive queries Adding an RPC layer to your application¶ As Kafka Streams doesn’t provide an RPC layer you are free to choose your favorite approach. There are many ways of doing this, and it will depend on the technologies you have chosen to use. the remote discovery of state stores running within a (typically distributed) Kafka Streams application you need to set the application.server configuration property in StreamsConfig. The application.server property defines a unique host:port pair that points to the RPC endpoint of the respective instance of a Kafka Streams application. It’s important to understand that the value of this configuration property varies across the instances of your application. When this property is set, then, for every instance of an application, Kafka Streams will keep track of the instance’s RPC endpoint information, its state stores, and assigned stream partitions through instances of StreamsMetadata. Tip You may also consider leveraging the exposed RPC endpoints of your application for further functionality, such as piggybacking additional inter-application communication that go beyond interactive queries. Below is an example of configuring and runningConfig config = new StreamsConfig(props); KStreamBuilder builder = new KStreamBuilder(); KStream<String, String> textLines = builder.stream(stringSerde, stringSerde, "word-count-input"); final KGroupedStream<String, String> groupedByWord = textLines .flatMapValues(value -> Arrays.asList(value.toLowerCase().split("\\W+"))) .groupBy((key, word) -> word, stringSerde, stringSerde); // This call to `count()` creates a state store named "word-count". groupedByWord.count("word-count"); // ... definition of the processing topology follows here ... // Start an instance of the topology KafkaStreams streams = new KafkaStreams(builder, streamsConfiguration); respective local state stores¶ With the application.server property set, we can now find the locations of remote app instances and their state stores.: - We can discover the running instances of the application as well as the state stores they manage locally. - Through the RPC layer that was added to the application,. Now, if you are interested in seeing how an end-to-end application with interactive queries looks like, then we recommend to take a look at the demo applications we provide. Memory management¶ Record caches in the DSL¶ Developers of a Kafka Streams application using the DSL have the option to specify, for an instance of a processing topology, the total memory (RAM) size of the record cache that is used for: - Internal caching and compacting of output records before they are written from a processor node to its state stores, if any. - Internal caching and compacting of output records before they are forwarded from a processor node to downstream processor nodes, if any. Here is a motivating example: - The input is a sequence of records <K,V>: <1, 1>, <88, 5>, <1, 20>, <1, 300>(Note: The focus in this example is on the records with key == 1.) - A processor node computes the sum of values, grouped by key, for the input above. Without caching, what is emitted for key 1is a sequence of output records: <1, null>, <1, 1>, <1, 21>, <1, 321>. - With caching, all three records for key 1 would likely be compacted in cache, leading to a single output record <1, 321>that is written to the state store and being forwarded to any downstream processor nodes. The cache size is specified through the cache.max.bytes.buffering parameter: ... // Enable record cache of size 10 MB. config. I.e., there are as many caches as there are threads, but no sharing of caches across threads happens. The basic API for the cache is made of put() and get() calls. Records are evicted using a simple LRU scheme once>, which we also refer to as “being compacted”. Note that this has the same effect as Kafka’s log compaction, but happens earlier while the records are still in memory. Upon flushing R2 is 1) forwarded to the next processing node and 2) written to the local state store. The semantics of caching is that data is flushed to the state store and forwarded to the next downstream processor node whenever the earliest of commit.interval.ms or cache pressure hits. Both commit.interval.ms and cache.max.bytes.buffering are global parameters. They apply to all processor nodes in the topology, i.e., it is not possible to specify different parameters for each node. Below we provide some example settings for both parameters based on desired scenarios. To turn off caching the cache size can be set to zero: ... // Disable record cache config, RocksDB Block Cache could be set to 100MB and Write Buffer size to 32 MB. See RocksDB config. To enable caching but still have an upper bound on how long records will be cached, the commit interval can be set appropriately (in this example, it is set to 1000 milliseconds): ... // Enable record cache of size 10 MB. config.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 10 * 1024 * 1024L); // Set commit interval to 1 second. config.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 1000); The illustration below shows the effect of these two configurations visually. For simplicity we have records with 4 keys: blue, red, yellow and green. Without loss of generality, let’s assume the cache has space for only 3 keys. When the cache is disabled, we observer that all the input records will be output. With the cache enabled, we make the following observations. First, most records are output at the end of a commit intervals (e.g., at t1 one blue records is output, which is the final over-write of the blue key up to that time). Second, some records are output because of cache pressure, i.e. before the end of a commit interval (cf. the red record right before t2). With smaller cache sizes we expect cache pressure to be the primary factor that dictates when records are output. With large cache sizes, the commit interval will be the primary factor. Third, the number of records output has been reduced (here: from 15 to 8). State store caches in the Processor API¶ Developers of a Kafka Streams application using the Processor API have the option to specify, for an instance of a processing topology, the total memory (RAM) size of the state store cache that is used for: - Internal caching and compacting of output records before they are written from a processor node to its state stores, if any. - Internal caching of output records before they are forwarded from a processor node to downstream processor nodes, if any. Note that, unlike record caches in the DSL, the state store cache in the Processor API will not compact any output records that are being forwarded downstream. In other words, downstream nodes see all records, whereas state stores see a reduced number of records. It is important to note that this does not impact correctness of the system but is merely a performance optimization for the state stores. A note on terminology: we use the narrower term state store caches when we refer to the Processor API and the broader term record caches when we are writing about the DSL. We made a conscious choice to not expose the more general record caches to the Processor API so that we keep it simple and flexible. For example, developers of the Processor API might chose to store a record in a state store while forwarding a different value downstream, i.e., they might not want to use the unified record cache for both state store and forwarding downstream. Following from the example first shown in section Defining a State Store, to enable caching, you can add the enableCaching call (note that caches are disabled by default and there is no explicit disableCaching call) : StateStoreSupplier countStore =. Running a Kafka Streams application¶ In this section we describe how you can launch your stream processing application, and how you can elasticly add capacity to and remove capacity from your application during runtime.. When the application instance starts running, the defined processor topology will be initialized as one or more stream tasks. If the processor topology defines any state stores, these state stores will also be (re-)constructed, if possible, during the initialization period of their associated stream tasks (for more details read the State restoration during workload rebalance section).-tolerant state in environments). State restoration during workload rebalance¶ As mentioned above, when the processing workload is rebalanced among the existing application instances either due to scaling changes (e.g. adding capacity to the application) or due to unexpected failures, some stream tasks will be migrated from one instance to another. And when a task is migrated, the processing state of this task will be fully restored before the application instance can resume processing in order to guarantee correct processing results. In Kafka Streams, state restoration is usually done by replaying the corresponding changelog topic to reconstruct the state store; additionally, users can also specify num.standby.replicas to minimize changelog-based restoration latency with replicated local state stores (see Standby Replicas for more details). As a result, when the stream task is (re-)initialized on the application instance, its state store is restored in the following way: - If no local state store exists then replay the changelog from the earliest to the current offset. In doing so, the local state store is reconstructed to the most recent snapshot. - If a local state store exists then replay the changelog from the previously checkpointed offset. Apply the changes to restore the state state to the most recent snapshot. This will take less time as it is applying a smaller portion of the changelog. How many application instances to run?¶ How many instances can or should you run for your application? Is there an upper limit for the number of instances and, similarly, for the parallelism of your application? In a nutshell, the parallelism of a Kafka Streams application – similar to the parallelism of Kafka – is primarily determined by the number of partitions of the input topic(s) from which your application is reading. For example, if your application reads from a single topic that has 10 partitions, then you can run up to 10 instances of your applications (note that you can run further instances but these will be idle).. Managing topics of a Kafka Streams application¶ A Kafka Streams application executes by continuously reading from some Kafka topics, processing the read data, and then writing process results back into Kafka topics. In addition, the application itself may also auto-create some other Kafka topics in the Kafka brokers such as state store changelogs topics. Therefore it is important for users to be able to understand the difference between these topics and understand how they could be managed along with Kafka Streams applications. In Kafka Streams, we distinguish between user topics and internal topics. Both kinds are normal Kafka topics but they are used differently and, in the case of internal topics, follow a specific naming convention. User topics: Topics that exist externally to a certain application that will be read or written by the application, including: - Input topics : topics that are specified via source processors in the application’s topology (e.g. KStreamBuilder#stream(), KStreamBuilder#table()and TopologyBuilder#addSource()methods). - Output topics : topics that are specified via sink processors in the application’s topology (e.g. KStream#to(), KTable.to()and TopologyBuilder#addSink()methods). - Intermediate topics : topics that are both input and output topics of the application’s topology (e.g., via the KStream#through()methods). In practice, all user topics must be created and managed manually ahead of time (e.g., via the topic tools). Note that in some cases these topics may be shared among multiple applications to read from or write to, in which case application users need to coordinate on managing such topics; in some other cases these topics are managed in a centralized way (e.g., by the team who operates the Kafka broker clusters) and application users then would not need to manage topics themselves but simply obtain access to them. Note Auto-creation of topics is strongly discouraged: It is strongly recommended to NOT rely on the broker-side topic auto-creation feature to create user). Internal topics: Topics that are used internally by the Kafka Streams application while executing, for example, the changelog topics for state stores. Such topics are created by the application under the hood, and exclusively used by that stream application only. If security is enabled on the Kafka brokers, users need to set the corresponding security configs to authorize the underlying clients with corresponding admin functionality to be able to create such topics (details can be found in section Security). Internal topics currently follow the naming convention <application.id>-<operatorName>-<suffix>, but this convention is not guaranteed for future releases. Data types and serialization¶ Overview¶ Every Kafka(), groupByKey(), groupBy(). Kafka.1.0-cp>).. Scope¶ As mentioned in section Managing topics of a Kafka Streams application, there are two different categories of topics in Kafka Streams: user topics (input, output, and intermediate), and internal topics. The reset tool treats these topics differently when resetting the application. What the application reset tool does: - For any specified input topics: - Reset to the beginning of the topic, i.e., set the application’s committed consumer offsets for all partitions to each partition’s earliestoffset tool with care and double-check its parameters: If you provide wrong parameter values (e.g. typos in application.id) or specify parameters inconsistently (e.g. specifying the wrong input topics for the application), this tool might invalidate the application’s state or even impact other applications, consumer groups, or Kafka topics of your Kafka cluster.: #.
https://docs.confluent.io/3.1.1/streams/developer-guide.html
2021-11-27T08:36:53
CC-MAIN-2021-49
1637964358153.33
[array(['../_images/streams-interactive-queries-api-02.png', '../_images/streams-interactive-queries-api-02.png'], dtype=object) array(['../_images/streams-elastic-scaling-1.png', '../_images/streams-elastic-scaling-1.png'], dtype=object) array(['../_images/streams-elastic-scaling-2.png', '../_images/streams-elastic-scaling-2.png'], dtype=object)]
docs.confluent.io
WebServer Synopsis [Startup] WebServer=n n is either 1 (true) or 0 (false). The default value is 1. Description When WebServer is enabled (n = 1), the Apache private web server starts when InterSystems IRIS® data platform starts. The private web server enables REST and SOAP APIs, as well as the Management Portal. For information on the private web server, see the section Private Web Server and the Management Portal in the Web Gateway Configuration Guide. In a secure (locked down) InterSystems IRIS container (described in Locked Down InterSystems IRIS Container in Running InterSystems Products in Containers), the private web server is always disabled. The Webserver parameter can be included in a configuration merge file to deploy locked down InterSystems IRIS containers with the private web server (and thus the Management Portal) enabled. Changing This Parameter On the Startup page of the Management Portal (System Administration > Configuration > Additional Settings > Startup), in the WebServer row, select Edit. Select WebServer to enable the private web server.). A restart is required for this setting to take effect.
https://docs.intersystems.com/healthconnectlatest/csp/docbook/stubcanonicalbaseurl/csp/docbook/DocBook.UI.Page.cls?KEY=RACS_WebServer
2021-11-27T09:16:33
CC-MAIN-2021-49
1637964358153.33
[]
docs.intersystems.com
How to change the way search results are displayed in SharePoint Server 2013 Share, quickly find information about a list item, we’ve set up a Search Center that searches across all of the lists. Throughout this series, I’ll show you how I’ve changed the way search results are displayed from this… ... to this: In this series, we’ll cover: - Understanding how search results are displayed - Understanding how item display templates and hit highlighting work - How to create a new result type - How to display values from custom properties in search results - option 1 - How to display values from custom properties in search results - option 2 - How to display values from custom managed properties in the hover panel - How to add a custom action to the hover panel - How to change the text that is displayed in the Search Box Web Part --- - Addendum: How to change the order in which search results are displayed in SharePoint Server 2013 How. Here’s how to understand this high level representation in the context of Microsoft’s internal Search Center. - A Microsoft writer creates a list item for an article she'll be writing. Site columns, such as Title, Content Summary and Technical Subject, are used to store values, or in other words, information, about the article. - The list has been marked for continuous crawl. This means that that the list will be crawled at a set interval, for example, every minute.You can see the crawl schedule in List Settings --> Catalog Setting. - From Site Settings --> Search Schema you can search for managed properties.In my scenario, there’s a managed property named ContentSummaryOWSMTXT, and another one named owstaxIdTechnicalSubject. They represent the site columns Content Summary and Technical Subject (for more details about the “transformation” of site columns into managed properties, see the blog post From site column to managed property - What's up with that?). - On a search page, a user enters a query, for example customize search results. - On a search results page, search results are displayed in a Search Results Web Part. The Web Part uses display templates that specify that the values from the managed properties ContentSummaryOWSMTXT and owstaxIdTechnicalSubject should be displayed in the search results (the display templates specify many other things as well, but for now, let’s just concentrate on the values of these two managed properties). The second search result is the list item created in step 1. We can see that the values from the managed properties ContentSummaryOWSMTXT and owstaxIdTechnicalSubject are are displayed in the search result. Understanding how search results are displayed Additional Resources Overview of search in SharePoint Server 2013
https://docs.microsoft.com/en-us/archive/blogs/tothesharepoint/how-to-change-the-way-search-results-are-displayed-in-sharepoint-server-2013
2021-11-27T08:00:18
CC-MAIN-2021-49
1637964358153.33
[]
docs.microsoft.com
!- hide page elements from anonymous users -> changes.mady.by.user Andy Anderson Saved on Jan 26, 2018 Saved on Feb 09, 2018 ... Whilst our service desk will always try to be helpful, they can only support the OpenAthens end. Information about ExLibris Summon was correct at the time of writing. Powered by a free Atlassian Confluence Community License granted to eduserv. Evaluate Confluence today.
https://docs.openathens.net/pages/diffpages.action?pageId=328037&originalId=327844
2021-11-27T09:33:35
CC-MAIN-2021-49
1637964358153.33
[]
docs.openathens.net
Reports¶ - We have three types of reports: - Sales by Vendor - Total Sales Report - Settlement Report Sales by Vendor Report¶ - Click on ‘View Report’ - A form will appear, where you need to provide inputs for generating a report. - You may either select ‘All Vendors’ if you want the sales report of all the Vendors together or you can select a particular Vendor from the list, if you want to only view the sales report for that particular Vendor. - You can provide the ‘From’ and ‘To’ date for generating the report. - On clicking on ‘Generate Report’, the report will be ready. - The report can be exported to excel/CSV. Total Sales Report¶ - Click on ‘View Report’ - Select the period for which you want to generate the total sales report. Radio buttons are available for selection. - Mention the dates, if you want to generate reports for a particular date range. This is not a mandatory field. - Then, you may select a price range, if you want to generate report for only the sales between that particular range. - On clicking on ‘Generate Report’ the report will be ready. - The report can be exported to excel/CSV.
https://docs.spurtcommerce.com/Admin/MarketPlace/Reports.html
2021-11-27T07:58:00
CC-MAIN-2021-49
1637964358153.33
[]
docs.spurtcommerce.com
Quick Start Contents To use Genesys Predictive Routing (GPR), you'll need to install and configure the products and components listed below. Install - Install any Genesys components that aren't already part of your environment. You'll need: - Interaction Concentrator - Genesys Info Mart - Install Genesys Predictive Routing, which consists of the following components: - Data Loader—Imports agent and interaction data automatically from the Genesys Info Mart Database. Uploads customer and outcome data from user-prepared CSV files. - Data Loader is deployed in a Docker container and requires that you deploy a supported version of Docker. - URS Strategy Subroutines Configure To complete your setup of Predictive Routing, configure the following components: - Set the desired values for the configuration options, as described in the configuration instructions for the URS Strategy Subroutines component and Configure Data Loader to upload data. The complete list of all Predictive Routing configuration options is available from the Genesys Predictive Routing Options Reference. - You use configuration options to configure a wide range of application behavior, including: - The mode GPR is running in, which might be off. - Schemas and other configuration for Data Loader and data uploads. - Login parameters and access URLs. - KPI criteria to decide what makes for a better match. - Scoring thresholds, agent hold-out, and dynamic interaction priority. - Many other important functions. - (STAFF users only) To configure how the match between interactions and agents is determined, follow the procedures accessed from the Create and Test Predictors and Models page, located in the Predictive Routing Help. Review your data Data Loader draws interaction and agent data from the Genesys Info Mart Database. Using Data Loader, you can also import agent, customer, and outcome data compiled in .csv format. The GPR web application enables you to view and analyze your uploaded data. The resulting data is uploaded to the GPR Core Platform as datasets. The agent and customer datasets are given the particular names of Agent Profile and Customer Profile. A dataset is a collection of raw event data. The primary purpose of a dataset is to be the source of predictor data. - You configure the dataset schemas and upload parameters using configuration options set on the Data Loader <tt>Application</tt> object. - After Data Loader imports a dataset, you can append additional data as long as it is consistent with the schema that has already been established. Create predictors and models Note: Only STAFF users can create predictors and models. Predictors are based on the dataset information that you have imported. - A predictor defines a view on that underlying dataset. It can select from some or all of the data in the dataset; you can use a predictor with multiple datasets. - A model is based on a predictor, and uses the same target metric or KVP as that predictor. You can configure multiple models for each predictor. These models can use different selections of the features available in the underlying dataset. Models are the objects actually used to perform agent scoring and interaction matchups. As you configure a predictor, you can choose which metric you want to work with, what kinds of situations you want to evaluate, and other parameters, constructing a way to determine the Next Best Action in the specified situation, based on the possible actions available at that time. As you gather more data, you can add that new data to your dataset, and have the predictor test against the actual results coming in, enabling you to refine how successful your predictor is. Reporting and analysis You can report on various parameters, such as: - The success of your predictors. - The results of A/B testing. - The factors affecting a KPI you are trying to influence. Reports are available through the following reporting applications: - The Predictive Routing interface. See Monitor trends and performance in the Predictive Routing Help for specific information. - Genesys CX Insights (GCXI), as part of the Genesys historical reporting offering. The following reports are available in the Historical Reporting with Genesys CX Insights documentation: - Pulse, which provides the Agent Group KPIs by Predictive Model and Queue KPIs by Predictive Model templates for real-time reporting.
https://all.docs.genesys.com/PE-GPR/9.0.0/Deployment/quickStart
2021-11-27T08:55:21
CC-MAIN-2021-49
1637964358153.33
[]
all.docs.genesys.com
Date: Tue, 19 Jan 2010 17:05:24 -0800 From: Diego Montalvo <[email protected]> To: [email protected] Subject: Restarting after Make Install.... Message-ID: <[email protected]> Next in thread | Raw E-Mail | Index | Archive | Help Been using FreeBSD for a long time now, but have never really been sure if FreeBSD needs to be restarted after installing a Port or Ports using "make install clean"? What is the best practice... Used to restarting Windows for everything... Diego Want to link to this message? Use this URL: <>
https://docs.freebsd.org/cgi/getmsg.cgi?fetch=556806+0+/usr/local/www/mailindex/archive/2010/freebsd-questions/20100124.freebsd-questions
2021-11-27T07:42:46
CC-MAIN-2021-49
1637964358153.33
[]
docs.freebsd.org
4.6.1. Source IP Objects Source IP objects are used to define networks or individual hosts from which users can access and for which the individual access rule applies in which these objects are used. There are already two predefined objects that cannot be changed and represent the entire IPv4 or IPv6 address space. These are combined in a group “Any”, which can be changed. 4.6.1.1. Network Objects With the network objects, IPv4 and IPv6 networks can be defined in the form of a CIDR notation or individual hosts. The objects can also be assigned to one or more groups in the creation step. Mandatory information is a name for the network object and an IP address in CIDR notation. Optionally, a description can be added as well. 4.6.1.2. Group Objects Group objects are used - as the name suggests - to group individual objects so that this group can later be used in an access rule. Please note that group objects cannot be nested within other group objects. Mandatory information is a name for the group object. Optionally, a description can be added.
https://docs.susshi.io/susshi_chef/access_objects/source_ips.html
2021-11-27T08:19:50
CC-MAIN-2021-49
1637964358153.33
[array(['../../_images/list3.png', '../../_images/list3.png'], dtype=object) array(['../../_images/new_net.png', '../../_images/new_net.png'], dtype=object) array(['../../_images/new_group1.png', '../../_images/new_group1.png'], dtype=object) ]
docs.susshi.io
SnapCenter is a unified, scalable platform for application-consistent data protection. SnapCenter provides centralized control and oversight, while delegating the ability for users to manage application-specific backup, restore, and clone jobs. With SnapCenter, database and storage administrators learn a single tool to manage backup, restore, and cloning operations for a variety of applications and databases. SnapCenter manages data across endpoints in the NetApp Data Fabric. You can use SnapCenter to replicate data between on-premise environments, between on-premise environments and the cloud, and between private, hybrid, or public clouds.
http://docs.netapp.com/ocsc-30/topic/com.netapp.doc.ocsc-ag/GUID-0CC54DEB-2A3B-4E98-8615-C97A225FEB2A.html
2021-11-27T08:03:51
CC-MAIN-2021-49
1637964358153.33
[]
docs.netapp.com
Using Cloudera Manager Safety Valves to configure scheduler properties Certain scheduler properties can neither be converted by the fs2cs conversion utility nor be configured using the YARN Queue Manager UI service. After migrating to CDP, you must manually configure these properties using the Cloudera Manager advanced configuration snippet (Safety Valves). Use the fs2cs conversion utility to generate the capacity-scheduler.xmland yarn-site.xmloutput files. Complete the scheduler migration process. Identify the scheduler properties that need to be configured manually and not supported by the Queue Manager UI. For more information, see Fair Scheduler features and conversion details. - In Cloudera Manager, select the YARN service. - Click the Configuration tab. - Search for capacity-scheduler, and find the Capacity Scheduler Configuration Advanced Configuration Snippet (Safety Valve). - Click View as XML, and insert the complete capacity-scheduler.xmlfile, generated by the converter tool. - Add the necessary configuration properties. - Click Save Changes. - Search for yarn-site, and find the YARN Service Advanced Configuration Snippet (Safety Valve) for yarn-site.xml. - Click View as XML and add the required configuration in an XML format.Optionally, use + and - to add and remove properties. - Click Save Changes. - Restart the YARN service.
https://docs.cloudera.com/cdp-private-cloud-upgrade/latest/yarn-scheduler-conversion/topics/yarn-fs2cs-safety-valves.html
2021-11-27T09:06:38
CC-MAIN-2021-49
1637964358153.33
[]
docs.cloudera.com
In this task, you will create a Statistics Collector that records the average content in inventory by type:: 1. data.item.Type -1. data.item.Type In this step, you listened to an On Reset event that yields all Type values as its row value. This means that On Reset, the Statistics Collector will add one row per Type. You will later use this event to create one label per row that calculates the average content for each type. In addition, you listened to the On Slot Entry and On Slot Exit events, which happen as items enter and exit inventory. You will later use these events to update the average content for each type. In this step, you will add columns to the Statistics Collector. To do this: "Content". In this step, you added two columns: Type and AvgContent. The Type column is straightforward; on reset, when each row is added to the table, this column will record the Type associated with each row. The AvgContent column will show a continuous value, so it updates whenever the table is accessed. In the next step, you will create a Tracked Variable for each row called Content. This column shows the average of that value. In this step, you will add triggers to the Statistics Collector. To do this: "Content". The other settings can be left at their default values. "Content". data.Delta?. You can right click on the AvgContentByType collector in the Toolbox and select the View Table option. If you reset and run the model, you will see the table record the average content by type. You may wish to run the model as fast as possible to see the data populate. When the model is reset, the statistics collector gets all Type values listed in the ProductInfo table, and creates a row for each one. In addition, the statistics collector initializes a Tracked Variable for each row. A Tracked Variable is a special kind of value; you can get and set it like a normal label value. However, it also calculates the average of the values that you set it to. In this case, that average is time-weighted, which makes it perfect for calculating average content.. This event doesn't update the table directly. However, in the On Row Updating trigger, this event does increment the value of the Content label. When an item exits the Storage System, the same process happens, except that the Content label is decremented, rather than incremented. When you view the table, the AvgContent column is updated, displaying the current value of the average content. Now you will create a chart to show the data in the statistics collector as a bar chart. To do this: If you reset and run the model now, the bar chart will show the data in the statistics collector. As before you may have to run the model for some time to see the results.
https://docs.flexsim.com/en/22.0/Tutorials/AdditionalTools/Tutorial2StatsCollector/2-7AvgContentByType/2-7AvgContentByType.html
2021-11-27T09:33:05
CC-MAIN-2021-49
1637964358153.33
[]
docs.flexsim.com
More improvements to pull requests experience In the last sprint, we announced a batch of improvements to the new pull request experience. In this sprint, we are doubling down our investments in that space with another round of enhancements. In January 2021, we plan to make the new experience generally available. Features Azure Repos - Single-click to toggle between inline and diff views - Navigation to parent commits - More space for folders and files with long names in the PR files tab - Preserve scroll position when resizing diff pane in PR files tab - Improved usage of space for new PR file diff mobile view - Enhanced images in PR Summary view - Enhanced branch experience when creating a new PR Azure Pipelines - Historical graph for agent pools (Preview) - ServiceNow change management integration with YAML pipelines Azure Repos This update includes the following enhancements to the pull request experience based on your feedback. Note Please note that the new pull request experience will be enabled for all organizations in January 2021, and you will not be able to toggle back to the older experience. Single-click to toggle between inline and diff views In the previous experience, you could toggle between inline and diff views with a single click. We brought this functionality back in the new experience without having to select a dropdown. _1<<_2<< Image of the file tree when hovering over a directory: _4<< Improved usage of space for new PR file diff mobile view We updated this page to make better use of the space so that users can see more of the file in mobile views instead of having 40% of the screen taken up by a header. _6<< Azure Pipelines Note Azure Pipelines images are continuously updated in an effort to provide users with the best experience possible. These routine updates are predominantly aimed at addressing bugs or out of date software. They will often have no impact on your pipelines, however this is not always the case. Your pipeline may be impacted if it takes a dependency on a piece of software that has either been removed or updated on the image. To learn more about upcoming updates on our Windows, Linux and MacOS images, please read the following announcements: To view release notes for upcoming (pre-release) and deployed changes, please subscribe to the following release notes: Historical graph for agent pools (Preview) We often receive questions from users wondering why their jobs aren’t running. The most common answer to this question is that the pool doesn’t have enough concurrency, however it has historically been difficult to diagnose this. Today, we are excited to announce a public preview of historical usage graphs for agent pools. These graphs allow you to view jobs running in your pools up against your pool concurrency over a span of up to 30 days. You can drill into this data at four different intervals of time (1, 7, 14, 30 days). Agent pool usage data is sampled and aggregated by the Analytics service every 10 mins. The number of jobs is plotted based on the max number of running jobs for the specified interval of time. This feature is enabled by default. To try it out, follow the guidance below. - Within project settings, navigate to the pipelines “Agent pools” tab - From the agent pool, select a pool (e.g., Azure Pipelines) - Within the pool, select the “Analytics” tab_8<< You can also add the UpdateServiceNowChangeRequest task in a server job to update the change request with deployment status, notes etc._9<< You can also get advice and your questions answered by the community on Stack Overflow. Thanks, Vijay Machiraju
https://docs.microsoft.com/en-us/azure/devops/release-notes/2020/sprint-179-update?WT.mc_id=DOP-MVP-4039781
2021-11-27T08:36:06
CC-MAIN-2021-49
1637964358153.33
[array(['media/179-repos-0-5.gif', 'Single click to toggle between inline and diff views'], dtype=object) array(['media/179-repos-0-0.png', 'Navigation to parent commits'], dtype=object) array(['media/179-repos-0-1.png', 'More space for folders and files'], dtype=object) array(['media/179-repos-0-2.png', 'Name display'], dtype=object) array(['media/179-repos-0-6.gif', 'Search for a commit on a mobile device'], dtype=object) array(['media/179-repos-0-3.png', 'Improved usage of space new PR filename'], dtype=object) array(['media/179-repos-0-4.png', 'branch experience enhancement'], dtype=object) array(['media/179-pipelines-0-0.png', 'historical graph'], dtype=object) array(['media/179-pipelines-0-1.png', 'ServiceNow Change Management Integration'], dtype=object) array(['../media/make-a-suggestion.png', 'Make a suggestion'], dtype=object) ]
docs.microsoft.com
Info Signals¶ info signals [signal-name] info signals * Show information about how debugger treats signals to the program. Here are the boolean actions we can take: - Stop: enter the debugger when the signal is sent to the debugged program - Print: print that the signal was received - Stack: show a call stack - Pass: pass the signal onto the program If signal-name is not given, we the above show information for all signals. If ‘*’ is given we just give a list of signals.
https://python2-trepan.readthedocs.io/en/latest/commands/info/signals.html
2021-11-27T08:29:19
CC-MAIN-2021-49
1637964358153.33
[]
python2-trepan.readthedocs.io
Stack¶ Examining the call stack. The call stack is made up of stack frames. The debugger assigns numbers to stack frames counting from zero for the innermost (currently executing) frame. At any time the debugger identifies one frame as the “selected” frame. Variable lookups are done with respect to the selected frame. When the program being debugged stops, the debugger selects the innermost frame. The commands below can be used to select other frames by number or address.
https://python2-trepan.readthedocs.io/en/stable/commands/stack.html
2021-11-27T08:49:15
CC-MAIN-2021-49
1637964358153.33
[]
python2-trepan.readthedocs.io
Genesys Data Layer Release Notes All Genesys software is © Copyright 2013 . Genesys Customer Care links: Licensing: - Genesys Licensing Guide - Licensing section of the Genesys Migration Guide Information on supported hardware and third-party software is here: Search the table of all articles in this guide, listed in alphabetical order, to find the article you need. January 29, 2019 Genesys Data Layer [View details] - This is the first release of Genesys Data Layer. This release is only for lab deployments.
https://all.docs.genesys.com/GDL/9.0/ReleaseNote
2021-11-27T08:42:48
CC-MAIN-2021-49
1637964358153.33
[]
all.docs.genesys.com
SchedulerControl.DropAppointment Event Namespace: DevExpress.Xpf.Scheduling Assembly: DevExpress.Xpf.Scheduling.v21.2.dll Declaration public event DropAppointmentEventHandler DropAppointment Public Event DropAppointment As DropAppointmentEventHandler Event Data The DropAppointment event's data class is DropAppointmentEventArgs. The following properties provide information specific to this event: The event data class exposes the following methods: Remarks Set the event’s Cancel property to true to disable the default Scheduler’s drop behavior. The example below illustrates how to override the AllowAppointmentConflicts property that is set to false. private void Scheduler_DropAppointment(object sender, DropAppointmentEventArgs e) { for (int i = 0; i < e.DragAppointments.Count; i++) { //if a dragged appointment does not intersect another appointment if (e.ConflictedAppointments[i].Count == 0) { //it can be moved to the desired position e.SourceAppointments[i].Start = e.DragAppointments[i].Start; e.SourceAppointments[i].End = e.DragAppointments[i].End; } } See Also Feedback
https://docs.devexpress.com/WPF/DevExpress.Xpf.Scheduling.SchedulerControl.DropAppointment
2021-11-27T09:40:40
CC-MAIN-2021-49
1637964358153.33
[]
docs.devexpress.com
Inserting Chart Axis Title Dynamically in Word Document, Word Processing Document Adding Syntax to be evaluated by GroupDocs.Assembly Engine Chart Title <<title>><)
https://docs.groupdocs.com/assembly/net/inserting-chart-axis-title-dynamically-in-word-document/
2021-11-27T08:57:32
CC-MAIN-2021-49
1637964358153.33
[]
docs.groupdocs.com
BAFTA Young Game Designers Una nota a los soñadores fuera del Reino Unido. El post que hay más abajo aparecerá sin traducir, ya que trata sobre una competición para la creación de juegos para todas las plataformas, gestionada por la British Academy of Film & Television Arts (no Media Molecule), la cual solo está disponible para residentes en el!(se abre en una nueva pestaña)(se abre en una nueva pestaña) and then filling out and uploading this form(se abre en una nueva pestaña)(se abre en una nueva pestaña) Some tips on how to make a winning entry(se abre en una nueva pestaña) The YGD BAFTA FAQ(se abre en una nueva pestaña) The competition Terms & Conditions(se abre en una nueva pestaña) And lastly - if you're going for it, then a huge WELL DONE and GOOD LUCK from all of us at Media Molecule! We're ridiculously proud of you. Go show 'em what you're made of..
https://docs.indreams.me/es-MX/community/news/bafta-young-game-designers
2021-11-27T09:28:23
CC-MAIN-2021-49
1637964358153.33
[]
docs.indreams.me
Release Notes¶ v0.35.10¶ v0.35.9¶ v0.35.8¶ v0.35.7¶ v0.35.6¶ v0.35.5¶ v0.35.3¶ Bug Fixes¶ nexus.release_staging_repos was failing with an error due to an undeclared variable. Upon inspection, there were several other issues at play as well: * No unit test (which would have caught an undeclared variable). * Initial sleep of 20s is significantly longer than many repos take to release. * Only checked release status every 40s, while printing every 20s. * Rather than checking release status, we were checking for “close” status. Nexus closes repos before releasing them, so this is not the correct status to look for when waiting for the repo to release. A unit test has been added, several variables issues have been corrected, timing was adjusted (waiting just 5 seconds before the initial check for success), and the code will now check for “release” status. v0.35.2¶ New Features¶ lftools dco checkwill now include a check for signoff files to remove commits from the “missing DCO” list. By default, this will check the directory “dco_signoffs”, but the --signoffsoption can be used to specify a different directory. This patch adds the possibility for the user to change the Version Schema. This is done by the new –version_regexp <regexp> parameter. If this parameter is not used, then the ONAP #.#.# (“^d+.d+.d+$”) regexp will be used. The parameter can be an regexp, like “^d+.d+.d+$”, or a file name, which contains a longer regexp. Sample command lftools nexus docker releasedockerhub -o onap -r aai -v –version_regexp “^d+.d+.d+$” Other Notes¶ Changed the printouts to console when releasing a staging repo. Printout every 20 seconds, and checking if released every 40 second. Conventional Commit message subject lines are now enforced. This affects CI. Additionally, if developers want to protect themselves from CI failing on this please make sure of the following you have pre-commit installed that you have run pre-commit install –hook-type commit-msg The shade library for openstack is deprecated. We are switching to the openstacksdk for image commands. v0.35.1¶ v0.35.0¶ New Features¶ Add –repofile to releasedockerhub Enables providing a file with the repo names. - -f, --repofile Repo Name is a file name, which contains one repo per row Sample lftools nexus docker releasedockerhub –org onap –repo /tmp/test_repos.txt –repofile Where the input file has the following syntax, one repo per row, ‘Nexus3 docker.release repo’; ‘dockerhub dockername’ Sample onap/org.onap.dcaegen2.deployments.tls-init-container; onap/org.onap.dcaegen2.deployments.tls-init-container onap/policy-api; onap/policy-api onap/clamp-backend; onap/clamp-backend onap/msb/msb_apigateway; onap/msb-msb_apigateway Enhancements for saml support. Added lftools gerrit create-saml-group. Takes a gerrit endpoint and an ldap group as parameters. Creates a saml group for this ldap group so that project creation can be automated. Project creation call now translates ldap group to saml group and adds saml group as project owner. Upgrade Notes¶ lftools image upload command: NOTE: qemu-img is now required to be installed and on the path for image uploading to work v0.34.2¶ Upgrade Notes¶ Pin osc-lib to 2.2.0 to allow sharing images between projects. Using lftools openstack image share returns an error Error: “You are not authorized to find project with the name”. The issue is seen because of bug in osc_lib [1], and fixed in version osc_lib==2.2.0 [1] [2] [3] v0.34.1¶ Bug Fixes¶ Remove pinned distlib requirement. Distlib is a common requirement for other libraries, and having it pinned is causing failures in builds. It is not explicitly used in lftools, so it does not need to be pinned. Fixes ERROR: virtualenv 20.0.26 has requirement distlib<1,>=0.3.1, but you’ll have distlib 0.3.0 which is incompatible. v0.34.0¶ v0.33.1¶ Bug Fixes¶ Requests can’t handle a put call for very large data objects. However, it can accept the data as a file-like object instead, and the size issue will not show up. Documented here:. v0.33.0¶ v0.31.0¶ v0.30.0¶ New Features¶ Nexus3 API operations. Usage: lftools nexus3 [OPTIONS] FQDN COMMAND [ARGS]… Commands: asset Asset primary interface. privilege Privilege primary interface. repository Repository primary interface. role Role primary interface. script Script primary interface. tag Tag primary interface. task Task primary interface. user User primary interface. Options: --help Show this message and exit. Enable project_version_update API method. Allows enabling or disabling a project version (visibility in the U/I) via an api call. v0.29.0¶ New Features¶ Add “create role” subcommand for nexus, which enables users to create Nexus roles outside of project creation. Add openstack cost command. The cost is sum of the costs of each member of the running stack. Added –exact to the releasedockerhub command. This enables user to only work on a specific repo (specified by –repo) lftools gerrit [OPTIONS] COMMAND [ARGS] abandonchanges Abandon all OPEN changes for a gerrit project. addfile Add an file for review to a Project. addgithubrights Grant Github read for a project. addgitreview Add git review to a project. addinfojob Add an INFO job for a new Project. createproject Create a project via the gerrit API. list-project-inherits-from List who a project inherits from. list-project-permissions List Owners of a Project. Known Issues¶ Addinfofile trips up on extended characters in usernames. Project lead must be added by hand to lftools infofile create. Upgrade Notes¶ lftools.ini needs configuration on internal jenkins for auth. Documenting and implementing this is an internal endevor and beyond the scope of these release notes. Bug Fixes¶ Print rule failures for unclosed repos Catch and print errors thrown by check_response_code in lftools/lfidapi.py. Use proper python3 config parser. Add has_section check for configparser lftools github update repo will properly return “repo not found” lftools infofile create will now take tsc approval string and set date. lftools infofile will allow INFO.yaml to be created before ldap group. yaml4info now correctly outputs to STDOUT so that its output can be properly captured and printed by python. lfidapi now correctly exits if a group does not exist. v0.28.0¶ New Features¶ New command lftools infofile create-info-file Creates an initial info file for a project. Must be on the VPN to use. Add the ability to update existing project’s properties. This is done by invoking lftools rtd project-update PROJECT_NAME key=’value’ where key is the name of a json API key for the RTD API and value is the new value you require. Upgrade Notes¶ Drop support for python2.7 and python3.4(EOL) lftools now requires python >= 3.6 This allows us to remove remaining pins, and to move from glob2 to builtin glob v0.27.1¶ New Features¶ Added a get_filesize method to calculate filesize is an appropriate format. This may be useful in logs if an upload fails. Add support for RTD subprojects, including list, details, create, delete. v0.27.0¶ New Features¶ Expanded DCO shell script with ‘check’ and ‘match’ commands. The check mode checks a git repo for missing DCO signatures. The match mode confirms whether or not the DCO signature(s) match the git commit author’s email address. Read the Docs CRUD operations. Usage: Usage: lftools rtd [OPTIONS] COMMAND [ARGS] Commands: project-list Get a list of Read the Docs projects. project-details Retrieve project details. project-version-list Retrieve project version list. project-version-details Retrieve project version details. project-create Create a new project. project-build-list Retrieve a list of a project's builds. project-build-details Retrieve specific project build details. project-build-trigger Trigger a new build. Options: --help Show this message and exit. v0.26.1¶ v0.26.0¶ New Features¶ –team now lists members of a specific team check_votes now takes click.option(‘–github_repo’) Used in automation to determine is 50% of committers have voted on an INFO.yaml change nexus release now checks “{}/staging/repository/{}/activity” Ensures that Repository is in closed state Checks if Repository is already released (exit 0) Check for failures, if found (exit 1) Added click.option(‘-v’, ‘–verify-only’, is_flag=True, required=False) if -v is passed, only checks for errors, skips release v0.25.5¶ New Features¶ Support multiple nexus sections in lftools.ini In the format: [nexus.example.org] username= password= [nexus.example1.org] username= password= [nexus] section is taken from -s “server” passed to release job. https part of passed url is stripped before match. Upgrade Notes¶ current [nexus] section of lftools.ini must be changed to [nexus.example.com] where nexus.example.com matches the “server” string passed to lftools nexus release -s The https part of passed url is stripped before match. example provided would require auth section in lftools.ini of [nexus.example.org] v0.25.4¶ Bug Fixes¶ Remove drop of staging repos on release The api returns that the relese is completed. in the background java threads are still running. Then we call drop and nexus has threads promoting and dropping at the same time. In this way we lose data. Something else needs to drop, the api does not correctly handle this. v0.25.3¶ Known Issues¶ Pytest 5 has come out and requires Python >= 3.5 which we’re not presently testing on. Pytest is now pinned to 4.6.4 until we update. Bug Fixes¶ Change out lfidapi module print statements to use the logger facility. This allows us to split appart information, debugging, and error log statements so that they can be easily enabled and captured on the correct streams. There was a subtle bug where a function call was being overwritten by a local variable of the same name and then a call to the function was being attempted. v0.25.2¶ v0.25.1¶ New Features¶ Add a --forceoption to delete stacks command. This will help with re-factoring the code in global-jjb scripts using in builder-openstack-cron job to remove orphaned stacks/node and continue with the next stack to delete. v0.25.0¶ New Features¶ Github list and create repositories. Usage: Usage: lftools github [OPTIONS] COMMAND [ARGS]… Commands: audit List Users for an Org that do not have 2fa enabled. create Create a Github repo for within an Organizations. list List and Organizations GitHub repos. Options: --help Show this message and exit. Bug Fixes¶ - There is a possibility that there exists a file called Archives, and if so, there will be an OSError crash 02:15:01 File “/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py”, line 236, in deploy_archives 02:15:01 copy_archives(workspace, pattern) 02:15:01 File “/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py”, line 170, in copy_archives 02:15:01 for file_or_dir in os.listdir(archives_dir): 02:15:01 OSError: [Errno 20] Not a directory: ‘/w/workspace/autorelease-update-validate-jobs-fluorine/archives’ - This fix raises an Exception, and exists lftools with (1), if there is any issues with the Archive directory (missing, a file instead of directory, or something else) Fix OSError in lftools deploy archives due to pattern If the pattern is not properly done, the resulting file list might contain duplicated files. This fix will remove the duplicated patterns, as well as the duplicated matched files. This fix should fix the following crash 08:24:05 File “/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py”, line 204, in copy_archives 08:24:05 os.makedirs(os.path.dirname(dest)) 08:24:05 File “/usr/lib64/python2.7/os.py”, line 157, in makedirs 08:24:05 mkdir(name, mode) 08:24:05 OSError: [Errno 17] File exists: ‘/tmp/lftools-da.m80YHz/features/benchmark/odl-benchmark-api/target/surefire-reports’ Handle config parser correctly which defaults to “[jenkins]” section when no server is passed. This fixes the issue with checking if the key exists in the configuration read before reading the key-value. The issue is reproducible by running lftools jenkins plugins –help or tox -e docs, with jenkins.inimissing the “[jenkins]” section. lfidapi create group checks if group exists before posting Unicode compatibility in deploy_logs for Python 2 and 3 was improved in several ways. The former method to pull and write log files did not work properly in Python 3, and was not very robust for Python 2. Both reading and writing logs is now handled in a unicode-safe, 2/3 compatible way. v0.24.0¶ v0.22.2¶ Bug Fixes¶ Fix the unhelpful stack trace when a deploy nexus-zip fails to upload. Traceback (most recent call last): File "/home/jenkins/.local/bin/lftools", line 10, in <module> sys.exit(main()) File "/home/jenkins/.local/lib/python2.7/site-packages/lftools/cli/__init__.py", line 110, in main cli(obj={}) File "/usr/lib/python2.7/site-packages/click/core.py", line 721, in __call__ return self.main(*args, **kwargs) File "/usr/lib/python2.7/site-packages/click/core.py", line 696, in main rv = self.invoke(ctx) File "/usr/lib/python2.7/site-packages/click/core.py", line 1065, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/lib/python2.7/site-packages/click/core.py", line 1065, in invoke return _process_result(sub_ctx.command.invoke(sub_ctx)) File "/usr/lib/python2.7/site-packages/click/core.py", line 894, in invoke return ctx.invoke(self.callback, **ctx.params) File "/usr/lib/python2.7/site-packages/click/core.py", line 534, in invoke return callback(*args, **kwargs) File "/usr/lib/python2.7/site-packages/click/decorators.py", line 17, in new_func return f(get_current_context(), *args, **kwargs) File "/home/jenkins/.local/lib/python2.7/site-packages/lftools/cli/deploy.py", line 63, in archives deploy_sys.deploy_archives(nexus_url, nexus_path, workspace, pattern) File "/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py", line 236, in deploy_archives deploy_nexus_zip(nexus_url, 'logs', nexus_path, archives_zip) File "/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py", line 362, in deploy_nexus_zip raise requests.HTTPError(e.value) AttributeError: 'HTTPError' object has no attribute 'value' Now instead it returns a much more helpful error message: ERROR: Failed to upload to Nexus with status code: 401. test.zip Fixes an OSError exception that is not handled, in the lftools command: lftools deploy archives The code resides in the copy_archives function in deploy.py file. This exception is caused by a missing archives directory, which a for loop expects to be there. The fix is simply to verify if archives file/directory exists, and if it does then perform the for loop. 12:07:36 File “/home/jenkins/.local/lib/python2.7/site-packages/lftools/deploy.py”, line 166, in copy_archives 12:07:36 for file_or_dir in os.listdir(archives_dir): 12:07:36 OSError: [Errno 2] No such file or directory: ‘/w/workspace/music-mdbc-master-verify-java/archives’ v0.22.0¶ v0.21.0¶ v0.20.0¶ New Features¶ Gerrit project create and github enable replication commands. Usage: lftools gerrit [OPTIONS] COMMAND [ARGS]… Commands: create Create and configure permissions for a new gerrit repo. Options: --enable Enable replication to Github. This skips creating the repo. --parent Specify parent other than "All-Projects" --help Show this message and exit. LFID Api Tools. Usage: lftools lfidapi [OPTIONS] COMMAND [ARGS]… Commands: create-group Create group. invite Email invitation to join group. search-members List members of a group. user Add and remove users from groups. Options: --help Show this message and exit Add Nexus command to release one or more staging repositories. Via the Nexus 2 REST API, this command performs both a “release” and a “drop” action on the repo(s), in order to best reproduce the action of manually using the “Release” option in the Nexus UI. Usage: lftools nexus release [OPTIONS] [REPOS]… - Options: - -s, --server TEXT Nexus server URL. Can also be set as NEXUS_URL in the environment. This will override any URL set in settings.yaml. Add command to list openstack containers. Usage: lftools openstack --os-cloud example object list-containers This command will collect all tags from both Nexus3 and Docker Hub, for a particular org (for instance ‘onap’), as well as a repo (default all repos). With this information, it will calculate a list of valid tags that needs to be copied to Docker Hub from Nexus3. - Usage: lftools nexus docker releasedockerhub - Options: - -o, --org TEXT Specify repository organization. [required] - -r, --repo TEXT Only repos containing this string will be selected. Default set to blank string, which is every repo. - -s, --summary Prints a summary of missing docker tags. - -v, --verbose Prints all collected repo/tag information. - -c, --copy Copy missing tags from Nexus3 repos to Docker Hub repos. - -p, --progbar Display a progress bar for the time consuming jobs. Verify YAML Schema. Usage: Usage: lftools schema verify [OPTIONS] YAMLFILE SCHEMAFILE Commands: verify a yaml file based on a schema file. Options: --help Show this message and exit. Known Issues¶ Currently, if the Docker Hub repo is missing, it is not created specifically, but implicitly by docker itself when we push the docker image to an non- existing Docker Hub repo. The command handles any org (onap or hyperledger for instance), “BUT” it requires that the versioning pattern is #.#.# (1.2.3) for the project. In regexp terms : ^d+.d+.d+$ v0.19.0¶ New Features¶ Provide additional methods to pass LFID to lftools than lftools.ini Via explicit --passwordparameter Via environment variable LFTOOLS_PASSWORD At runtime if --interactivemode is set Refactored deploy_nexus function from shell/deploy to pure Python to be more portable with Windows systems. Also added a number of unit tests to cover all executable branches of the code. Refactored deploy_nexus_stage function from shell/deploy to pure Python to be more portable with Windows systems. Also added a number of unit tests to cover all executable branches of the code. Add --confparameter to jenkins subcommand to allow choosing a jjb config outside of the default paths. Docker list and delete commands for Nexus docker repos. Usage: lftools nexus docker [OPTIONS] COMMAND [ARGS]… Commands: delete Delete all images matching the PATTERN. list List images matching the PATTERN. The shell/deploy file’s copy_archives() function has been reimplemented in pure Python for better portability to Windows systems. Refactored deploy_archives() function from shell/deploy to pure Python to be more portable with Windows systems. Refactored deploy_logs() function from shell/deploy to pure Python to be more portable with Windows systems. Refactored deploy_nexus_zip() function from shell/deploy to pure Python to be more portable with Windows systems. Refactored nexus_stage_repo_close(), and nexus_repo_stage_create() function from shell/deploy to pure Python to be more portable with Windows systems. Also added a number of unit tests to cover all executable branches of the code. Refactored upload_maven_file_to_nexus function from shell/deploy to pure Python to be more portable with Windows systems. Also added a number of unit tests to cover all executable branches of the code. Deprecation Notes¶ shell/deploy script’s deploy_nexus function is now deprecated and will be removed in a future release. shell/deploy script’s deploy_nexus_stage function is now deprecated and will be removed in a future release. The shell/deploy script’s copy_archives() function is now deprecated and will be removed in a later version. We recommend migrating to the lftools pure Python implementation of this function. shell/deploy script’s deploy_archives() function is now deprecated and will be removed in a future release. shell/deploy script’s deploy_logs() function is now deprecated and will be removed in a future release. shell/deploy script’s deploy_nexus_zip() function is now deprecated and will be removed in a future release. shell/deploy script’s nexus_stage_repo_close() and nexus_stage_repo_create() function is now deprecated and will be removed in a future release. shell/deploy script’s upload_maven_file_to_nexus function is now deprecated and will be removed in a future release. v0.18.0¶ New Features¶ Add new cmd to fetch Jenkins token from user account. An optional --changeparameter can be passed to have Jenkins change the API token. Usage: lftools jenkins token [OPTIONS] Get API token. - Options: - --change Generate a new API token. Show this message and exit. Add jenkins token init command to initialize a new server section in jenkins_jobs.ini. This command uses credentials found in lftools.ini to initialize the new Jenkins server configuration. Usage: lftools jenkins token init [OPTIONS] NAME URL Add jenkins token reset command to automatically reset API tokens for all Jenkins systems configured in jenkins_jobs.ini. Usage: lftools jenkins token reset [OPTIONS] [SERVER] We now support locating the jenkins_jobs.ini in all the same default search paths as JJB supports. Specifically in this order: $PWD/jenkins_jobs.ini ~/.config/jenkins_jobs/jenkins_jobs.ini /etc/jenkins_jobs/jenkins_jobs.ini Add a new delete-staleoption to the stack command. This function compares running builds in Jenkins to active stacks in OpenStack and determines if there are orphaned stacks and removes them. Add an openstack image sharesub-command to handle sharing images between multiple tenants. Command accepts a space-separated list of tenants to share the provided image with. Usage: lftools openstack image share [OPTIONS] IMAGE [DEST]... Add an openstack image uploadsub-command to handle uploading images to openstack. Usage: Usage: lftools openstack image upload [OPTIONS] IMAGE NAME... Bug Fixes¶ The get-credentials command is now fixed since it was was broken after refactoring done in Gerrit patch I2168adf9bc992b719da6c0350a446830015e6df6. v0.17.0¶ New Features¶ Add support to the jenkins command to parse jenkins_jobs.inifor configuration if server parameter passed is not a URL. Add a jobs sub-command to jenkins command to enable or disable Jenkins Jobs that match a regular expression. Add stack command. Add stack create sub-command. Usage: lftools openstack stack create NAME TEMPLATE_FILE PARAMETER_FILE Add stack delete sub-command. Usage: lftools openstack stack create NAME v0.16.1¶ v0.16.0¶ New Features¶ Add a new --debugflag to enable extra troubleshooting information. This flag can also be set via environment variable DEBUG=True. $ lftools ldap Usage: lftools ldap [OPTIONS] COMMAND [ARGS]… Commands: autocorrectinfofile Verify INFO.yaml against LDAP group. csv Query an Ldap server. inactivecommitters Check committer participation. yaml4info Build yaml of commiters for your INFO.yaml. $ lftools infofile Commands: get-committers Extract Committer info from INFO.yaml or LDAP... sync-committers Sync committer information from LDAP into... Deprecation Notes¶ Remove support for modifying the logger via logging.ini. It was a good idea but in practice this is not really used and adds extra complexity to lftools.
https://docs.releng.linuxfoundation.org/projects/lftools/en/latest/release-notes.html
2021-11-27T08:11:41
CC-MAIN-2021-49
1637964358153.33
[]
docs.releng.linuxfoundation.org
My WebLink | | About | Sign Out Search 01-05-2005 City Controller Accepts New Position sbend > Public > News Releases > 2005 > 01-05-2005 City Controller Accepts New Position Metadata Thumbnails Annotations Entry Properties Last modified 4/11/2011 1:03:53 PM Creation date 4/11/2011 1:03:47 CONTROLLER ACCEPTS NEW POSITION Page 1 of 2 <br /> � v%• 'Building a 21St Century City " <br /> ' rs Stephen J. Luecke, Mayor <br /> RELEASE FROM: FOR RELEASE: <br /> MAYOR LUECKE'S OFFICE WEDNESDAY, JAN. 5, 2005 <br /> CITY CONTROLLER ACCEPTS NEW POSITION <br /> Mayor Stephen J. Luecke's Office has announced the resignation of City Controller Rick 011ett effective <br /> January 21, 2005. Mr. 011ett has accepted a new position as Chief Administrative Officer at the Culver <br /> Academies. 011ett was appointed City Controller on October 31, 2001. <br /> In accepting the resignation, with regret, Mayor Luecke commented, "For the past three years, Rick has <br /> been a valued member of my management team. He has not only provided keen fiscal oversight, but also <br /> offered important suggestions on delivering City services more efficiently and effectively." 011ett played a key <br /> role in securing the best financing for important economic development projects. He also participated on the <br /> negotiating team which reached a successful agreement with the Firefighters union last year. <br /> During his time as Controller, the City continued to be recognized for its financial reporting and budget <br /> process. "With Rick's leadership, we are improving our city website and expanding our ability to provide <br /> information and services electronically,' continued Mayor Luecke. I am grateful for his stewardship of the <br /> City's resources, for his assistance to Department Heads and Council Members and for his service to the <br /> people of South Bend. We wish him well in his new position at the Culver Academies." <br /> The Mayor has already initiated the process for selecting a new Controller. He has identified a list of <br /> individuals who are being contacted and he will also be accepting additional resumes. During the transition <br /> period, Liz Rowe, Director/City Finance,Tom Skarbek, Director/Budgeting & Financial Reporting, and Janice <br /> Hall, Director of Human Resources will share coverage and management of the Department of <br /> Administration and Finance activities. <br /> Mayor Luecke also noted that"I have been fortunate to have had two talented and dedicated public servants <br /> who have served as Controller during my time as Mayor. I will be seeking a third person who is similarly <br /> committed to keeping the City financially sound and accountable to the residents of the community. The <br /> Office of the Controller is vital to developing resources and providing leadership necessary for maintaining <br /> our progress in building a 21st Century City" <br /> For more information, please contact Mikki Dobski, Communications & Special Projects —574-235-5855; <br /> 574-876-1564. <br /> Back I City Home Page I City Directory I Site Map I Search Go <br /> powered by FreeFind <br /> Created by the City of South Bend Department of Information Technologies. <br /> Copyright©2005 City of South Bend.All rights reserved. <br /> 2005/010505 Controller.htm 1/13/2006 <br /> The URL can be used to link to this page Your browser does not support the video tag.
https://docs.southbendin.gov/WebLink/0/doc/18274/Page1.aspx
2021-11-27T08:32:47
CC-MAIN-2021-49
1637964358153.33
[]
docs.southbendin.gov
Using Sandman¶ The Simplest Application¶ Here’s what’s required to create a RESTful API service from an existing database using sandman $ sandmanctl sqlite:////tmp/my_database.db That’s it. sandman will then do the following: - Connect to your database and introspect it’s contents - Create and launch a RESTful¶ Beyond sandmanctl¶ sandmanctl is really just a simple wrapper around the following: from ``sandman`` import app app.config['SQLALCHEMY_DATABASE_URI'] = 'sqlite:///chinook' from sandman.model import activate activate(browser=True) app.run() Notice you don’t even need to tell ``sandman`` what tables your database contains. Just point sandman at your database and let it do all the heavy lifting. If you put the code above into a file named runserver.py, You can start this new service and make a request. While we’re at it, lets make use of sandman‘s awesome filtering capability by specifying a filter term: $ python runserver.py & * Running on > curl GET "" you should see the following: { "resources": [ { "ArtistId": 1, "Name": "AC/DC", "links": [ { "rel": "self", "uri": "/artists/1" } ] } ] } If you were to leave off the filtering term, you would get all results from the Artist table. You can also paginate these results by specifying ?page=2 or something similar. The number of results returned per page is controlled by the config value RESULTS_PER_PAGE, which defaults to 20. A Quick Guide to REST APIs¶ Before we get into more complicated examples, we should discuss some REST API basics. The most important concept is that of a resource. Resources are sources of information, and the API is an interface to this information. That is, resources are the actual “objects” manipulated by the API. In sandman, each row in a database table is considered a resource. Groups of resources are called collections. In sandman, each table in your database is a collection. Collections can be queried and added to using the appropriate HTTP method. sandman supports the following HTTP methods: * GET * POST * PUT * DELETE * PATCH (Support for the HEAD and OPTIONS methods is underway.) Creating Models¶ A Model represents a table in your database. You control which tables to expose in the API through the creation of classes which inherit from sandman.model.models.Model. If you create a Model, the only attribute you must define in your class is the __tablename__ attribute. sandman uses this to map your class to the corresponding database table. From there, sandman is able to divine all other properties of your tables. Specifically, sandman creates the following: - an __endpoint__attribute that controls resource URIs for the class - a __methods__attribute that determines the allowed HTTP methods for the class as_dictand from_dictmethods that only operate on class attributes that correspond to database columns - an updatemethod that updates only the values specified (as opposed to from_dict, which replaces all of the object’s values with those passed in the dictionary parameter links, primary_key, and resource_urimethods that provide access to various attributes of the object derived from the underlying database model Creating a models.py file allows you to get even more out of sandman. In the file, create a class that derives from sandman.models.Model for each table you want to turn into a RESTful resource. Here’s a simple example using the Chinook test database (widely available online): from sandman.model import register, activate, Model class Artist(Model): __tablename__ = 'Artist' class Album(Model): __tablename__ = 'Album' class Playlist(Model): __tablename__ = 'Playlist' class Genre(Model): __tablename__ = 'Genre' # register can be called with an iterable or a single class register((Artist, Album, Playlist)) register(Genre) # activate must be called *after* register activate(browser=False) Hooking up Models¶ The __tablename__ attribute is used to tell sandman which database table this class is modeling. It has no default and is required for all classes. Providing a custom endpoint¶ In the code above, we created four sandman.model.models.Model classes that correspond to tables in our database. If we wanted to change the HTTP endpoint for one of the models (the default endpoint is simply the class’s name pluralized in lowercase), we would do so by setting the __endpoint__ attribute in the definition of the class: class Genre(Model): __tablename__ = 'Genre' __endpoint__ = 'styles' Now we would point our browser (or curl) to localhost:5000/styles to retrieve the resources in the Genre table. Restricting allowable methods on a resource¶ Many times, we’d like to specify that certain actions can only be carried out against certain types of resources. If we wanted to prevent API users from deleting any Genre resources, for example, we could specify this implicitly by defining the __methods__ attribute and leaving out the DELETE method, like so: class Genre(Model): __tablename__ = 'Genre' __endpoint__ = 'styles' __methods__ = ('GET', 'POST', 'PATCH', 'PUT') For each call into the API, the HTTP method used is validated against the acceptable methods for that resource. Performing custom validation on a resource¶ Specifying which HTTP methods are acceptable gives rather coarse control over how a user of the API can interact with our resources. For more granular control, custom a validation function can be specified. To do so, simply define a static method named validate_<METHOD>, where <METHOD> is the HTTP method the validation function should validate. To validate the POST method on Genres, we would define the method validate_POST, like so: class Genre(Model): __tablename__ = 'Genre' __endpoint__ = 'styles' __methods__ = ('GET', 'POST', 'PATCH', 'PUT') @staticmethod def validate_POST(self, resource=None): if isinstance(resource, list): return True # No classical music! return resource and resource.Name != 'classical' The validate_POST method is called after the would-be resource is created, trading a bit of performance for a simpler interface. Instead of needing to inspect the incoming HTTP request directly, you can make validation decisions based on the resource itself. Note that the resource parameter can be either a single resource or a collection of resources, so it’s usually necessary to check which type you’re dealing with. This will likely change in a future version of sandman. Configuring a model’s behavior in the admin interface¶ sandman uses Flask-Admin to construct the admin interface. While the default settings for individual models are usually sufficient, you can make changes to the admin interface for a model by setting the __view__ attribute to a class that derives from flask.ext.admin.contrib.sqla.ModelView. The Flask-Admin’s documentation should be consulted for the full list of attributes that can be configured. Below, we create a model and, additionally, tell sandman that we want the table’s primary key to be displayed in the admin interface (by default, a table’s primary keys aren’t shown): from flask.ext.admin.contrib.sqla import ModelView class ModelViewShowPK(ModelView): column_display_pk = True class Artist(Model): __tablename__ = 'Artist' __view__ = ModelViewShowPK Custom `__view__` classes are a powerful way to customize the admin interface. Properties exist to control which columns are sortable or searchable, as well as as what fields are editable in the built-in editing view. If you find your admin page isn’t working exactly as you’d like, the chances are good you can add your desired functionality through a custom __view__ class. Model Endpoints¶ If you were to create a Model class named Resource, the following endpoints would be created: resources/ GET: retrieve all resources (i.e. the collection) POST: create a new resource resources/<id> GET: retrieve a specific resource PATCH: update an existing resource PUT: create or update a resource with the given ID DELETE: delete a specific resource resources/meta GET: retrieve a description of a resource’s structure The root endpoint¶ For each project, a “root” endpoint ( /) is created that gives clients the information required to interact with your API. The endpoint for each resource is listed, along with the /meta endpoint describing a resource’s structure. The root endpoint is available as both JSON and HTML. The same information is returned by each version. The /meta endpoint¶ A /meta endpoint, which lists the models attributes (i.e. the database columns) and their type. This can be used to create client code that is decoupled from the structure of your database. A /meta endpoint is automatically generated for every Model you register. This is available both as JSON and HTML. Automatic Introspection¶ Of course, you don’t actually need to tell sandman about your tables; it’s perfectly capable of introspecting all of them. To use introspection to make all of your database tables available via the admin and REST API, simply remove all model code and call activate() without ever registering a model. To stop a browser window from automatically popping up on sandman initialization, call activate() with browser=False. Running sandman alongside another app¶ If you have an existing WSGI application you’d like to run in the same interpreter as sandman, follow the instructions described here. Essentially, you need to import both applications in your main file and use Flask’s DispatcherMiddleware to give a unique route to each app. In the following example, sandman-related endpoints can be accessed by adding the /sandman prefix to sandman‘s normally generated URIs: from my_application import app as my_app from sandman import app as sandman_app from werkzeug.wsgi import DispatcherMiddleware application = DispatcherMiddleware(my_app, { '/sandman': sandman_app, }) This allows both apps to coexist; my_app will be rooted at / and sandman at /sandman. Using existing declarative models¶ If you have a Flask/SQLAlchemy application that already has a number of existing declarative models, you can register these with sandman as if they were auto-generated classes. Simply add your existing classes in the call to sandman.model.register()
http://sandman.readthedocs.io/en/latest/using_sandman.html
2018-01-16T17:21:50
CC-MAIN-2018-05
1516084886476.31
[]
sandman.readthedocs.io
public interface SearchRequestStore Store used for CRUD of SearchRequests @Deprecated java.util.Collection<SearchRequest> getAllRequests() getAll() EnclosedIterable<SearchRequest> get(SharedEntityAccessor.RetrievalDescriptor descriptor) EnclosedIterableof SearchRequests for the specified List of ids. descriptor- retrieval descriptor EnclosedIterable<SearchRequest> getAll() EnclosedIterableof all SearchRequests in the database. EnclosedIterable<IndexableSharedEntity<SearchRequest>> getAllIndexableSharedEntities() EnclosedIterableof all IndexableSharedEntities representing SearchRequests in the database. Note: this is used so that we can retrieve all the meta data about a SearchRequest without having to deal with the Query. java.util.Collection<SearchRequest> getAllOwnedSearchRequests(com.opensymphony.user.User user) user- The user who created the SearchRequests SearchRequestthat user created. SearchRequest getRequestByAuthorAndName(com.opensymphony.user.User author, java.lang.String name) author- Author of the SearchRequest name- Name of the SearchRequest SearchRequest getSearchRequest(@NotNull java.lang.Long id) id- Id of the SearchRequest SearchRequest create(@NotNull SearchRequest request) SearchRequest, user, name of search request and description and persists the XML representation of the SearchRequest object to the database along with the rest of the details request- SearchResult that should be persisted SearchRequest update(@NotNull SearchRequest request) request- the request to persist. SearchRequestthat was persisted to the database SearchRequest adjustFavouriteCount(@NotNull java.lang.Long searchRequestId, int incrementValue) searchRequestId- the identifier of the search request to decrease. incrementValue- the value to increase the favourite count by. Can be a number < 0 to decrease the favourite count. SearchRequest. void delete(@NotNull java.lang.Long id) id- of the search request to be removed from storage EnclosedIterable<SearchRequest> getSearchRequests(Project project) search requestsassociate with a given Project. project- Project that is associated with the filters SearchRequestthat have their project set to the given project EnclosedIterable<SearchRequest> getSearchRequests(com.opensymphony.user.Group group) search requestsassociated with a given Group. group- the group that is associated with the filters SearchRequestthat have their project set to the given project
https://docs.atlassian.com/software/jira/docs/api/4.2/com/atlassian/jira/issue/search/SearchRequestStore.html
2018-01-16T17:35:10
CC-MAIN-2018-05
1516084886476.31
[]
docs.atlassian.com
Getting Started with Processes (Policy Manager 8.x) Learn about common and operation-specific processes, process management tools, and use cases. Process Reference (PM8x) Managing Processes (PM8x) Supported Platforms: 8.0x, 8.2x, 8.4x Table of Contents - Process Contexts - Process Function Reference - Policy Manager Scripting API - Process Management Tools - Example: Adding a Script Activity to an operation The Akana API Platform and Akana Policy Manager allow you to define web service orchestration concepts using an XML-based graphical editor called the Process Palette. Here, you can: - Define the sequence of messages as they flow through for each virtual service operation. - Configure a variety of different Activities for achieving the results required for each operation. Let's take a quick walkthrough of the process types and tools available, to get you started. Process Contexts There are several approaches to configuring process definitions: - You can configure a set of common processes for an organization, using the Add Process function. Common processes are available for all resources within the organization. - You can define a process for a specific operation: - In Policy Manager, choose Services > Operations > Process or Services > Operations > Fault. - In the developer portal, go to the API Implementation page. Referencing a Process If you define processes at the organization level, you can reference/re-use them at the operation level. This approach is most efficient for processes that might be used multiple times, and can help make your process development cycle more efficient. The example below shows referencing a process using a Process Activity in the Process Editor. Referencing a Script In the same way that you can reuse a process by referencing it, you can also reuse a script. You can define a series of utility scripts for performing common tasks. To do this: - Add a script: - In Policy Manager: Scripts folder - In the developer portal: Organization > Scripts - In your virtual service operation process definition: - Add a Script Activity. - Import a pre-defined script. - Add a script reference for the function you want to perform. For example, you might have a requirement to validate some data as part of your process. In this example, you might create a reusable script that includes a function for validating data (validateData), and call the script TestScript. Then, in your process definition, you can import TestScript and reference the function in the script source. This example is shown below. Process Function Reference The Process Function Reference includes a tour of the Process Editor and descriptions and usage examples for each Activity type. To ensure a smooth process when building your process definitions, familiarize yourself with the functionality, techniques, and rules for using each Activity. For more information, refer to Process Reference for Policy Manager 8.x. Policy Manager Scripting API The Policy Manager Scripting API provides a series of interfaces and classes that you can use to build process-related scripts. You can access this API in the following locations: - In Policy Manager: Documentation files in the docs\scriptDocs folder of the Akana Platform Folder. - On the Akana Documentation Repository, choose the applicable version: Policy Manager Scripting API. Process Management Tools For an overview of all the tools in the Process Palette, and functions that apply to the common process definitions, see Managing Processes. Example: Adding a Script Activity to an operation The procedure below walks you through adding a Script activity to a specific operation using the Process Editor. When you've added the script, you can run the operation in Test Client and you will see a new header in the Response/Headers tab. Example: to add a specific script activity to an operation - Open the Process Editor for a specific operation. - Drag and drop the Script Activity icon to the grid. - Change the arrows so that the order of activity is Receive > Invoke Activity > Script Activity > Reply: - Drag the end arrow from Reply Activity to Script Activity. This directs the Script Activity to execute directly after the Invoke Activity. - Click the center of the Script Activity and drag and drop (draw) a new arrow to the center of the Reply Activity. This directs the Reply Activity to execute after the Script Activity. - Specify values for the Script Activity: - Double-click any activity to go to the Activity Edit page. - From the Language drop-down list, choose JavaScript. - In the Script text box, type or paste the following script: var msg = processContext.getVariable("message"); var headers = msg.getTransportHeaders(); headers.add("myHeaderName", "myHeaderValue"); msg.setTransportHeaders(headers); - Click the Save icon (on the palette) to save the data. - Click Cancel to exit the Process Editor and return to the Implementation page. For additional information on using script activity variables, refer to the Process Editor reference documentation for the Script activity. What's Next? Now that you've familiarized yourself with the high-level approach of building processes, and know what reference materials and tools are available, you can start building your processes.
http://docs.akana.com/ag/processes/getting_started_with_processes_pm8x.htm
2018-01-16T16:56:22
CC-MAIN-2018-05
1516084886476.31
[array(['images/sample_process_8x.jpg', 'Create Common Processes in the Processes folder'], dtype=object) array(['images/process_with_referenced_process_activity_8x.jpg', 'Referencing a process using a Process Activity in the Process Editor'], dtype=object) array(['images/script_example.jpg', 'Referencing a Script Function'], dtype=object) ]
docs.akana.com
Full Changelog¶ tip (XXX-XX-XX)¶ 1.3 (2011-03-24)¶ -)¶ Documentation: - Some webhelpers.mischelpers were undocumented. - Spelling corrections throughout, done by Marius Gedminas. webhelpers.date: - Adjust test in ‘test_date.py’ to account for leap years. (#61, reported by Andrey Rahmatullin / wrar) webhelpers.html.grid, webhelpers.pylonslib.grid: - Add ‘request’ and ‘url’)¶ - WebHelpers now depends on MarkupSafe. literaland escapenow use it. - webhelpers.html.builder: literaland escapenowhelper has a “type” argument. - Don’t put an “id” attribute on a hidden fields generated by the form()helper, including the magic _methodfield. The IDs will clash if there are multiple forms on the page. - webhelpers.html.tools: - Preserve case of “method” arg in button_to()for XHTML compatibility. Patch by transducer. - webhelpers.text: - Urlencode urlifyreturn value in case it contains special characters like ”?”. Reported by [email protected]. - webhelpers.util: - Fix bug in update_paramsin handling existing query strings. Support multiple values per parameter. 1.1 (2010-08-09)¶ -rc1 (2010-05-24)¶ - webhelpers.html.tags: - Change ‘id’helper to display an exception as Python would but without the traceback. 1.0b7 (2010-05-16)¶ -or routes.url_for(in that order). It will raise NotImplementedErrorif none of these are available. - Don’t allow extra positional args in constructor. The implementation does nothing with them, so it shouldn’t allow them. - Import sqlalchemy.ormas well as sqlalchemy. User Sybiam reports an error otherwise. - Add code to work with other iterable containers, contributed by Marcin Kuzminski. - webhelpers.pylonslib.flash: - New argument ignore_duplicateto prevent adding the same message multiple times. 1.0b6 (2010-04-23)¶ - webhelpers.containers / webhelpers.misc: NotGivenmoved_sizefor displaying numbers in SI units (“1.2 kilobytes”, “1.2 kB”, “1.0 KiB”). Contributed by Wojciech Malinowski. 1.0b5 (2010-03-18)¶ - webhelpers.html.converters: - Re-add import of renderand sanitizefrom webhelpers.html.render. That module is not public. - webhelpers.misc: - New exception OverwriteError. - Add excludeargument)¶ - webhelpers.string24. WebHelpers no longer supports Python 2.3. - webhelpers.feedgenerator: - Add a basic Geometryclass for the Geo helpers. - webhelpers.html.grid_demo: - Demonstrates webhelpers.html.grid. Run as “python -m webhelpers.html.grid_demo OUTPUT_DIRECTORY”. - webhelpers.html.converters: - Don’t import renderand sanitizeto converters module. (Reversed in 1.0b5.) - webhelpers.html.secure_form: - Move module to webhelpers.pylonslib.secure_formbecause it depends on pylons.session. - webhelpers.misc: - New helper flattento interpolate embedded lists and tuples. - New helper subclasses_onlytosmoved from cgito urlparsein Python 2.6. Patch by Mike Verdone. 1.0b3 (2009-12-29)¶ - webhelpers.feedgenerator: - Allow either lat-lon and lon-lat formats in geometry data. The default is lat-lon. For lon-lat, set GeoFeedMixin.is_input_latitude_firsttoimplements the old rails helpers. - webhelpers.util: - New helper update_paramsto update query parameters in a URL. 1.0b2 (2009-12-21)¶ webhelpers.constants: - Fix spelling of Massachusetts. webhelpers.feedgenerator: Sync with Django rev 11910. This adds GeoRSS and makes the API more extensible, as well as fixing a few bugs. (Re-added the Atom1 ‘published’ property.) (The ‘generator’ and ‘source’larg which, if true, inserts a newline between content elements and at the end of the tag for readability. Example: HTML.a("A", "B",AB</a>' HTML.a("A", "B",\nA\nB\n</a>\n' This does not affect HTML attributes nor the higher-level tag helpers. The exact spacing is subject to change. The tag building code has been refactored to accommodate this. webhelpers.html.tags: form()puts its hidden “_method” field in a ‘<div style=”display:none”>’ to conform to XHTML syntax. The style prevents the div from being displayed or affecting the layout. A new arg hidden_fieldsmay be a dict or iterable of additional hidden fields, which will be added to the div. - Set magic ID attribute in hiddenhelper to match behavior of the other tag helpers. image()can now calculate the width and height automatically from an image file, using either the PIL algorithm or the pure Python algorithm in webhelpers.media. It also logs the dimensions to the debug log for troubleshooting. webhelpers.html.tools: - Reimplement highlight()using the HTML builder. New arguments add flexibility. Deprecate the highlighterargument, which creates tags via string interpolation. - Fixed auto_link()to parse slash characters in query string. Patch by hanula; Bitbucket issue #10. - Fix HTML overescaping and underescaping in auto_link(). Patch by Marius Gedminas. A parsing bug remains: webhelpers.markdown / webhelpers.html.converters: webhelpers.markdownwill not be upgraded to the version 2 series but will remain at 1.7. Users who want the latest bugfixes and extensions should download the full Markdown package or the alternative Markdown2 from PyPI. - The markdown()helper in webhelpers.html.convertersnow has support for external Markdown implementations. You can pass a specific module via the markdownargument, otherwise it will attempt to import markdownor fall back to webhelpers.markdown. - To see which version is autoloaded, call _get_markdown_module()and inspect the .__file__, .version, and/or .version_infoattributes of the return value. webhelpers.media: - Bugfix in get_dimensions_pil. webhelpers.paginate: - Change for SQLAlchemy 0.6. (bug #11) webhelpers.pylonslib: - Fix HTML overescaping. Patch by Marius Gedminas. 1.0b1 (2009-11-20)¶ - Delete deprecated subpackage: rails. These are replaced by new helpers in date, html, misc, number, text. - Delete other deprecated subpackages: commands, hinclude, htmlgen, pagination. Pagination is replaced by paginate. - webhelpers.constants: uk_countiesreturns tuples rather than strings. - webhelpers.feedgenerator: rfc3339_datenow accepts date objects without crashing. - Add ‘generator’ and ‘source’ properties to RSS2 feeds. Patch by Vince Spicer. (Removed in 1.0b2 due to bugs.) - Add ‘published’ property to Atom1 feeds. - webhelpers.html.converters: - New helper render()formats HTML to text. - New helper sanitize()strips HTML tags from user input. - webhelprs.html.tags: - New helper css_classes()to add classes to a tag programmatically. - Fix bug in tag helpers when passing id_argument (although idis recommended instead). - Add OptionGroup class and optgroup support to select(). Patch by Alexandre Bourget. - webhelpers.html.tools: - New helper strip_tags()deletes HTML tags in a string. - webhelpers.paginate: - Allow all versions of SQLAlchemy > 0.3. - convert “_range” and “_pagelink” function to Page class method so that they can be overridden - pager “onclick” argument use template string value. So, javascript code can use “partial_url” or “page” value or any. Backward compatibility is considered. - Add presliced list option to avoid slicing when list is already. - webhelpers.pylonslib: - is now a package. - The Flashclass now accepts severity categories, thanks to Wichert Akkerman. The docstring shows how to set up auto-fading messages using Javascript a la Mac OSX’s “Growl” feature. This is backward compatible although you should delete existing sessions when upgrading from 0.6.x. webhelpers.pylonslib.minifycontains enhanced versions of javascript_linkand stylesheet_linkto minify (shrink) files for more efficient transmission. (EXPERIMENTAL: tests fail in unfinished/disabled_test_pylonslib_minify.py; see .) - webhelpers.text: - Port several helpers from Ruby’s “stringex” package. - urlify()converts any string to a URL-friendly equivalent. - remove_formatting(). - If the unidecodepackage is installed, these two helpers will also transliterate non-ASCII characters to their closest pronounciation equvivalent in ASCII. - Four other helpers reduce HTML entities or whitespace. 0.6.4 (12/2/2008)¶ - text(), password(), checkbox(), textarea(), and select() have a magic ‘id attribute. If not specified it defaults to the name. To suppress the ID entirely, pass id="". This is to help set the ID for title(). radio() doesn’t do this because it generates the ID another way. hidden() doesn’t because hidden fields aren’t used with labels. - Bugfixes in mt.select(): - selected values not being passed as list. - allow currently-selected value to be a long. - Delete experimental module webhelpers.html.form_layout. 0.6.3 (10/7/2008)¶ - Bugfix in distribute() found by Randy Syring. - New helpers title() and required_legend() in webhelpers.html.tags. - New directory webhelpers/public for static files - Suggested stylesheet webhelpers/public/stylesheets/webhelpers.css (You’ll have to manually add this to your application.) 0.6.2 (10/2/2008)¶ - nl2br() and format-paragraphs were not literal-safe. - webhelpers.converters: - New helper transpose() to turn a 2D list sideways (making the rows columns and the columns rows). - webhelpers.markdown: - Upgrade to Markdown 1.7. - Add a warning about escaping untrusted HTML to webhelpers.html.converters.markdown() docstring. - Did not include Markdown’s extensions due to relative import issues. Use the full Markdown package if you want footnotes or RSS. - webhelpers.media: - New module for muiltimedia helpers. Initial functions determine the size of an image and choose a scaling factor. - webhelpers.html.tags: - Options tuple contains Option objects for select/checkbox/radio groups. select() now uses this automatically. - checkbox() and radio() now have a labelargument. - webhelpers.number: - Population standard deviation contributed by Lorenzo Catucci. - webhelpers.html.form_layout: form field layout (PRELIMINARY, UNSTABLE). 0.6.1 (7/31/2008)¶ - Include a faster version of cgi.escape for use by the literal object. - Fixed bug in SimplerXMLGenerator that the FeedGenerator uses, so that it doesn’t use a {} arg. - New helpers: - nl2br() and format_paragraphs() in webhelpers.html.converters. - ul() and ol() in webhelpers.html.tags. - series() in webhelpers.text. - HTML.tag() is a synonym for make_tag(), both in webhelpers.html.builder. - Change default form method to “post” (rather than “POST”) to conform to XHTML. - Add DeprecationWarning for webhelpers.rails package, webhelpers.rails.url_for(), and webhelpers.pagination. 0.6 (07/08/2008)¶ - Add webhelpers.html.builder to generate HTML tags with smart escaping, along with a literal type to mark preformatted strings. - Deprecate webhelpers.rails, including its Javascript libraries (Prototype and Scriptaculous). Wrap all rails helpers in a literal. - Many new modules: - constants - countries, states, and provinces. - containers - high-level collections, including flash messages. - date - date/time (rails replacement). - html.converters - text-to-HTML (rails replacement). - html.tags - HTML tags (rails replacement). - html.tools - larger HTML chunks (rails replacement). - mail - sending email. - misc - helpers that are neither text, numeric, container, nor date. - number - numeric helpers and number formatters. - paginate - successor to deprecated pagination module. - text - non-HTML text formatting (rails replacement). - Removed dependency on simplejson and normalized quotes. Patch by Elisha Cook. COMPATIBILITY CHANGES IN 0.6 DEV VERSION¶ - image(), javascript_link(), stylesheet_link(), and auto_discovery_link() in webhelpers.html.tags do not add prefixes or suffixes to the URL args anymore; they output the exact URL given. Same for button_to() in webhelpers.html.tools. - webhelpers.html.tags.javascript_path was deleted. 0.3.3 (02/27/08)¶ - Fixed strip_unders so that it won’t explode during iteration when the size changes. - Updated feedgenerator with the latest changes from Django’s version (only a few additional attributes). 0.3.2 (09/05/07)¶ - Added capability to pass pagination a SA 0.4 Session object which will be used for queries. This allows compatibility with Session.mapper’d objects and normal SA 0.4 mapper relations. - Updated SQLAlchemy ORM pagination for SA 0.4 Session.mapper objects. - Updated Scriptaculous to 1.7.1 beta 3 (1.7.0 is incompatible with Prototype 1.5.1). Thanks errcw. Fixes #288. 0.3.1 (07/14/07)¶ - Added the secure_form_tag helper module, for generating form tags including client-specific authorization tokens for preventing CSRF attacks. Original patch by David Turner. Fixes #157. - current_url now accepts arguments to pass along to url_for. Fixes #251. - Updated prototype to 1.5.1.1. - Added image support to button_to. Patch by Alex Conrad. Fixes #184. - Fix radio_button and submit_to_remote not handling unicode values. Fixes #235. - Added support for the defer attribute to javascript_include_tag. Suggested by s0undt3ch. Fixes #214. - Added a distutils command compress_resources, which can combine CSS and Javascript files, and compress Javascript via ShrinkSafe. Add “command_packages=webhelpers.commands” in [global] in setup.cfg to enable this command for your package. 0.3 (03/18/2007)¶ -. - Added environ checking with Routes so that page will be automatically pulled out of the query string, or from the Routes match dict if available. - Added ability for paginate to check for objects that had SQLAlchemy’s assign_mapper applied to them. - Added better range checking to paginator to require a positive value that is less than the total amount of pages available for a page. -. - Fixed the broken markdown function. - Upgraded markdown from 1.5 to 1.6a. - Sync’d Prototype helper to 6057. - Sync’d Urls helper to 6070. - Sync’d Text helper to 6096. - Sync’d Date helper to 6080. - Sync’d Tags helper to 5857. - Sync’d Asset tag helper to 6057. - Sync’d Rails Number helper to 6045. - Updated Ajax commands to internally use with_to avoid name conflicts with Python 2.5 and beyond. Reported by anilj. Fixes #190. - Applied patch from David Smith to decode URL parts as Routes does. Fixes #186. - Changed pagination to give better response if its passed an invalid object. Patch from Christoph Haas. - Fixed scriptaculous helper docs example. Fixes #178. - Updated scriptaculous/prototype to Prototype 1.5.0 and Scriptaculous 1.7.0. - Updated scriptaculous javascripts to 1.6.5. Fixes #155. - Updated remote_function doc-string to more clearly indicate the arguments it can receive. - Synced Rails Javascript helper to 5245 (escape_javascript now escaping backslashes and allow passing html_options to javascript_tag). 0.2.2 (10/20/06)¶ - Fixed tag_options function to not str() string and let html_escape handle it so unicode is properly handled. Reported with fix by Michael G. Noll. - Added sqlalchemy.Query support to the pagination orm wrappers, patch from Andrija Zarić - Fixed python 2.3 compliance in webhelpers.rails (use of sorted()) (Thanks Jamie Wilkinson) 0.2.1 (9/7/06)¶ - Adding counter func to text helpers, patch from Jamie Wilkinson. - Sync’d Rails Text helper to 4994. - Sync’d Rails Asset tag helper to 4999. - Sync’d Rails Form tag helper to 5045, also doesn’t apply to our version. - Sync’d Rails Javascript func to 5039, doesn’t apply to us. - Updated Scriptaculous to 1.6.3. - Updated Prototype to 1.5.0_rc1. - Updated radio_button so that id’s are unique. Brings up to date with Rails changeset #4925, also fixes #103. - More precise distance_of_time_in_words (Follows bottom half of #4989 Rails changeset) - button_to accepts method keyword so you can PUT and DELETE with it. (Follows #4914 Rails changeset) - Fixed auto_link to parse more valid url formats (Thanks Jamie Wilkinson). - Sync’d text helper from latest Rails version. - Fixed form tag’s method matching to be case insensitive. 0.2 (8/31/06)¶ - Adding simplejson req, adding use of json’ification. Updated scriptaculous helpers to split out JS generation for use in JS Generation port. - Finished sync’ing Rails ports (urls, tags) in WebHelpers. Closes #69. url and prototype tests updated, url helpers updated to handle method argument. - Sync’d scriptaculous helper. - Sync’d javascript, prototype helpers and prototype.js to latest Rails modifications. Added more prototype tests. - Sync’d form_options, form_tag helpers. form_tag’s form function can now accept other HTTP methods, and will include a hidden field for them if its not ‘get’ or ‘post’. - Sync’d number helper, added number unit tests. - Added markdown.py (python-markdown) for new markdown support in text helper. - Added textile.py (PyTextile) for new textilize support in text helper. - Brought asset/date/text helpers up to date with revision info. 0.1.3 (Release)¶ - Brought feedgenerator in line with Django’s version, which fixed the missing support for feed categories and updated classes for new-style. Other minor feed updates as well. Now synced as of Django r3143. - Fixed typo in feedgenerator import, reported by [email protected]. - Added webhelpers.rails.asset_tag, for generating links to other assets such as javascripts, stylesheets, and feeds.
http://webhelpers.readthedocs.io/en/latest/changelog.html
2018-01-16T16:57:36
CC-MAIN-2018-05
1516084886476.31
[]
webhelpers.readthedocs.io
Offset an object to create a new object whose shape parallels the shape of the original object. OFFSET creates a new object whose shape parallels the shape of a selected object. Offsetting a circle or an arc creates a larger or smaller circle or arc, depending on which side you specify for the offset. A highly effective drawing technique is to offset objects and then trim or extend their ends. Special Cases for Offset Polylines and Splines 2D polylines and splines are trimmed automatically when the offset distance is larger than can otherwise be accommodated. Closed 2D polylines that are offset to create larger polylines result in potential gaps between segments. The OFFSETGAPTYPE system variable controls how these potential gaps are closed.
http://docs.autodesk.com/ACD/2011/ENU/filesAUG/WS1a9193826455f5ffa23ce210c4a30acaf-6a29.htm
2018-01-16T17:31:10
CC-MAIN-2018-05
1516084886476.31
[]
docs.autodesk.com
- Break up huge files into manageable chunks so that they can be uploaded and opened in Excel - Create an FAQ accordion - Get a visitor's location with JavaScript - jQuery Animated Typing with Typed.js - Mobile Navigation Anchor Links Not Working? - Reveal: jQuery Modals Made Easy - Show & hide specific content on mobile devices - TwentyTwenty jQuery Plugin - Use Google Fonts on your website
http://docs.minionmade.com/
2018-04-19T11:28:46
CC-MAIN-2018-17
1524125936914.5
[]
docs.minionmade.com
Scope of Support When can I request support? If you have issues with the installation or configuration of functionality within Foodomaa™. If you do not understand the working process of a system/module in Foodomaa™. If something is not working in the way it should. If you find bugs or security issues. When is support out of the scope of Foodomaa? Requesting for customization services. Requesting for a TeamViewer or any online session for the understanding working of a piece of code or the whole system. Integration with 3rd party applications (Tools, POS, ERP, SAAS, CMS, etc.) Integration with a new payment gateway. Custom WebView support. (We do not guarantee the working of Foodomaa™ if you wrap it within a WebView/TWA for Android or iOS) If the application core files (PHP, JS, CSS, HTML, BLADE, JSON) files are modified or edited in any way, we will not be able to provide support unless all files are restored to their original form of the corresponding version. If the Database is modified(creating, updating, or deleting) manually, the support is Void until a fresh install is done or a backup of the original Database is restored(if available) Channels for Support The Offical and only channel for support of Foodomaa™ is through our HelpDesk Portal. to visit the Portal. We DO NOT reply to email conversations and comments on CodeCanyon. Creating tickets on our HelpDesk Portal is the only way to get support. The Official Community Forums is on its way and will be released soon. Extras - Previous Limitations Next - Extras Refund Policy Last modified 1yr ago Copy link Contents When can I request support? When is support out of the scope of Foodomaa? Channels for Support
https://docs.foodomaa.com/extras/scope-of-support
2022-05-16T16:10:24
CC-MAIN-2022-21
1652662510138.6
[]
docs.foodomaa.com
So where do we start? Pride - A community of ostriches Discord - Communication platform built to create and foster a strong, inspiring community Slackor Cisco Spark, but allows us to focus on a key feature we love within the Legesher community - our voice. #introductionschannel 2⃣Check out our Code of Conduct 3⃣ Learn how to contribute to Legesher #Welcomechannel. #introductions: place to connect with other members of the pride. #announcements: place where major updates will be communicated. Code Github - a system that uses git, a version control software so you can version, share, collaborate and deploy code. committing a change), you’re able to go back to that “snapshot” of the project and restart from there if you find yourself in a pigeon hole. Just like checkpoints in video games, if you fail a challenge you can restart at the checkpoint. Best thing about git, is that you can create checkpoints whenever and however often you want! Help me set up an organization nextfor you will be contributing to the Legesherorganization's repositories to start! Text Editor - interface that allows you to open and edit your code in an user-friendly environment packagesto enhance the capabilities of the editor. This is where many of the legesher packages will be made available / what we will be building off of. For example, on the image above you can see the language-legesher-pythonpackage installed and enabled on my editor.
https://docs.legesher.io/legesher-docs/getting-started
2022-05-16T14:24:29
CC-MAIN-2022-21
1652662510138.6
[]
docs.legesher.io
MicroPython external C modules¶ When developing modules for use with MicroPython you may find you run into limitations with the Python environment, often due to an inability to access certain hardware resources or Python speed limitations. If your limitations can’t be resolved with suggestions in Maximising MicroPython speed, writing some or all of your module in C (and/or C++ if implemented for your port) is a viable option. If your module is designed to access or work with commonly available hardware or libraries please consider implementing it inside the MicroPython source tree alongside similar modules and submitting it as a pull request. If however you’re targeting obscure or proprietary systems it may make more sense to keep this external to the main MicroPython repository. This chapter describes how to compile such external modules into the MicroPython executable or firmware image. Both Make and CMake build tools are supported, and when writing an external module it’s a good idea to add the build files for both of these tools so the module can be used on all ports. But when compiling a particular port you will only need to use one method of building, either Make or CMake. An alternative approach is to use Native machine code in .mpy files which allows writing custom C code that is placed in a .mpy file, which can be imported dynamically in to a running MicroPython system without the need to recompile the main firmware. Structure of an external C module¶ A MicroPython user C module is a directory with the following files: *.c/ *.cpp/ *.hsource code files for your module. These will typically include the low level functionality being implemented and the MicroPython binding functions to expose the functions and module(s). Currently the best reference for writing these functions/modules is to find similar modules within the MicroPython tree and use them as examples. micropython.mkcontains the Makefile fragment for this module. $(USERMOD_DIR)is available in micropython.mkas the path to your module directory. As it’s redefined for each c module, is should be expanded in your micropython.mkto a local make variable, eg EXAMPLE_MOD_DIR := $(USERMOD_DIR) Your micropython.mkmust add your modules source files relative to your expanded copy of $(USERMOD_DIR)to SRC_USERMOD, eg SRC_USERMOD += $(EXAMPLE_MOD_DIR)/example.c If you have custom compiler options (like -Ito add directories to search for header files), these should be added to CFLAGS_USERMODfor C code and to CXXFLAGS_USERMODfor C++ code. micropython.cmakecontains the CMake configuration for this module. In micropython.cmake, you may use ${CMAKE_CURRENT_LIST_DIR}as the path to the current module. Your micropython.cmakeshould define an INTERFACElibrary and associate your source files, compile definitions and include directories with it. The library should then be linked to the usermodtarget. add_library(usermod_cexample INTERFACE) target_sources(usermod_cexample INTERFACE ${CMAKE_CURRENT_LIST_DIR}/examplemodule.c ) target_include_directories(usermod_cexample INTERFACE ${CMAKE_CURRENT_LIST_DIR} ) target_link_libraries(usermod INTERFACE usermod_cexample) See below for full usage example. Basic example¶ This simple module named cexample provides a single function cexample.add_ints(a, b) which adds the two integer args together and returns the result. It can be found in the MicroPython source tree in the examples directory and has a source file and a Makefile fragment with content as descibed above: micropython/ └──examples/ └──usercmodule/ └──cexample/ ├── examplemodule.c ├── micropython.mk └── micropython.cmake Refer to the comments in these files for additional explanation. Next to the cexample module there’s also cppexample which works in the same way but shows one way of mixing C and C++ code in MicroPython. Compiling the cmodule into MicroPython¶ To build such a module, compile MicroPython (see getting started), applying 2 modifications: Set the build-time flag USER_C_MODULESto point to the modules you want to include. For ports that use Make this variable should be a directory which is searched automatically for modules. For ports that use CMake this variable should be a file which includes the modules to build. See below for details. Enable the modules by setting the corresponding C preprocessor macro to 1. This is only needed if the modules you are building are not automatically enabled. For building the example modules which come with MicroPython, set USER_C_MODULES to the examples/usercmodule directory for Make, or to examples/usercmodule/micropython.cmake for CMake. For example, here’s how the to build the unix port with the example modules: cd micropython/ports/unix make USER_C_MODULES=../../examples/usercmodule You may need to run make clean once at the start when including new user modules in the build. The build output will show the modules found: ... Including User C Module from ../../examples/usercmodule/cexample Including User C Module from ../../examples/usercmodule/cppexample ... For a CMake-based port such as rp2, this will look a little different (note that CMake is actually invoked by make): cd micropython/ports/rp2 make USER_C_MODULES=../../examples/usercmodule/micropython.cmake Again, you may need to run make clean first for CMake to pick up the user modules. The CMake build output lists the modules by name: ... Including User C Module(s) from ../../examples/usercmodule/micropython.cmake Found User C Module(s): usermod_cexample, usermod_cppexample ... The contents of the top-level micropython.cmake can be used to control which modules are enabled. For your own projects it’s more convenient to keep custom code out of the main MicroPython source tree, so a typical project directory structure will look like this: my_project/ ├── modules/ │ ├── example1/ │ │ ├── example1.c │ │ ├── micropython.mk │ │ └── micropython.cmake │ ├── example2/ │ │ ├── example2.c │ │ ├── micropython.mk │ │ └── micropython.cmake │ └── micropython.cmake └── micropython/ ├──ports/ ... ├──stm32/ ... When building with Make set USER_C_MODULES to the my_project/modules directory. For example, building the stm32 port: cd my_project/micropython/ports/stm32 make USER_C_MODULES=../../../modules When building with CMake the top level micropython.cmake – found directly in the my_project/modules directory – should include all of the modules you want to have available: include(${CMAKE_CURRENT_LIST_DIR}/example1/micropython.cmake) include(${CMAKE_CURRENT_LIST_DIR}/example2/micropython.cmake) Then build with: cd my_project/micropython/ports/esp32 make USER_C_MODULES=../../../../modules/micropython.cmake Note that the esp32 port needs the extra CMakeLists.txt file. You can also specify absolute paths to USER_C_MODULES. ..for relative paths due to the location of its main All modules specified by the USER_C_MODULES variable (either found in this directory when using Make, or added via include when using CMake) will be compiled, but only those which are enabled will be available for importing. User modules are usually enabled by default (this is decided by the developer of the module), in which case there is nothing more to do than set USER_C_MODULES as described above. If a module is not enabled by default then the corresponding C preprocessor macro must be enabled. This macro name can be found by searching for the MP_REGISTER_MODULE line in the module’s source code (it usually appears at the end of the main source file). The third argument to MP_REGISTER_MODULE is the macro name, and this must be set to 1 using CFLAGS_EXTRA to make the module available. If the third argument is just the number 1 then the module is enabled by default. For example, the examples/usercmodule/cexample module is enabled by default so has the following line in its source code: MP_REGISTER_MODULE(MP_QSTR_cexample, example_user_cmodule, 1); Alternatively, to make this module disabled by default but selectable through a preprocessor configuration option, it would be: MP_REGISTER_MODULE(MP_QSTR_cexample, example_user_cmodule, MODULE_CEXAMPLE_ENABLED); In this case the module is enabled by adding CFLAGS_EXTRA=-DMODULE_CEXAMPLE_ENABLED=1 to the make command, or editing mpconfigport.h or mpconfigboard.h to add #define MODULE_CEXAMPLE_ENABLED (1) Note that the exact method depends on the port as they have different structures. If not done correctly it will compile but importing will fail to find the module.
https://docs.micropython.org/en/v1.15/develop/cmodules.html
2022-05-16T15:58:10
CC-MAIN-2022-21
1652662510138.6
[]
docs.micropython.org
security session limit application modify Contributors Modify per-application session limit Availability: This command is available to cluster administrators at the advanced privilege level. Description This command allows modification of a per-application management session limit. modifying management session limits for some custom applications. cluster1::*> security session limit application modify -interface ontapi -application custom* -max-active-limit 4 2 entries were modified.
https://docs.netapp.com/us-en/ontap-cli-9111/security-session-limit-application-modify.html
2022-05-16T14:18:28
CC-MAIN-2022-21
1652662510138.6
[]
docs.netapp.com
Crate string_generator Provides ways of creating random strings from patterns. Error type for string generation. empty contains any catagories that contained no variants, and so therefore had no effect on the generated string; failed contains any catagories that were requested by the pattern but did not exist in the catagories hashmap; attempt contains what the program could construct ignoring any errors. It should be noted, that if any error occurs, there will be null characters (U+0000) present in the string. empty failed attempt Generate a string from a set of catagories, a pattern, and a random number generator. The patterns are scanned through one at a time, then used to pull a slice out of the hashmap. An entry from this is then reandomly selected, and turned into a character. These are all combined into a string and returned. Should an error occur, an instance of GenStringError will be returned. GenStringError
https://docs.rs/string_generator/latest/string_generator/
2022-05-16T14:29:21
CC-MAIN-2022-21
1652662510138.6
[]
docs.rs
Exporting a COE dashboard Export the Business Information and Customized COE Dashboard from the staging environment to the production environment. Procedure - Open the Bot Insight window. See Viewing a dashboard. - Click the Configure tab. - Search for the COE dashboard.The default COE dashboard appears in the search results. The dashboard name appears in orange color text. - Select the COE dashboard that you want to view from the search results.The selected COE dashboard appears. - Click Export.The Export window appears. - Navigate to the export location. - Click OK.The .bipkg COE dashboard export file appears in the export location.
https://docs.automationanywhere.com/ko-KR/bundle/enterprise-v11.3/page/enterprise/topics/bot-insight/user/exporting-coe-dashboard.html
2022-05-16T16:43:22
CC-MAIN-2022-21
1652662510138.6
[]
docs.automationanywhere.com
Action structure reference This section is a reference for action configuration only. For a conceptual overview of the pipeline structure, see CodePipeline pipeline structure reference. Each action provider in CodePipeline uses a set of required and optional configuration fields in the pipeline structure. This section provides the following reference information by action provider: Valid values for the ActionTypefields included in the pipeline structure action block, such as Ownerand Provider. Descriptions and other reference information for the Configurationparameters (required and optional) included in the pipeline structure action section. Valid example JSON and YAML action fields. This section is updated periodically with more action providers. Reference information is currently available for the following action providers: Topics - Amazon ECR - Amazon ECS and CodeDeploy blue-green - Amazon Elastic Container Service - Amazon S3 deploy action - Amazon S3 source action - AWS AppConfig - AWS CloudFormation - AWS CloudFormation StackSets - AWS CodeBuild - AWS CodeCommit - AWS CodeDeploy - CodeStarSourceConnection for Bitbucket, GitHub, and GitHub Enterprise Server actions - AWS Device Farm - AWS Lambda - Snyk - AWS Step Functions
https://docs.aws.amazon.com/codepipeline/latest/userguide/action-reference.html
2022-05-16T16:50:29
CC-MAIN-2022-21
1652662510138.6
[]
docs.aws.amazon.com
Cross-selling is a sales technique used to get a customer to spend more by purchasing a product that's related to what's being bought already. What cross-sell code will do? The widget will show cross-sell offers on the thank you page. with modern UI and lots of customisation option to match the need of different types of customers. How it will benefit the merchant? By cross-selling customer will buy more products which in turn increases the conversion How to setup the widget? - Drag and drop the widget to thankyou page - Select the cross sell products and set the discount How it looks on thank you page Important: If your store is password protected then the cross-sell widget may not show on the order confirmation page (If you take view order from Shopify admin > Orders). If you are authenticated with a password on the storefront then, do a test order checkout to see the cross-sell widget on the actual thank you page. Did we miss something? Not to worry! Just email our support team at [email protected]
https://docs.dinosell.com/how-to-use-cross-sell-widget-001c739d313a4b87bc77ca0fe83c7465
2022-05-16T14:50:58
CC-MAIN-2022-21
1652662510138.6
[]
docs.dinosell.com
The "xzip_list_dir" Function #Syntax #Description Returns an array of strings containing the file names and extensions (and optionally, relative paths) contained within an individual directory within the given archive.! Unlike xzip_list, the resulting array indices do not correspond to archive indices. However, many functions support inputting an array of files, in which case an array returned by this function can be passed in directly. tip By default, only the contents of the exact folder specified will be listed. To include the contents of any subfolders as well, see xzip_recurse. #Example This will retrieve a list of files contained in a specific subfolder of an archive and extract the first file in the list.
https://docs.xgasoft.com/xzip/reference-guide/content/xzip_list_dir/
2022-05-16T16:02:16
CC-MAIN-2022-21
1652662510138.6
[]
docs.xgasoft.com
Reply to Users This page covers the different ways you can reply to a user as well as their separate workflows and how to access them. Communication with your app users can be initiated by reaching out to the user from your dashboard to reply to a specific bug report, crash Crash Reports A crash is a negative experience for users of an application. A way to let your users know that you are aware of the crash they encountered and that you're working on a fix for it is by sending. Alternatively, you can reply to multiple specific users. To see a list of all users affected by a particular crash, select View all users from the chat pop-up. Click on this link to see a list of all users affected by that crash. From this page, you can select multiple users and send a single reply to the selected users. This is particularly useful when you want to reach out to specific app users affected by a crash, but not all affected users. This page in your dashboard lists all of your app users affected by a crash. Lastly, you can reply to the individual affected user of a specific crash occurrence by selecting the Reply to User (or View Chat if one already exists) button in the occurrence page and the chat pop-up will appear. Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence._10<< Click on the buttons shown here in the survey results page of your dashboard to open a conversation with your survey responders._12<< .
https://docs.instabug.com/docs/android-user-conversations
2022-06-25T08:36:24
CC-MAIN-2022-27
1656103034877.9
[array(['https://files.readme.io/b6d906e-Bug_Report_Content_-_Reply_to_User_Copy.png', 'Bug Report Content - Reply to User Copy.png Reply to your users by clicking on the button shown here in each bug report.'], dtype=object) array(['https://files.readme.io/b6d906e-Bug_Report_Content_-_Reply_to_User_Copy.png', 'Click to close... Reply to your users by clicking on the button shown here in each bug report.'], dtype=object) array(['https://files.readme.io/784be94-Crash_Reporting_-_Reply_Button_Copy.png', 'Crash Reporting - Reply Button Copy.png Click on the **Reply to Users** button to open the chat pop-up and send a message to all users affected by that crash.'], dtype=object) array(['https://files.readme.io/784be94-Crash_Reporting_-_Reply_Button_Copy.png', 'Click to close... Click on the **Reply to Users** button to open the chat pop-up and send a message to all users affected by that crash.'], dtype=object) array(['https://files.readme.io/ebc0b88-Crash_Reporting_-_Reply_Many_Copy_2.png', 'Crash Reporting - Reply Many Copy 2.png Click on this link to see a list of all users affected by that crash.'], dtype=object) array(['https://files.readme.io/ebc0b88-Crash_Reporting_-_Reply_Many_Copy_2.png', 'Click to close... Click on this link to see a list of all users affected by that crash.'], dtype=object) array(['https://files.readme.io/d48b235-Crash_Reporting_-_Reply_Many_Copy_3.png', 'Crash Reporting - Reply Many Copy 3.png This page in your dashboard lists all of your app users affected by a crash.'], dtype=object) array(['https://files.readme.io/d48b235-Crash_Reporting_-_Reply_Many_Copy_3.png', 'Click to close... This page in your dashboard lists all of your app users affected by a crash.'], dtype=object) array(['https://files.readme.io/9c9032d-Crash_Reporting_-_Reply_Occurrence_Copy.png', 'Crash Reporting - Reply Occurrence Copy.png Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence.'], dtype=object) array(['https://files.readme.io/9c9032d-Crash_Reporting_-_Reply_Occurrence_Copy.png', 'Click to close... Click on this button to open the chat pop-up and send a message to the individual user affected by that crash occurrence.'], dtype=object) array(['https://files.readme.io/3657cb2-Surveys_-_Reply_Copy.png', 'Surveys - Reply Copy.png Click on the buttons shown here in the survey results page of your dashboard to open a conversation with your survey responders.'], dtype=object) array(['https://files.readme.io/3657cb2-Surveys_-_Reply_Copy.png', 'Click to close... Click on the buttons shown here in the survey results page of your dashboard to open a conversation with your survey responders.'], dtype=object) array(['https://files.readme.io/39ea5bd-Chats_Icons_Legends.png', 'Chats Icons Legends.png The chat icons in your dashboard are visible in your bugs page and survey results pages.'], dtype=object) array(['https://files.readme.io/39ea5bd-Chats_Icons_Legends.png', 'Click to close... The chat icons in your dashboard are visible in your bugs page and survey results pages.'], dtype=object) ]
docs.instabug.com
Settings Settings for a saved project can be accessed from your project’s left navigation bar. This dialog reflects any information that you entered for the project. Clicking into editable sections will display a “save button” and allow you to make changes. Settings includes three sections: Project Settings Original Training Settings Delete Project Next step… Review general Key Concepts, Features, and Terms mentioned throughout Sparsify.
https://docs.neuralmagic.com/sparsify/v0.8.0/source/userguide/07-settings.html
2022-06-25T07:37:40
CC-MAIN-2022-27
1656103034877.9
[]
docs.neuralmagic.com
Power Analysis The power analysis tool leverages past data for a given metric to estimate the relationship between three variables: - Minimum detectable effect (MDE): The smallest change in the metric that the experiment can detect. For example: An MDE of 1% means that if there's a true effect of 1% or larger on our metric, we expect the experiment will show a statistically significant result. If the effect is smaller than 1%, then it will likely fall inside the confidence intervals and not be statisctically significant. - Number of days: How long the experiment is active. Longer running experiment typically have more observations, leading to tighter confidence intervals and smaller MDE. - Allocation: The percentage of traffic that participates in the experiment. Larger allocation leads to smaller MDE, so it's often desireable to allocate as many users as possible to get faster or more sensisitve results. When there's a risk of negative impact however, it's useful to know the smallest allocation that can achieve the desired MDE. Using the Tool To begin, launch the tool from the experiment creation flow: Select a metric of interest Select the type of analysis to perform Click on Start Calculation to see the results Fixed MDE Analysis Choose this option if the smallest effect size that the experiment should detect is known. Enter the desired MDE as a percentage of the current metric value. For example: If a website currently gets 1,000 clicks per day, an MDE of 5% means we can detect a change of 50 or more clicks per day. The results show the minimum number of days needed to reach this MDE for different allocation percentages. In the example below, the experiment should run for at least 11 days with 100% allocation or 21 days at 50%. Fixed Allocation Analysis Choose this option to understand how the length of the experiment impacts the MDE. The example below shows how the MDE for a click count metric shrinks over time in an experiment with 100% allocation. On the first day the MDE is 64%, after 30 days it's down to 11%. Advanced Options Advanced settings to custumize the analysis: - One-sided or Two-sided test: Toggle this setting to select the type of z-test to use for the analysis. - Number of Experiment Groups: The total number of groups in the experiment, including control. - Test Group Size: Default is 1, meaning the test and control groups are the same size. For different sized groups, enter the multiplier applied to the test group size. E.g.: 0.5 means the test group is half the size of the control group. - ID Type: The Unit ID type that the experiment will be based on. - Custom Date Range: The date range for past data used to obtain the metric mean, variance, and estimate of the total available traffic. By default, the most recent 7 days are used. Use this feature to exclude holidays or other events that are not representative of typical data trends expected during the experiment. A 7 day period is recommended to account for weekly seasonality.
https://docs.statsig.com/experiments-plus/power-analysis
2022-06-25T08:12:00
CC-MAIN-2022-27
1656103034877.9
[array(['https://user-images.githubusercontent.com/90343952/145107332-49befa18-3ca0-4cde-89cb-e80a1663754c.png', 'image'], dtype=object) array(['https://user-images.githubusercontent.com/90343952/145110692-75e23199-1a1d-4cc7-bb53-445e43b9ce53.png', 'image'], dtype=object) array(['https://user-images.githubusercontent.com/90343952/145121657-c36d37ce-7c19-4088-bd20-6a216a00cd37.png', 'image'], dtype=object) array(['https://user-images.githubusercontent.com/90343952/145122364-02af83d7-ea3d-4b24-8a10-506c1f227f8b.png', 'image'], dtype=object) ]
docs.statsig.com
array of IPSetSummary objects for the IP sets that you manage. See also: AWS API Documentation See 'aws help' for descriptions of global parameters. list-ip-sets --scope <value> [--next-marker <value>] [--limit <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>] --scope (string) Specifies whether this is for an Amazon CloudFront distribution or for a regional application. A regional application can be an Application Load Balancer (ALB), an Amazon --next-marker . --limit (integer) The maximum number of objects that you want WAF to return for this request. If more objects are available, in the response, WAF provides a NextMarkervalue that you can use in a subsequent call to get the next batch of IP sets The following list-ip-sets retrieves all IP sets for the account that have regional scope. aws wafv2 list-ip-sets \ --scope REGIONAL Output: { "IPSets":[ { "ARN":"arn:aws:wafv2:us-west-2:123456789012:regional/ipset/testip/a1b2c3d4-5678-90ab-cdef-EXAMPLE11111", "Description":"", "Name":"testip", "LockToken":"0674c84b-0304-47fe-8728-c6bff46af8fc", "Id":"a1b2c3d4-5678-90ab-cdef-EXAMPLE11111 " } ], "NextMarker":"testip" } For more information, see IP Sets and Regex Pattern Sets in the AWS WAF, AWS Firewall Manager, and AWS Shield Advanced Developer Guide. NextMarker -> . IPSets -> (list) Array of IPSets. This may not be the full list of IPSets that you have defined. See the Limitspecification for this request. (structure) High-level information about an IPSet , returned by operations like create and list. This provides information like the ID, that you can use to retrieve and manage an IPSet, and the ARN, that you provide to the IPSetReferenceStatement to use the address set in a Rule . Name -> (string)The name of the IP set. You cannot change the name of an IPSetafter you create it. Id -> (string)A unique identifier for the set. This ID is returned in the responses to create and list commands. You provide it to operations like update and delete. Description -> (string)A description of the IP set that helps with identification.. ARN -> (string)The Amazon Resource Name (ARN) of the entity.
https://docs.aws.amazon.com/cli/latest/reference/wafv2/list-ip-sets.html
2022-06-25T09:35:32
CC-MAIN-2022-27
1656103034877.9
[]
docs.aws.amazon.com
their ExtraHop appliances with, which cannot contain any spaces. For example, adalovelace. Full Name: A display name for the user, which can contain spaces. For example, Ada Lovelace. Password: The password for this account. Confirm Password: Re-type the password from the Password field. - In the Authentication Type section, select Local. - In the User Type section, select the type of privileges for the user. - Unlimited privileges enables full read and write access to the ExtraHop system, including Administration settings. - Limited privileges enable you to select from a subset of privileges and options. - Click Save. Thank you for your feedback. Can we contact you to ask follow up questions?
https://docs.extrahop.com/8.4/add_local_user/
2022-06-25T08:12:36
CC-MAIN-2022-27
1656103034877.9
[]
docs.extrahop.com
. Advanced Users¶ If you are familiar with running Chainlink oracle nodes, this information will get you started on the Moonbase Alpha TestNet quickly: - Chainlink documentation on Running a Chainlink Node - Moonbase Alpha WSS EndPoint: wss://wss.api.moonbase.moonbeam.network - Moonbase Alpha ChainId: 1287(hex: 0x507) - LINK Token on Moonbase Alpha: 0xa36085F69e2889c224210F603D836748e7dC0088 - You can get DEV tokens for testing on Moonbase Alpha once every 24 hours from the Moonbase Alpha Faucet Checking Prerequisites¶ To follow along with this guide, you will need to have: - Docker installed for running Postgres DB and ChainLink node containers - An account with funds. You can create one with Metamask. You can get DEV tokens for testing on Moonbase Alpha once every 24 hours from the Moonbase Alpha Faucet - Access to the Remix IDE in case you want to use it to deploy the oracle contract. For more information you can check out the Using Remix to Deploy to Moonbeam tutorial Getting Started¶ This guide will walk through the process of setting up the oracle node, summarized as: - Setup a Chainlink node connected to Moonbase Alpha - Fund the node - Deploy an oracle contract - Create a job on the node - Bond the node and oracle - Test using a client contract Node Setup¶ To get the node setup, you can take the following steps: Create a new directory to place all the necessary files mkdir -p ~/.chainlink-moonbeam && cd ~/.chainlink-moonbeam Create a Postgres DB with Docker (MacOS users may replace --network host \with -p 5432:5432) docker run -d --name chainlink_postgres_db \ --volume chainlink_postgres_data:/var/lib/postgresql/data \ -e 'POSTGRES_PASSWORD={YOUR-PASSWORD-HERE}' \ -e 'POSTGRES_USER=chainlink' \ --network host \ -t postgres:11 Make sure to replace {YOUR-PASSWORD-HERE}with an actual password. Docker will proceed to download the necessary images if they haven't already been downloaded Create an environment file for Chainlink in the chainlink-moonbeamdirectory. This file is read on the creation of the Chainlink container. MacOS users may replace localhostwith host.docker.internal echo "ROOT=/chainlink LOG_LEVEL=debug ETH_CHAIN_ID=1287 MIN_OUTGOING_CONFIRMATIONS=2 LINK_CONTRACT_ADDRESS={LINK-TOKEN-CONTRACT-ADDRESS} CHAINLINK_TLS_PORT=0 SECURE_COOKIES=false GAS_UPDATER_ENABLED=false ALLOW_ORIGINS=* ETH_URL=wss://wss.api.moonbase.moonbeam.network DATABASE_URL=postgresql://chainlink:{YOUR-PASSWORD-HERE}@localhost:5432/chainlink?sslmode=disable MINIMUM_CONTRACT_PAYMENT=0" > ~/.chainlink-moonbeam/.env Here, besides the password ( {YOUR-PASSWORD-HERE}), you also need to provide the LINK token contract ( {LINK-TOKEN-CONTRACT-ADDRESS}) Create an .apifile that stores the user and password used to access the node's API, the node's operator UI, and the Chainlink command line touch .api Set both an email address and another password echo "{AN-EMAIL-ADDRESS}" > ~/.chainlink-moonbeam/.api echo "{ANOTHER-PASSWORD}" >> ~/.chainlink-moonbeam/.api Lastly, you need another file that stores the wallet password for the node's address touch .password Set the third password echo "{THIRD-PASSWORD}" > ~/.chainlink-moonbeam/.password Launch the containers Note Reminder, do not store any production passwords in a plaintext file. The examples provided are for demonstration purposes only. To verify everything is running and that the logs are progressing use: docker ps #Containers Running docker logs --tail 50 {CONTAINER-ID} #Logs progressing Contract Setup¶ With the oracle node running, you can start to configure the smart contract side of things. First, you'll need to fund the oracle node by taking the following steps: Retrieve the address that the oracle node will use to send transactions and write data on-chain by logging into the ChainLink node's UI (located at). You'll need to use the credentials from the .apifile Go to the Configuration Page and copy the node address Fund the node. You can get DEV tokens for testing on Moonbase Alpha once every 24 hours from the Moonbase Alpha Faucet Next, you'll. For this example, you can use Remix to interact with Moonbase Alpha and deploy the contract. In Remix, you can create a new file and copy the following code: pragma solidity ^0.6.6; import "@chainlink/contracts/src/v0.6/Oracle.sol"; After compiling the contract, you can take the following steps to deploy and interact with the contract: - Head to the Deploy and Run Transactions tab - Make sure you've selected Injected Web3 and have MetaMask connected to Moonbase Alpha - Verify your address is selected - Enter the LINK token address and click Deploy to deploy the contract. MetaMask will pop-up and you can confirm the transaction - Once deployed, under the Deployed Contracts section, copy the address of the contract Lastly, you have to bond the oracle node and the oracle smart contract. A node can listen to the requests sent to a certain oracle contract, but only authorized (aka. bonded) nodes can fulfill the request with a result. To bond the oracle node and smart contract, you can take the following steps: - To set the authorization using the setFulfillmentPermission()function from the oracle contract, enter the address of the node that you want to bond to the contract - In the _allowedfield you can set a boolean that indicates the status of the bond, for this example enter in true - Click transact to send the request. MetaMask will pop-up and you can confirm the transaction - Check the oracle node is authorized with the getAuthorizationStatus()view function, passing in the oracle node address Creating a Job¶ The last step to have a fully configured Chainlink oracle is to create a job. Referring to Chainlink’s official documentation: A Job specifications, or specs, contain the sequential tasks that the node must perform to produce a final result. A spec contains at least one initiator and one task, which are discussed in detail below. Specs are defined using standard JSON so that they are human-readable and can be easily parsed by the Chainlink node. Seeing an oracle as an API service, a job here would be one of the functions that we can call and that will return a result. To get started creating your first job, take the following steps: - Go to the Jobs sections of your node - Click on New Job Next, you can create the new job: Paste the following JSON. This will create a job that will request the current ETH price in USD { "initiators": [ { "type": "runlog", "params": { "address": "YOUR-ORACLE-CONTRACT-ADDRESS" } } ], "tasks": [ { "type": "httpget", "params": { "get": "" } }, { "type": "jsonparse", "params": { "path": [ "USD" ] } }, { "type": "multiply", "params": { "times": 100 } }, { "type": "ethuint256" }, { "type": "ethtx" } ] } Make sure you enter your oracle contract address ( YOUR-ORACLE-CONTRACT-ADDRESS) - Create the job by clicking on Create Job And that is it! You have fully set up a Chainlink oracle node that is running on Moonbase Alpha. Using Any API¶ You can also create and use a job spec to work with any API. You can search for preexisting jobs from an independent listing service such as market.link. Please note that although the jobs might be implented for other networks, you'll be able to use the job spec to create the job for your oracle node on Moonbase Alpha. Once you find a job that fits your needs, you'll need to copy the job spec JSON and use it to create a new job. For example, the previous job spec can be altered to be more generic so it can be used for any API: { "initiators": [ { "type": "runlog", "params": { "address": "YOUR-ORACLE-CONTRACT-ADDRESS" } } ], "tasks": [ { "type": "httpget" }, { "type": "jsonparse" }, { "type": "multiply" }, { "type": "ethuint256" }, { "type": "ethtx" } ] } If you need a more custom solution, you can check out Chainlink's documentation to learn how to build your own External Adapter. Test the Oracle¶ To verify the oracle is up and answering requests, follow the using a Chainlink Oracle tutorial. The main idea is to deploy a client contract that makes requests to the oracle, and the oracle writes the requested data into the contract's storage.
https://docs.moonbeam.network/node-operators/oracle-nodes/node-chainlink/
2022-06-25T08:31:40
CC-MAIN-2022-27
1656103034877.9
[array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-banner.png', 'Chainlink Moonbeam Banner'], dtype=object) array(['/images/builders/integrations/oracles/chainlink/chainlink-basic-request.png', 'Basic Request Diagram'], dtype=object) array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-1.png', 'Docker logs'], dtype=object) array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-4.png', 'Deploy Oracle using Remix'], dtype=object) array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-5.png', 'Authorize Chainlink Oracle Node'], dtype=object) array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-6.png', 'Chainlink oracle New Job'], dtype=object) array(['/images/node-operators/oracle-nodes/chainlink/chainlink-node-7.png', 'Chainlink New Job JSON Blob'], dtype=object) ]
docs.moonbeam.network
Key Manager Service Upgrade Guide¶ This document outlines several steps and notes for operators to reference when upgrading their barbican from previous versions of OpenStack. Plan to Upgrade¶ The release notes should be read carefully before upgrading the barbican services. Starting with the Mitaka release, specific upgrade steps and considerations are well-documented in the release notes. Upgrades are only supported between sequential releases. When upgrading barbican, the following steps should be followed: Destroy all barbican services Upgrade source code to the next release Upgrade barbican database to the next release barbican-db-manage upgrade Start barbican services Upgrade from Newton to Ocata¶ The barbican-api-paste.ini configuration file for the paste pipeline was updated to add the http_proxy_to_wsgi middleware. It can be used to help barbican respond with the correct URL refs when it’s put behind a TLS proxy (such as HAProxy). This middleware is disabled by default, but can be enabled via a configuration option in the oslo_middleware group. See Ocata release notes. Upgrade from Mitaka to Newton¶ There are no extra instructions that should be noted for this upgrade. See Newton release notes. Upgrade from Liberty to Mitaka See Mitaka release notes.
https://docs.openstack.org/barbican/yoga/admin/upgrade.html
2022-06-25T07:44:32
CC-MAIN-2022-27
1656103034877.9
[]
docs.openstack.org
Appeon provides its own tools and plug-ins to configure an Appeon Server cluster and implement the load balancing and failover functionalities. An Appeon Server cluster is essentially a group of EAServers, each with Appeon Server and Appeon plug-in installed. Following are high level steps for configuring an Appeon Server cluster. For detailed instructions, please refer to Tutorial 5: Configure Appeon Server Cluster in Appeon Mobile Tutorials (Mobile only). The instructions are exactly the same for Appeon Web and Appeon Mobile. Install Appeon Server to multiple EAServers. Create Appeon Server cluster in AEM. Configure the Web server for the Appeon Server cluster. Install an Appeon application to the Appeon Server cluster and Web server(s).
https://docs.appeon.com/2016/installation_guide/bk11ch04.html
2020-07-02T20:08:27
CC-MAIN-2020-29
1593655879738.16
[]
docs.appeon.com
Creating a Command Workflow with a Custom Attribute You can create a command workflow to back up a VM, set the VM backup time, and send an email to notify the VM owner that the VM has been backed up, and also provide the backup time in the email. This topic explains how to: - Run a script through a workflow. - Set VM metadata using the output from a script step. - Set up email notification through a workflow. - Use variables to populate email addresses for notification. Note: The following steps are similar for all workflow types. Create a custom attribute Before creating the workflow, create a custom attribute to store the VM's last backup time. You will set a value for the custom attribute in a workflow step. - Click Configuration > Custom Attributes, and click Add. - In the Configure Attribute wizard, enter a name for the attribute, such as "Last Backup Time". These custom attributes can be displayed on the Details panel of the VM's Summary tab, so you should create user-friendly names. - Select the Text type. - Specify that the custom attribute should apply to Services, and click Next. - On the Configure Attribute page, leave the default selection, and click Finish. Create the command workflow - Go to Configuration > Command Workflows. - Click Add to create a new workflow. - On the Name & Type page, do the following: - Type a name for the workflow. - From Target Type, leave the default value of VM. - Click Next. Last Backup Time), type the following value: #{steps[1].output} This syntax uses the output of Step 1 in the workflow to input a value for the custom attribute. #{target.primaryOwner.email} The <a> tag is automatically added to links in emails (only the HTTP protocol is supported). For example, if the value of a custom attribute is a link, the value will be formatted as a link in the email. If you don't use HTML markup in the email body, the body is assumed to be plain text; <br> and <p> tags are automatically added for new lines. If you add HTML markup to the email body, however, no additional tags are added.
https://docs.embotics.com/commander/example_workflow.htm
2020-07-02T18:33:43
CC-MAIN-2020-29
1593655879738.16
[]
docs.embotics.com
Refer to Amazon Sales Channel 4.0+ for updated documentation. Onboarding: Catalog Search Step 2 Options for Listing Settings If you are managing a store that is in “Active” or “Inactive” status, see Catalog Search.. If you want to add attributes at this stage of onboarding, see Create Product Attributes for Amazon Matching. These settings are part of your store’s Listing Settings. Update these configurations during onboarding through the Listing Settings step. To configure Search Catalog settings: We recommend mapping these attributes and values if available. Completing this mapping is not required, but is beneficial for initial product matching and required for proper catalog syncing between Amazon and Magento. Expand the Catalog Search section. In the ASIN field, select the product attribute you created for the Amazon ASIN value. An ASIN (Amazon Standard Identification Numbers) is a unique block of 10. In the EAN field, select. In the GCID field, select. In the ISBN field, select the product attribute you created for the Amazon ISBN value. The International Standard Book Number (ISBN) is a unique commercial book identifier barcode. Each ISBN code uniquely identifies a book. ISBN have either 10 or 13 digits. All ISBN assigned after Jan 1, 2007 have 13 digits. In the UPC field, select the product attribute you created for the Amazon UPC value. The Universal Product Code (UPC) is a 12-digit bar code used extensively for retail packaging in United States. In the General Search field, select the product attribute you want to use for a general search match. This is an attribute that you can select to match Magento products to the appropriate Amazon listing. General search uses keyword searches from your catalog. As such, it is recommended to use a Magento attribute that carries relevant keywords, such as the product SKU or product name. General search may return many possible matches, and in such cases, you can select the appropriate Amazon listing from the possible matches. A common selection for this field is “Product Name.” When complete, continue to the Product Listing Condition section. Catalog Search
https://docs.magento.com/user-guide/sales-channels/amazon/ob-catalog-search.html
2020-07-02T20:05:09
CC-MAIN-2020-29
1593655879738.16
[array(['/user-guide/images/images/sales-channels/amazon/onboarding-step-listing-settings.png', None], dtype=object) ]
docs.magento.com
Exporting translations You can download a zip file with translations by clicking the Export button in the overview. This can be useful when you create translations in a staging environment, export them and import the translations in the production environment.
https://docs.onegini.com/msp/stable/token-server/topics/look-and-feel/translations/translations.html
2020-07-02T18:03:31
CC-MAIN-2020-29
1593655879738.16
[array(['img/add-translations.png', 'Configure translations'], dtype=object) array(['img/import-translations.png', 'Import translations'], dtype=object) array(['img/export-translations.png', 'Translations overview'], dtype=object) ]
docs.onegini.com
Configuring a Read-Write LDAP User Store¶ User management functionality is provided by default in all WSO2 Carbon-based products and is configured in the deployment.toml file found in the <PRODUCT_HOME>/repository/conf/ directory and the changes will be automatically applied to user-mgt.xml file in <PRODUCT_HOME>/repository/conf/ directory as well.. Info - Step 2: Updating the system administrator - Step 3: Starting the server Step 1: Setting up the read-write LDAP user store manager¶ Before you begin - Navigate to <PRODUCT_HOME>/repository/confdirectory to open deployment.tomlfile and do user_store_properties configurations as follows: [user_store.properties] TenantManager= "org.wso2.carbon.user.core.tenant.CommonHybridLDAPTenantManager" ConnectionURL="ldap://localhost:10390" ConnectionName="uid=admin,ou=system" UserSearchBase="ou=Users,dc=wso2,dc=org" GroupSearchBase="ou=Groups,dc=wso2,dc=org" ConnectionPassword="admin" AnonymousBind= "false" WriteGroups= "true" UserEntryObjectClass= "identityPerson" UserNameAttribute= "uid" UserNameSearchFilter= "(\u0026amp;(objectClass\u003dperson)(uid\u003d?))" UserNameListFilter= "(objectClass\u003dperson)" DisplayNameAttribute= "" GroupEntryObjectClass= "groupOfNames" GroupNameAttribute= "cn" GroupNameSearchFilter= "(\u0026amp;(objectClass\u003dgroupOfNames)(cn\u003d?))" GroupNameListFilter= "(objectClass\u003dgroupOfNames)" MembershipAttribute= "member" BackLinksEnabled= "false" SCIMEnabled= "true" IsBulkImportSupported= "true" UsernameJavaRegEx= "[a-zA-Z0-9._\\-|//]{3,30}$" RolenameJavaRegEx= "[a-zA-Z0-9._\\-|//]{3,30}$" PasswordHashMethod= "PLAIN_TEXT" ConnectionPoolingEnabled= "false" LDAPConnectionTimeout= "5000" ReplaceEscapeCharactersAtUserLogin= "true" EmptyRolesAllowed= "true" kdcEnabled= "false" defaultRealmName= "WSO2.ORG" StartTLSEnabled= "false" UserRolesCacheEnabled= "true" ConnectionRetryDelay= "2m" - The classattribute for a read-write LDAP is <UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager"> [user_store] class="org.wso2.carbon.user.core.ldap.ReadWriteLDAPUserStoreManager" type = "database" Once the above points are made note of and completed, you can start configuring your external read-write LDAP as the primary user store. Note Note that these configurations will automatically applied to the user-mgt.xml file so you do not need to edit it. The configuration for the external read/write user store in the user-mgt.xml file looks as follows. For more information about each of the properties used in the deployment.toml file for configuring the primary user store , see Properties of User Stores . <UserStoreManager class="org.wso2.carbon.user.core.ldap.ReadOnlyLDAPUserStoreManager"> <Property name="IsBulkImportSupported">true</Property> <Property name="MaxUserNameListLength">100</Property> <Property name="MultiAttributeSeparator">,</Property> <Property name="ConnectionPassword">admin</Property> <Property name="UserNameUniqueAcrossTenants">false</Property> <Property name="StoreSaltedPassword">true</Property> <Property name="TenantManager">org.wso2.carbon.user.core.tenant.CommonHybridLDAPTenantManager</Property> <Property name="UserSearchBase">ou=system</Property> <Property name="GroupNameSearchFilter">(&(objectClass=groupOfNames)(cn=?))</Property> <Property name="ConnectionPoolingEnabled">true</Property> <Property name="CaseInsensitiveUsername">true</Property> <Property name="UserNameSearchFilter">(&(objectClass=person)(uid=?))</Property> <Property name="LDAPConnectionTimeout">5000</Property> <Property name="UserNameAttribute">uid</Property> <Property name="GroupNameAttribute">cn</Property> <Property name="UsernameJavaRegEx">[a-zA-Z0-9._\-|//]{3,30}$</Property> <Property name="WriteGroups">true</Property> <Property name="AnonymousBind">false</Property> <Property name="ConnectionName">uid=admin,ou=system</Property> <Property name="ConnectionURL">ldap://localhost:10390</Property> <Property name="RolenameJavaScriptRegEx">^[\S]{3,30}$</Property> <Property name="RolenameJavaRegEx">[a-zA-Z0-9._\-|//]{3,30}$</Property> <Property name="PasswordJavaRegEx">^[\S]{5,30}$</Property> <Property name="PasswordHashMethod">PLAIN_TEXT</Property> <Property name="GroupSearchBase">ou=system</Property> <Property name="ReadGroups">true</Property> <Property name="ReplaceEscapeCharactersAtUserLogin">true</Property> <Property name="ConnectionRetryDelay">120000</Property> <Property name="MembershipAttribute">member</Property> <Property name="PasswordJavaRegExViolationErrorMsg">Password length should be within 5 to 30 characters</Property> <Property name="MaxRoleNameListLength">100</Property> <Property name="PasswordJavaScriptRegEx">^[\S]{5,30}$</Property> <Property name="BackLinksEnabled">false</Property> <Property name="UsernameJavaRegExViolationErrorMsg">Username pattern policy violated</Property> <Property name="UserRolesCacheEnabled">true</Property> <Property name="GroupNameListFilter">(objectClass=groupOfNames)</Property> <Property name="SCIMEnabled">false</Property> <Property name="PasswordDigest">SHA-256</Property> <Property name="UserNameListFilter">(&(objectClass=person)(!(sn=Service)))</Property> <Property name="UsernameJavaScriptRegEx">[a-zA-Z0-9._\\-|//]{3,30}$</Property> <Property name="ReadOnly">false</Property> </UserStoreManager> To read and write to an LDAPuserstore, it is important to ensure that the ReadGroupsand WriteGroupsproperties in the <PRODUCT_HOME>/repository/conf/deployment.tomlfile are set to true. WriteGroups = "true" ReadGroups = "true" Set the attribute to use as the username, typically either cnor uidfor LDAP. Ideally, UserNameAttributeand UserNameSearchFiltershould refer to the same attribute. If you are not sure what attribute is available in your user store, check with your LDAP administrator. UserNameAttribute = "uid" Specify the following properties that are relevant to connecting to the LDAP in order to perform various tasks. ConnectionURL = "ldap://localhost:<LDAPServerPort>" ConnectionName = "uid=admin,ou=system" ConnectionPassword = "admin" Note If you are using ldaps(secured LDAP) to connect to the LDAP: - You need set the ConnectionURLas shown below. - You also need to enable connection pooling for LDAPS connections at the time of starting your server, which will enhance server performance. ConnectionURL = "ldaps://10.100.1.100:636" In WSO2 products based on Carbon 4.5.x, you can set the LDAPConnectionTimeoutproperty: If the connection to the LDAP is inactive for the length of time (in milliseconds) specified by this property, the connection will be terminated.. ReadGroups = "true" Set the GroupSearchBaseproperty to the directory name where the Roles are stored. That is, the roles you create using the management console of your product will be stored in this directory location. Also, when LDAP searches for groups, it will start from this location of the directory. For example: GroupSearchBase = "ou=system,CN=Users,DC=wso2,DC=test" Set the GroupSearchFilter andGroupNameAttributes. For example: GroupSearchFilter = "(objectClass=groupOfNames)" GroupNameAttribute = "cn" Set the MembershipAttributeproperty as shown below: MembershipAttribute = "member" To read roles based on a backlink attribute, use thefollowingcodesnipetinsteadofthe above: ReadGroups = "false" GroupSearchBase = "ou=system" GroupSearchFilter = "(objectClass=groupOfNames)" GroupNameAttribute = "cn" MembershipAttribute = "member" BackLinksEnabled = "true" MembershipOfAttribute = "memberOf" Step 2: Updating the system administrator¶'. Update the [super_admin]section of your configuration as shown below. You do not have to update the password element as it is already set in the user store. [super_admin] username = "admin" password = "admin" create_admin_account = false If the user store can be written to, you can add the super tenant user to the user store. Therefore, create_admin_accountshould be set to true as shown below. username = "admin" password = "admin" create_admin_account = true Step 3: Starting the server¶ Start your server and try to log in as the admin user you specified in Step 2 .Top
https://apim.docs.wso2.com/en/3.0.0/administer/product-administration/managing-users-and-roles/managing-user-stores/configure-primary-user-store/configuring-a-read-write-ldap-user-store/
2020-07-02T19:34:50
CC-MAIN-2020-29
1593655879738.16
[]
apim.docs.wso2.com
Refer to Amazon Sales Channel 4.0+ for updated documentation. Price Adjustment The Price Adjustment section is differs slightly for Standard and Intelligent repricing rules. The “Match Competitor Price” option is only available in the Price Action drop-down when the Rule Type field is set to “ Intelligent repricing rule.” Sections of an intelligent repricing rule include: - Select Rule Type - Competitor Conditional Variances - Price Adjustment - Floor Price - Optional Ceiling Price The price adjustment determines what you do once you have identified the competitor price source. To configure the Price Adjustment section: Define your pricing adjustment in the Price Adjustment section. For Price Action, select an option: Decrease By: Choose when you want the defined price source value to be to be adjusted down, creating a lower price for the rule, before listing to Amazon. Increase By: Choose when you want the defined price source value to be adjusted up, creating a higher price for the rule, before listing to Amazon. Match Competitor Price: (Intelligent repricing rule only) Choose when you want to change your Amazon listing price to match the lowest competitor price, based on your competitor feedback and variance parameters. When set to Match Competitor Price, the Apply and Adjustment Amount fields are removed. For Apply, select an option: Apply as percentage: Choose when you want the defined price source value adjusted by a percentage. Apply as fixed amount: Choose when you want the defined price source value adjusted by a fixed amount. For Adjustment Amount (required), enter the numerical value for the price adjustment. When Apply is set to Apply as percentage, enter the percent value (example: 25 for a 25% percent adjustment). When Apply is set to Apply as fixed amount, enter the numerical value for the fixed amount (example: 25 for a $25 fixed adjustment). Intelligent Repricing Rule: Price Adjustment
https://docs.magento.com/user-guide/sales-channels/amazon/price-adjustment.html
2020-07-02T19:41:36
CC-MAIN-2020-29
1593655879738.16
[]
docs.magento.com
Wallboard The Wallboard serves to monitor the operator activity in the mluvii application in real time. You can choose the monitoring metrics on your own. The Wallboard function is especially appreciated for managing a larger call center. Create a new wallboard To create a new wallboard, click the “Manage” button below the table. Click on the green “New Wallboard +” button on the right, name the wallboard and set the individual data. A number of rows in the wallboard can be set on the page, thus number of variables appearing on one page (1 to 4). The total number of variables set is not limited, the wallboard may have multiple pages. The switching frequency between them can be set in the box Speed of page rotation (measured in seconds). Information that can be viewed: Login operators Operators in the following statuses - Online, Away, Filling in of Feedback Form (ACW), in active session. Current sessions - at the level of the configuration packages. Completed sessions - for a certain period of time. Average session time in seconds - for a certain period of time. SLA in % - i.e. accepting clients in a given time slot for a certain time. Number of clients in the queue - at the configuration level. The longest waiting in the queue - for a certain period of time. Session ended with flag - i.e. with a certain result of the session for a certain period of time. Average session rating by the client - for a certain period of time. Average waiting time in the queue - for a certain period of time. By pressing ( ) you can start the transfer. Tip: The button ( ) on each edge of the line is used to adjust individual wallboards.
https://docs.mluvii.com/guide/en/for-administrators/company-management/wallboard.html
2020-07-02T18:02:00
CC-MAIN-2020-29
1593655879738.16
[]
docs.mluvii.com
Resolving macros using the API If you need to process macro expressions inside text values in your custom code, use the MacroResolver.Resolve method. Specify the string where you want to resolve macros as the method's input parameter. For example: using CMS.MacroEngine; ... // Resolves macros in the specified string using a new instance of the global resolver string resolvedTextGlobal = MacroResolver.Resolve("The current user is: {% CurrentUser.UserName %}"); The method evaluates the macros using a new instance of the global resolver and automatically ensures thread-safe processing. Macro resolvers are system components that provide the processing of macros. The resolvers are organized in a hierarchy that allows child resolvers to inherit all macro options from the parent. The global resolver is the parent of all other resolvers in the system. Resolving localization macros If you only need to resolve localization macros in text, call the ResHelper.LocalizeString static method. using CMS.Helpers; ... // Resolves localization macros in text string localizedResult = ResHelper.LocalizeString("{$general.actiondenied$}");
https://docs.kentico.com/k9/macro-expressions/resolving-macros-using-the-api
2020-07-02T19:09:52
CC-MAIN-2020-29
1593655879738.16
[]
docs.kentico.com
Authorize.Net Direct Post - Deprecated Deprecation Notice Due to the Payment Service Directive PSD2 and the continued evolution of many APIs, this payment integration is at risk of becoming outdated and no longer security compliant in the future. Additionally, Authorize.Net has deprecated the Authorize.Net Direct Post payment method. We are recommending that you disable it in your Magento configuration and transition to the Authorize.Net Magento Marketplace extension. This integration will be removed from the Magento 2.4.0 release and has been deprecated from current versions of 2.3. For details about making a secure transition from deprecated payment integrations, see our DevBlog. Authorize.Net handles all steps in the transaction process, such as payment data collection, data submission, and response to the customer, while the customer remains in your store. Customer Workflow Customer chooses payment method. During checkout, the customer chooses Authorize.Net Direct Post (Deprecated) as the payment method. Customer submits the order. The customer enters the credit card information, reviews the order, and taps the Place Order button. Authorize.Net completes the transaction. Authorize.Net validates the card information, and processes the transaction. If successful, the customer is redirected to the order confirmation page. If the transaction fails, an error message appears, and the customer can try a different card, or choose a different payment method. Setting Up Authorize.Net Direct Post (Deprecated) Step 1: Enable Authorize.Net Direct Post (Deprecated) On the Admin sidebar, tap Stores. Then under Settings, choose Configuration. In the panel on the left under Sales, choose Payment Methods. Expand the Authorize.Net Direct Post (Deprecated) section. Then, do the following: Set Enabled to “Yes.” Set Payment Action to one of the following: Enter a Title to identify the Authorize.Net Direct Post (Deprecated) payment method during checkout. Enable Authorize.Net Direct Post (Deprecated) Step 2: Enter Your Authorize.Net Account Credentials Enter the following credentials from your Authorize.Net merchant account: - API Login ID - Transaction Key In the Merchant MD5 field, enter the hash value from your Authorize.Net merchant Account. The value is located on the Authorize.Net website at Account > Settings > Security Settings > MD5-Hash. Set New Order Status to one of the following: - Suspected Fraud - Processing To operate temporarily in a test environment, set Test Mode to “Yes.” When you are ready to process live transactions, set Test Mode to “No.” When testing the configuration in a sandbox, use only the credit card numbers that are recommended by Authorize.Net. Enter the Gateway URL that establishes the connect to the Authorize.Net server. The default value is: If you have received a temporary URL for test transactions, don’t forget to restore the original URL when you are ready to process live transactions. Step 3: Complete Payment and Notification Information Verify that Accepted Currency is set to “US Dollar.” To save messages transmitted between your store and the Authorize.Net Direct Post system, set Debug to “Yes.” To set the notification options, do the following: If you want Authorize.Net to send an order confirmation notification to the customer, set Email Customer to “Yes.” In the Merchant’s Email field, enter the email address where you want to receive notification of orders placed with Direct Post. Leave blank if you do not want to receive notification. To complete the payment options, do the following: In the Credit Card Types list, select each credit card that is accepted in your store. To require customers to enter a card verification value (CVV), set Credit Card Verification to “Yes.” Complete the Payment Information Set Payment from Applicable Countries to one of the following:.) Specify the Applicable Countries When complete, tap Save Config.
https://docs.magento.com/user-guide/payment/authorize-net-direct-post.html
2020-07-02T20:06:18
CC-MAIN-2020-29
1593655879738.16
[]
docs.magento.com
Windows Developer News for the Week of Oct 22nd This week, we published a new Hilo Chapter, refreshed the Windows 7 and Windows Server 2008 R2 Application Quality Cookbook, and created a new thread in the Power management for tracking power management issues. Hilo Updates Hilo Chapter 15 was published today! This chapter, Hilo Chapter 15: Using Windows HTTP Services covers the integration piece that was performed for Flickr support. The coolest thing about this update is that it demonstrates RESTful implementation and support in C++ which effectively shows how you could integrate RESTful services (Netflix, Facebook, and Amazon to name a few) into Windows applications in native code. Windows 7 and Windows Server 2008 R2 Application Quality Cookbook The latest update to the Windows 7 and Windows Server 2008 R2 Application Quality Cookbook has been released. This update is very minor but it’s worth calling out that this content is an EXCELLENT way to review projects you are developing to ensure they are good Samaritans in the larger Windows ecosystem. Power Management The Power Management team started a thread in the Power Management Developer Forums where you can report power management problems to Microsoft and ISVs. If you have an application that is preventing Windows from sleeping, please file a bug report in the forums and we’ll try and get it fixed. See Also · PDC 2010 Site – Sessions are going live next week · Learn how to Write a Windows App with Hilo
https://docs.microsoft.com/en-us/archive/blogs/seealso/windows-developer-news-for-the-week-of-oct-22nd
2020-07-02T20:01:09
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
Contents - Overview - Output Streams View Features - The View's Toolbar - Streams Shown in Standard versus ISD Mode - Select One Stream or All - Select Multiple Streams - Select Streams with ISD Enabled - Stream Selection is Persisted - The Details Pane - Clearing the View - Sort By Columns - Show or Suppress the Time Column - Copy Tuples to Clipboard - Scroll Lock - Filtering Tuples - Creating a New StreamBase Unit Test Class - Creating a New Output Streams View - Disable Dequeuing - Preference Settings - Related Topics The Output Streams view shows tuples as they are emitted from a running application. In the default layout, the Output Streams view is located at the bottom left corner of the SB Test/Debug perspective, sharing the bottom left Eclipse folder with the Input Streams and other views. The figure below shows an example Output Streams view of the running Bollinger Band sample, which is included with StreamBase. This figure illustrates several features of the Output Streams view, which are discussed in the next section. The Output Streams view has the features described in the following subsections: The buttons on the Output Streams view's toolbar are used as shown in the following illustration. The area to the left of the toolbar is used to display messages, such as the current status of the tuple filter. The use of each button is described in the following sections below: The Output Streams view shows the output of different sets of streams, depending on how you started the currently running application: Run. If you started the application with the button in Studio's toolbar, or with a run configuration, then by default, the view shows the output of all output streams. ISD Enabled. If you started the application with a run or debug configuration that enables intermediate stream dequeuing (ISD), then the view shows the output of all output streams, all input streams, and all intermediate streams. See Intermediate Stream Dequeuing for more information on intermediate streams. When you first start an application running in Studio, the Output Streams view is set for (All Output Streams) to show the output of all available streams in the running application, as described in the previous section. Use the drop-down list in the Stream field to select a single stream. This restricts the view to the output of the selected stream. The list of streams always includes the system.error stream. You may need to restrict the view to the output of two or more streams, but fewer than all streams in the application. In this case, click the select link on the right side of the Stream field. This opens the Output Stream Selector dialog, like the following example: Select two or more streams and click Stream field shows a comma-separated list of the selected stream names, including the error stream from the system container. When running or debugging a top-level application that references one or more modules, the Output Stream Selector dialog organizes the available output streams hierarchically, to help you locate the streams of interest. With intermediate stream dequeuing enabled, the Output Stream Selector dialog shows a larger list of streams whose output you can monitor. The following illustration shows the Output Stream Selector dialog for two cases of running the Bollinger Band sample installed with TIBCO® Spotfire Data Streams. You see the dialog on the left when running with the default run configuration. It shows only the three output streams defined in the application. You see the dialog on the right when running with intermediate stream dequeuing enabled. It shows the same three output streams plus the output port of all intermediate components as selectable streams. See Intermediate Stream Dequeuing for more information on intermediate streams. For each application or module run in Studio, the selection of output streams you select for viewing in the Output Streams view is saved, and Studio restores the same stream selection the next time you run the same application. The saved stream selections persist even if Studio is closed and reopened. When you launch another application for the first time, the stream selection in the Output Streams view is reset to All Streams. The Details pane is enabled by default when an application starts running. Click the button in the view's toolbar to disable and re-enable the Details pane. By default, nothing shows in the Details pane. Select any tuple row in the output table to see a tree display of the individual fields of the selected tuple. The figure above shows an example of a populated Details pane. If the selected tuple contains fields of the tuple data type, then by default, the Details pane shows each tuple field in a single row with a arrowhead on its left: Click each arrowhead to see the contents of each tuple field: Tuple fields can be nested inside other tuple fields to any depth. Nested tuples are shown with indentation: Fields with the list data type are shown with their contents in array form on a single line with an open arrowhead to its left. Click the arrowhead to see the individual elements of the list field, one per line: Thebutton on the bottom right of the view clears all tuples currently in the view. By default, the Output Streams view is cleared whenever you restart an application. Clear the Clear on Application Restart check box to preserve tuples between runs of an application. Click a column header to sort the output table lexicographically by that column. An up or down arrowhead above the column title shows the current ascending or descending sort order. Click the column header again to change the sort order. Sorting the output table makes the most sense when examining the output of an application after it has run and stopped. In this case, you might, for example, sort the table by the Output Stream column to group the tuples by stream. You might also sort the output table before saving portions of it to a file or to a CSV file. It is possible to click a column header while the application is running and still emitting output tuples. However, doing this is not guaranteed to produce useful results. For example, after you click a column header, newly emitted tuples are still placed at the bottom or top of the table (depending on sort order), and are not automatically sorted into place elsewhere in the table while the application is still emitting tuples. TIBCO Software recommends stopping or pausing the application before using the column sort feature. By default, the Output Streams view shows the time received for each tuple in the output grid. (For applications fed by a fast feed simulation, the timestamps in this column can be the same for dozens or even hundreds of rows.) You can suppress the Time column as follows: Select any row in the output grid. Right-click to display the context menu. Select Show Time column from the context menu. The check mark shows the current state of the Time column. You can copy the selected rows in the output grid to the system clipboard, either as plain text or in one of several formats. You can use the Copy as options to generate input or output data for a StreamBase JUnit test. Follow these steps: If necessary, sort the table to group together the tuples of interest. See Sort By Columns. Select one or more rows that you want to copy. Hold down the Ctrl key while clicking to select noncontiguous rows. To specify all rows in the grid, use Select All from the context menu, or press Ctrl+A. Right-click to display the context menu. Select Copy or one of the Copy as options to copy the selected rows, as described below. Use the various Copy options as follows. In all cases, examples are shown with long lines wrapped for publication clarity, but the lines are copied to the clipboard without wrapping. - Copy The selected rows are copied to the clipboard as plain text with a column header row, including both field names and field values. The Time column's value is included or not, depending of the setting of the Show Time column option. Time Output Stream Fields 10:42:41 OutputIBM CSV The selected rows are copied to the clipboard as a comma-separated value string, using the CSV standard rules. The Time column's value is included or not, depending of the setting of the Show Time column option. When used in the Input Streams view, this option generates a string that can be sent to the specified input stream using sbc enqueue or using a custom client application. 10:42:41 - Copy Tuples to Manual Input View This option offers a convenient way to re-run a tuple from an input stream. StockPriceInput symbol=DELL, price=35.52, date=2020-03-10 16:00:00.000-0400 - Copy as JSON The selected rows are copied to the clipboard as a JSON object in which double quotes are used to designate string field names and field values. { Single Quote JSON The selected rows are copied to the clipboard as a JSON object in which single quotes are used to designate string field names and field values. Use this option to generate JSON-formatted tuples for use in StreamBase JUnit tests. { StreamBase Unit Test snippet with CSV When used in the Output Streams view, the selected rows are copied to the clipboard wrapped in Expectercode using the CSVTupleMaker, ready for pasting into an existing StreamBase JUnit file. new Expecter(server.getDequeuer("OutputIBM")).expect( CSVTupleMaker.MAKER, " ); When used in the Input Streams view, the selected rows are wrapped in a getEnqueuer().enqueue()call using the CSVTupleMaker: server.getEnqueuer("InputStream1").enqueue( CSVTupleMaker.MAKER, "IBM,75.91,2005-04-28 16:00:00.000-0400" ); - Copy as StreamBase Unit Test snippet with JSON Same as the previous option, but the generated code uses the JSONSingleQuotesTupleMaker. In the Output Streams view: new Expecter(server.getDequeuer("OutputIBM")).expect( JSONSingleQuotesTupleMaker.MAKER, "{'}" ); In the Input Streams view: server.getEnqueuer("InputStream1").enqueue( JSONSingleQuotesTupleMaker.MAKER, "{'symbol':'IBM','price':75.91,'date':'2005-04-28 16:00:00.000-0400'}" ); Use the Scroll Lock button in the view's toolbar to control how tuples scroll in the Output Streams table: When Scroll Lock is off (the default): incoming tuple rows are added to the bottom of the table (or to the top of the table, depending on column sort order). The most recently received tuple is usually visible. When Scroll Lock is on: the table display remains fixed on the currently selected tuple while new tuples arrive to fill the tuple buffer. If the view's tuple buffer overflows, incoming tuples might begin scrolling again, even with Scroll Lock enabled. You can adjust the size of the view's tuple buffer as described in Preference Settings. You can narrow the list of displayed tuples by matching a search string against field names or against the contents of the Fields column. You can elect to show only matching tuples, or to show all tuples but with color highlighting for the matching tuples. Click the Filter button in the view's toolbar to open the Output Streams Filter dialog: - Select tuples containing a field whose name matches this pattern To match against a field name in tuples, check the Select tuples containing check box, and specify the name of a field. This option can be used, for example, when running an application with intermediate stream dequeuing enabled, where output from all incoming and intermediate streams is included. You can match against a field name to follow the progress of a particular field as tuples flow through various operators in your application. - Select tuples with the following pattern in the Fields column Use this option to match against particular data values in tuples. The specified string is matched against the entire Fields column. This means you must specify wildcards before and after your search pattern in order to match your pattern as a substring. The example shown above matches the exact string symbol=DELLin any tuple. You can use one or both of the above options in the same filter. For example, you might match against a particular field name with the Select tuples containing option, and then match against particular data in that field using the Select tuples with option. - Results: Show only selected Select this option to narrow the list of tuples in the Output Streams view. Only the tuples matching the specified filter settings are shown. - Results: highlight selected Select this option to continue showing all tuples, but show any matching tuples with color highlighting, as illustrated in the first figure above. The default highlight color is yellow; you can adjust the highlight color in Studio Preferences. Use the Create a new StreamBase Unit Test Class button to build an EventFlow Fragment Unit Test, which is a Java file based on the org.junit and com.streambase.sb.unittest packages. See the unit test page more information. Use the New Output Streams View button to create a copy of the currently selected Output Streams view. Use the Disable Dequeuing button to halt dequeuing from all streams in the currently running application. You might select this option, for example, if you were optimizing the throughput of a test feed simulation in StreamBase Studio. When dequeuing is disabled, the grid of recent output tuples stops being populated, but is still active. You can still scroll and select any row to see individual output tuples in the details pane. The following Studio preference settings affect the Output Streams view: - Tuple Buffer Size The default tuple buffer for the Output Streams view is 400 tuples. You can increase or decrease this value as required for your application and conditions. - Highlight Color The default highlight color for tuples matching a filter setting is yellow. Change the color with this preference setting. You can open the Preferences page containing these settings in two ways: Use the Menu button in the view's toolbar. The menu has a single item, Preferences, which directly opens the StreamBase Studio > Test/Debug page of the Preferences dialog. In Studio's main menu, select Test/Debug in the StreamBase Studio section.> and open
https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/studioref/applicationoutput.html
2020-07-02T19:56:53
CC-MAIN-2020-29
1593655879738.16
[]
docs.streambase.com
Connectivity Architecture Couchbase Server is a fully distributed database, making connection management and efficient communication key components of the architecture. This section provides information about client to cluster, node to node, cluster to cluster, and cluster to external products communications. It also describes the phases of establishing a connection. Client to Cluster Communication Client applications communicate with Couchbase Server through a set of access points tuned for the data access category such as CRUD operations, N1QL queries, and so on. Each access point supports clear text and encrypted communication ports. There are five main types of access points that drive the majority of client to server communications. For information on how a connection is established when a request from the client side is received, see Connectivity Phases. Node to Node Communication Nodes of the cluster communicate with each other to replicate data, maintain indexes, check health of nodes, communicate changes to the configuration of the cluster, and much more. Node to node communication is optimized for high efficiency operations and may not go through all the connectivity phases (authentication, discovery, and service connection). For more information about connectivity phases, see Client to Cluster Communication. Cluster to Cluster Communication Couchbase Server clusters can communicate with each other using the Cross Datacenter Replication (XDCR) capability. XDCR communication is set up from a source cluster to a destination cluster. For more information, see Cross Datacenter Replication. External Connector Communication Couchbase Server also communicates with external products through connectors. Couchbase has built and supports connectors for Spark, Kafka, Elasticsearch, SOLR, and so on. The community and other companies have also built more connectors for ODBC driver, JDBC driver, Flume, Storm, Nagios connectors for Couchbase, and so on. External connectors are typically built using the existing client SDKs, the direct service or admin APIs listed in the client to cluster communication section, or feed directly from the internal APIs such as the Database Change Protocol (DCP) API. For more information about Database Change Protocol, see Intra-cluster Replication. Connectivity Phases When a connection request comes in from the client side, the connection is established in three phases: authentication, discovery, and service connection. Authentication: In the first phase, the connection to a bucket is authenticated based on the credentials provided by the client. In case of Admin REST API, admin users are authenticated for the cluster and not just a bucket. Discovery: In the second phase, the connection gets a cluster map which represents the topology of the cluster, including the list of nodes, how data is distributed on these nodes, and the services that run on these nodes. Client applications using the SDKs only need to know the URL or address to one of the nodes in the cluster. Client applications with the cluster map discover all other nodes and the entire topology of the cluster. Service Connection: Armed with the cluster map, client SDKs figure out the connections needed to establish and perform the service level operations through key-value, N1QL, or View APIs. Service connections require a secondary authentication to the service to ensure the credentials passed on to the service have access to the service level operations. With authentication cleared, the connection to the service is established. At times, the topology of the cluster may change and the service connection may get exceptions on its requests to the services. In such cases, client SDKs go back to the previous phase to rerun discovery and retry the operation with a new connection.
https://docs.couchbase.com/server/5.0/architecture/connectivity-architecture.html
2020-07-02T17:53:28
CC-MAIN-2020-29
1593655879738.16
[]
docs.couchbase.com
DGL at a Glance¶ Author: Minjie Wang, Quan Gan, Jake Zhao, Zheng Zhang DGL is a Python package dedicated to deep learning on graphs, built atop existing tensor DL frameworks (e.g. Pytorch, MXNet) and simplifying the implementation of graph-based neural networks. The goal of this tutorial: - Understand how DGL enables computation on graph from a high level. - Train a simple graph neural network in DGL to classify nodes in a graph. At the end of this tutorial, we hope you get a brief feeling of how DGL works. This tutorial assumes basic familiarity with pytorch. Tutorial problem description¶ The tutorial is based on the “Zachary’s karate club” problem. The karate club is a social network that includes 34 members and documents pairwise links between members who interact outside the club. The club later divides into two communities led by the instructor (node 0) and the club president (node 33). The network is visualized as follows with the color indicating the community: The task is to predict which side (0 or 33) each member tends to join given the social network itself. Step 1: Creating a graph in DGL¶ Create the graph for Zachary’s karate club as follows: import dgl import numpy as np def build_karate_club_graph(): # All 78 edges are stored in two numpy arrays. One for source endpoints # while the other for destination endpoints. src = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 6, 6, 7, 7, 7, 7, 8, 8, 9, 10, 10, 10, 11, 12, 12, 13, 13, 13, 13, 16, 16, 17, 17, 19, 19, 21, 21, 25, 25, 27, 27, 27, 28, 29, 29, 30, 30, 31, 31, 31, 31, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 32, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33, 33]) dst = np.array([0, 0, 1, 0, 1, 2, 0, 0, 0, 4, 5, 0, 1, 2, 3, 0, 2, 2, 0, 4, 5, 0, 0, 3, 0, 1, 2, 3, 5, 6, 0, 1, 0, 1, 0, 1, 23, 24, 2, 23, 24, 2, 23, 26, 1, 8, 0, 24, 25, 28, 2, 8, 14, 15, 18, 20, 22, 23, 29, 30, 31, 8, 9, 13, 14, 15, 18, 19, 20, 22, 23, 26, 27, 28, 29, 30, 31, 32]) # Edges are directional in DGL; Make them bi-directional. u = np.concatenate([src, dst]) v = np.concatenate([dst, src]) # Construct a DGLGraph return dgl.DGLGraph((u, v)) Print out the number of nodes and edges in our newly constructed graph: G = build_karate_club_graph() print('We have %d nodes.' % G.number_of_nodes()) print('We have %d edges.' % G.number_of_edges()) Out: We have 34 nodes. We have 156 edges. Visualize the graph by converting it to a networkx graph: import networkx as nx # Since the actual graph is undirected, we convert it for visualization # purpose. nx_G = G.to_networkx().to_undirected() # Kamada-Kawaii layout usually looks pretty for arbitrary graphs pos = nx.kamada_kawai_layout(nx_G) nx.draw(nx_G, pos, with_labels=True, node_color=[[.7, .7, .7]]) Step 2: Assign features to nodes or edges¶ Graph neural networks associate features with nodes and edges for training. For our classification example, since there is no input feature, we assign each node with a learnable embedding vector. # In DGL, you can add features for all nodes at once, using a feature tensor that # batches node features along the first dimension. The code below adds the learnable # embeddings for all nodes: import torch import torch.nn as nn import torch.nn.functional as F embed = nn.Embedding(34, 5) # 34 nodes with embedding dim equal to 5 G.ndata['feat'] = embed.weight Print out the node features to verify: # print out node 2's input feature print(G.ndata['feat'][2]) # print out node 10 and 11's input features print(G.ndata['feat'][[10, 11]]) Out: tensor([-2.4510, -0.6563, -0.7395, 1.6401, 2.2282], grad_fn=<SelectBackward>) tensor([[-1.6371, 0.1195, -0.5801, 0.6771, -1.2104], [-0.0391, -0.2906, -0.7418, -0.0126, -0.3276]], grad_fn=<IndexBackward>) Step 3: Define a Graph Convolutional Network (GCN)¶ To perform node classification, use the Graph Convolutional Network (GCN) developed by Kipf and Welling. Here is the simplest definition of a GCN framework. We recommend that you read the original paper for more details. - At layer \(l\), each node \(v_i^l\) carries a feature vector \(h_i^l\). - Each layer of the GCN tries to aggregate the features from \(u_i^{l}\) where \(u_i\)‘s are neighborhood nodes to \(v\) into the next layer representation at \(v_i^{l+1}\). This is followed by an affine transformation with some non-linearity. The above definition of GCN fits into a message-passing paradigm: Each node will update its own feature with information sent from neighboring nodes. A graphical demonstration is displayed below. In DGL, we provide implementations of popular Graph Neural Network layers under the dgl.<backend>.nn subpackage. The GraphConv module implements one Graph Convolutional layer. from dgl.nn.pytorch import GraphConv Define a deeper GCN model that contains two GCN layers: class GCN(nn.Module): def __init__(self, in_feats, hidden_size, num_classes): super(GCN, self).__init__() self.conv1 = GraphConv(in_feats, hidden_size) self.conv2 = GraphConv(hidden_size, num_classes) def forward(self, g, inputs): h = self.conv1(g, inputs) h = torch.relu(h) h = self.conv2(g, h) return h # The first layer transforms input features of size of 5 to a hidden size of 5. # The second layer transforms the hidden layer and produces output features of # size 2, corresponding to the two groups of the karate club. net = GCN(5, 5, 2) Step 4: Data preparation and initialization¶ We use learnable embeddings to initialize the node features. Since this is a semi-supervised setting, only the instructor (node 0) and the club president (node 33) are assigned labels. The implementation is available as follow. inputs = embed.weight labeled_nodes = torch.tensor([0, 33]) # only the instructor and the president nodes are labeled labels = torch.tensor([0, 1]) # their labels are different Step 5: Train then visualize¶ The training loop is exactly the same as other PyTorch models. We (1) create an optimizer, (2) feed the inputs to the model, (3) calculate the loss and (4) use autograd to optimize the model. import itertools optimizer = torch.optim.Adam(itertools.chain(net.parameters(), embed.parameters()), lr=0.01) all_logits = [] for epoch in range(50): logits = net(G, inputs) # we save the logits for visualization later all_logits.append(logits.detach()) logp = F.log_softmax(logits, 1) # we only compute loss for labeled nodes loss = F.nll_loss(logp[labeled_nodes], labels) optimizer.zero_grad() loss.backward() optimizer.step() print('Epoch %d | Loss: %.4f' % (epoch, loss.item())) Out: Epoch 0 | Loss: 0.5709 Epoch 1 | Loss: 0.5381 Epoch 2 | Loss: 0.5062 Epoch 3 | Loss: 0.4742 Epoch 4 | Loss: 0.4429 Epoch 5 | Loss: 0.4122 Epoch 6 | Loss: 0.3817 Epoch 7 | Loss: 0.3518 Epoch 8 | Loss: 0.3218 Epoch 9 | Loss: 0.2928 Epoch 10 | Loss: 0.2650 Epoch 11 | Loss: 0.2386 Epoch 12 | Loss: 0.2138 Epoch 13 | Loss: 0.1909 Epoch 14 | Loss: 0.1696 Epoch 15 | Loss: 0.1502 Epoch 16 | Loss: 0.1324 Epoch 17 | Loss: 0.1163 Epoch 18 | Loss: 0.1019 Epoch 19 | Loss: 0.0892 Epoch 20 | Loss: 0.0780 Epoch 21 | Loss: 0.0681 Epoch 22 | Loss: 0.0595 Epoch 23 | Loss: 0.0520 Epoch 24 | Loss: 0.0455 Epoch 25 | Loss: 0.0399 Epoch 26 | Loss: 0.0350 Epoch 27 | Loss: 0.0308 Epoch 28 | Loss: 0.0272 Epoch 29 | Loss: 0.0241 Epoch 30 | Loss: 0.0214 Epoch 31 | Loss: 0.0190 Epoch 32 | Loss: 0.0170 Epoch 33 | Loss: 0.0153 Epoch 34 | Loss: 0.0137 Epoch 35 | Loss: 0.0124 Epoch 36 | Loss: 0.0112 Epoch 37 | Loss: 0.0102 Epoch 38 | Loss: 0.0094 Epoch 39 | Loss: 0.0086 Epoch 40 | Loss: 0.0079 Epoch 41 | Loss: 0.0073 Epoch 42 | Loss: 0.0068 Epoch 43 | Loss: 0.0063 Epoch 44 | Loss: 0.0059 Epoch 45 | Loss: 0.0055 Epoch 46 | Loss: 0.0052 Epoch 47 | Loss: 0.0049 Epoch 48 | Loss: 0.0046 Epoch 49 | Loss: 0.0044 This is a rather toy example, so it does not even have a validation or test set. Instead, Since the model produces an output feature of size 2 for each node, we can visualize by plotting the output feature in a 2D space. The following code animates the training process from initial guess (where the nodes are not classified correctly at all) to the end (where the nodes are linearly separable). import matplotlib.animation as animation import matplotlib.pyplot as plt def draw(i): cls1color = '#00FFFF' cls2color = '#FF00FF' pos = {} colors = [] for v in range(34): pos[v] = all_logits[i][v].numpy() cls = pos[v].argmax() colors.append(cls1color if cls else cls2color) ax.cla() ax.axis('off') ax.set_title('Epoch: %d' % i) nx.draw_networkx(nx_G.to_undirected(), pos, node_color=colors, with_labels=True, node_size=300, ax=ax) fig = plt.figure(dpi=150) fig.clf() ax = fig.subplots() draw(0) # draw the prediction of the first epoch plt.close() The following animation shows how the model correctly predicts the community after a series of training epochs. ani = animation.FuncAnimation(fig, draw, frames=len(all_logits), interval=200) Next steps¶ In the next tutorial, we will go through some more basics of DGL, such as reading and writing node/edge features. Total running time of the script: ( 0 minutes 0.565 seconds) Gallery generated by Sphinx-Gallery
https://docs.dgl.ai/tutorials/basics/1_first.html
2020-07-02T18:14:39
CC-MAIN-2020-29
1593655879738.16
[array(['https://data.dgl.ai/tutorial/img/karate-club.png', 'https://data.dgl.ai/tutorial/img/karate-club.png'], dtype=object) array(['../../_images/sphx_glr_1_first_001.png', '../../_images/sphx_glr_1_first_001.png'], dtype=object) array(['https://data.dgl.ai/tutorial/1_first/mailbox.png', 'mailbox'], dtype=object) array(['https://data.dgl.ai/tutorial/1_first/karate0.png', 'https://data.dgl.ai/tutorial/1_first/karate0.png'], dtype=object) array(['https://data.dgl.ai/tutorial/1_first/karate.gif', 'https://data.dgl.ai/tutorial/1_first/karate.gif'], dtype=object)]
docs.dgl.ai
Source Code Fundamentals: Script Inclusion When creating large applications or building component libraries, it is useful to be able to break up the source code into small, manageable pieces each of which performs some specific task, and which can be shared somehow, and tested, maintained, and deployed individually. For example, a programmer might define a series of useful constants and use them in numerous and possibly unrelated applications. Likewise, a set of class definitions can be shared among numerous applications needing to create objects of those types. An include file is a file that is suitable for inclusion by another file. The file doing the including is the including file, while the one being included is the included file. A file can be either an including file or an included file, both, or neither. The recommended way to approach this is to use an autoloader - however, first you need to include the autoloader itself. It is important to understand that unlike the C/C++ (or similar) preprocessor, file inclusion in Hack is not a text substitution process. That is, the contents of an included file are not treated as if they directly replaced the inclusion operation source in the including file. The require_once() directive is used for this: namespace MyProject; require_once(__DIR__.'/../vendor/autoload.hack'); <<__EntryPoint>> function main(): void { \Facebook\AutoloadMap\initialize(); someFunction(); } The name used to specify an include file may contain an absolute or relative path; absolute paths are strongly recommended, using the __DIR__ constant to resolve paths relative to the current file. require_once() will raise an error if the file can not be loaded (e.g. if it is inaccessible or does not exist) , and will only load the file once, even if require_once() is used multiple times with the same file. Future Changes We expect to make autoloading fully-automatic, and remove inclusion directives from the language. Legacy Issues For relative paths, the configuration directive include_path is used to resolve the include file's location. It is currently possible (though strongly discoraged) for top-level code to exist in a file, without being in a function. In this cases, including a file may execute code, not just import definitions. Several additional directives exist, but are strongly discouraged: require(): like require_once(), but will potentially include a file multiple times include(): like require(), but does not raise an error if the file is inaccessible include_once(): like require_once(), but does not raise an error if the file is inaccessible.
https://docs.hhvm.com/hack/source-code-fundamentals/script-inclusion
2020-07-02T19:27:19
CC-MAIN-2020-29
1593655879738.16
[]
docs.hhvm.com
My next favorite role at Microsoft Corporation India as an IT Pro Evangelist
https://docs.microsoft.com/en-us/archive/blogs/aviraj/my-next-favorite-role-at-microsoft-corporation-india-as-an-it-pro-evangelist
2020-07-02T20:08:03
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
An “secret” SSIS XML Destination Provider you might not found yet (Sample code included at the end of the post) The initiator for this post was Dan Atkins who wanted to create a feed from relational data to consume it directly from a created gadget. Where can I find that in the toolbox ? First of all, you won’t a XML destination adapter as of the shipped components in SQL Server 2005 and 2008. There are for sure third party components which can directly convert data from the data pipeline to defined XML but sometimes it is much easier than that and you just want to create an XML file from any data source which is able to produce XML look-a-like data. What does that mean ? Let me show you in a quick sample. Many people are not aware of the great XML handling relational databases like SQL Server are capable of. They can generate XML data from a relational set / query and give you the string representation or the binary data to work with. The older brother of XML So the destination should be a XML file, right ? How would you describe a XML file in comparison to any other file type like a Word-Document ? Well, compared to a Word document, XML can be opened and read in plain text with any reader like notepad. At the end it is simply a flat file with clear text data. The older brother of XML files is a CSV file which can be produced by SSIS using a Flat File destination. Not touching the logic of XML files, it can be compared to a CSV with one column of data (That is really a high-level view :-) But that is the direction this sample will talk about. We want to get data from a source that can produce XML Data representations (which can also be script tasks) and create an XML file from that. (See my former blog post on that here) Creating the sample SSIS package The source For that I create a SSIS package with a simple OLEDB source. As I wanted to make it easily reproducible for you without having the need to create a Northwind or Adventureworks database on your machine, I used the new feature of row constructors in SQL Server 2008 which is able to create a table on the fly within a query (very handy if you don’t want to persist static data which is only used for one single purpose). Notice that I created a full XML set with a root and several nodes. If you execute this in the execution engine of your choice, you will already get a nice XML representation. Depending on your needs, you might want to put some data in attributes instead of nodes, but that is all described in he blog entry below. Notice that I put a SELECT ( XMLQueryHere) AS YourColumn in the query, as this will directly bring back the text representation of the XML to the output. Without that you will get binary data (System.Byte[]) which might not be the right choice in that situation. I addition, the created column names will have GUIDs within if you do not use this notation making it hard to have a predictable column name for the mapping later on. The target The target is even simpler than the source. Map the output of the source to the flat file destination and open the editor of the flat file destination. It is a flat file destination (created as a UNICODE file) create manually a column of type DT_NTEXT (1). Deselect the Option “Column headers in the first row” to get the pure value of the XML. Navigate to the Flat file destination adapter and map it in the Mappings section the XML input column to the destination flat file column and you are already done. The result Running that will bring you the pure XML created by the relational engine (in t´hat case SQL Server). I a aware that this isn’t the 100% perfect pipeline version of the XML adapter, but sometimes this is already enough to make data interchangeable with other partners and prevent you from using bcp and dynamic SQL execution at all. The sample SSIS package can be downloaded here.
https://docs.microsoft.com/en-us/archive/blogs/jenss/an-secret-ssis-xml-destination-provider-you-might-not-found-yet
2020-07-02T20:00:36
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
New TOP500 list announced at ISC07 in Dresden The 29th version of the list of the 500 supercomputers will be announced later this week in Dresden. Dr. Erich Strohmaier, Computer Scientist, Lawrence Berkeley National Laboratory, USA, will share the new list during his opening session "Highlights of the 29th TOP500 List". Last time Microsoft was on the TOP500 with CCS for the first time I believe. Let's see what happens this year. You can read more about the Top500 here and about Dr. Strohmaier at ISC'07.
https://docs.microsoft.com/en-us/archive/blogs/volkerw/new-top500-list-announced-at-isc07-in-dresden
2020-07-02T20:13:05
CC-MAIN-2020-29
1593655879738.16
[]
docs.microsoft.com
ThoughtSpot uses emails to send critical notifications to ThoughtSpot Support. A relay host for SMTP traffic routes the alert and notification emails coming from ThoughtSpot through an SMTP email server.: $ tscli smtp set-relayhost <IP_address> ThoughtSpot uses port 25 to connect to the relay host. If port 25 is blocked in your environment, contact ThoughtSpot Support to use a custom port..
https://docs.thoughtspot.com/5.3/admin/setup/set-up-relay-host.html
2020-07-02T19:14:23
CC-MAIN-2020-29
1593655879738.16
[]
docs.thoughtspot.com
The Webhooks API allows you to subscribe to events happening in a HubSpot account with your integration installed. Rather than making an API call when an event happens in a connected account, HubSpot can send an HTTP request to an endpoint you configure. You can configure subscribed events in your app’s settings or using the endpoints detailed below. Webhooks can be more scalable than regularly polling for changes, especially for apps with a large install base. Using the Webhooks API requires the following: Webhooks are set up for a HubSpot app, not individual accounts. Any accounts that install your app (by going through the OAuth flow) will be subscribed to its webhook subscriptions. In order to use webhooks, your app will need to be configured to require the contacts scope. This scope must be set up before you will be able to create any any webhook subscriptions, either from your developer account in the app settings, or when using the Webhooks API to configure subscriptions. See the OAuth documentation for more details about scopes and setting up the authorization URL for your app. If your app is already using webhooks, you will not be able to remove the contacts scope until you remove all webhook subscriptions from your app. Note: Webhook settings can be cached for up to five minutes. When making changes to the webhook URL, rate limits, or subscription settings, it may take up to five minutes to see your changes go into effect. Before setting up your webhook subscriptions, you need to specify a URL to send those notifications to. Additionally, you can also tune the rate at which your endpoint can handle requests. Setting this rate limit helps us send you notifications as fast as possible without putting too much load on your API. If you set this limit too low, you may find that notifications timeout if there are too many notifications being sent to your API such that this limit is saturated for more than a few seconds. This will result in delays before getting notifications. If you set this limit too high, you may saturate the resources available to your endpoint which could result in slow responses, notification delays, or result in your endpoint becoming unresponsive. You can manage your URL and rate limit settings via your app’s configuration page in your developer account: In the Apps dashboard, select the app that you'd like to set up webhooks for. Select the Webhooks left nav item. This screen provides an interface you can use to set the target URL and event throttling limit. These endpoints allow you to programmatically set webhook settings for an app. It is recommended that you use the UI to get your apps set up. You will need to use your developer API key when making requests to these endpoints. The settings object has the following fields: webhookUrl - The URL where we'll send webhook notifications. Must be served over HTTPS. maxConcurrentRequests - The concurrency limit for this URL, as discussed in "Webhook URL and Concurrency Limit" The following endpoint access the webhook settings current set up for your application, if any: GET{appId}/settings The result will look something like: { "webhookUrl": "", "maxConcurrentRequests": 20 } To modify these settings you can use the following endpoint: PUT{appId}/settings With a request body like: { "webhookUrl": "", "maxConcurrentRequests": 25 } Validation: webhookUrl must be a valid URL served over HTTPS. maxConcurrentRequests must be a number greater than five. Once you’ve set up your webhook URl and rate limit settings, you’ll need to create one or more subscriptions. Webhook subscriptions tell HubSpot which events your particular app would like to receive. We currently support the following subscription types: Contact creations Contact deletions Privacy compliant contact deletions - see below for more details Contact property changes Company creations Company deletions Company property changes Deal creations Deal deletions Deal property changes A couple things to keep in mind about subscriptions: property change subscriptions. These properties are: Contact properties: days_to_close recent_conversion_event_name recent_conversion_date first_conversion_event_name first_conversion_date num_unique_conversion_events num_conversion_events Deal properties: num_associated_contacts Subscriptions apply to all customers that have installed your integration. This means that you only need to specify what subscriptions you need once. Once you've enabled a subscription for an application, it will automatically start getting webhooks for all customers that have installed your application, and you will automatically start getting webhooks from any new customers that install your integration going forward. There can be up to a five minute delay between creating or changing your subscriptions and those changes taking effect. You can create webhook subscriptions via your developer account: In the developer Apps dashboard, select the app that you'd like to send webhooks for. Select the "Webhook subscriptions" nav item. Click “Create subscription” Select the object type (i.e. contact, company, deal) and event type (i.e. creation, change, deletion). Note: Deleted for privacy can only be selected when "contacts" is the only selected object type. (Optional) For property change events, select the property in question. Note: You can also manually enter property names, allowing you to use custom properties. NOTE: New subscriptions are created in the paused state. You will need to activate the subscription for webhooks to send. These endpoints allow you to programmatically create subscriptions. There is nothing in these endpoints that you cannot do via the UI, and the UI is the preferred way to set up your app. You will need to use your developer API key when making requests to these endpoints. A subscription object has the following fields: id - A number representing the unique ID of a subscription. createdAt - When this subscription was created, this is a millisecond timestamp. createdBy - The userId of the user that created this subscription. enabled - Whether or not this subscription is currently active and triggering notifications. subscriptionDetails - This describes what types of events this subscription is listening for subscriptionType - A string representing what type of subscription this is--as defined in 'Subscription Types` below. propertyName - Only needed for property-change types. The name of the property to listen for changes on. This can only be a single property name. The subscriptionType property can be one of the following values: contact.creation - To get notified if any contact is created in a customer's account. contact.deletion - To get notified if any contact is deleted in a customer's account. contact.privacyDeletion- To get notified if a contact is deleted for privacy compliance reasons. See below for more details. contact.propertyChange - To get notified if a specified property is changed for any contact in a customer's account. company.creation - To get notified if any company is created in a customer's account. company.deletion - To get notified if any company is deleted in a customer's account. company.propertyChange - To get notified if a specified property is changed for any company in a customer's account. deal.creation - To get notified if any deal is created in a customer's account. deal.deletion - To get notified if any deal is deleted in a customer's account. deal.propertyChange - To get notified if a specified property is changed for any deal in a customer's account. To retrieve the list of subscriptions: GET{appId}/subscriptions The response will be an array of objects representing your subscriptions. Each object will include information on the subscription like the ID, create date, type, and whether or not it’s currently enabled. Here’s what an example response would look like: [ { "id": 25, "createdAt": 1461704185000, "createdBy": 529872, "subscriptionDetails": { "subscriptionType": "contact.propertyChange", "propertyName": "lifecyclestage" }, "enabled": true }, { "id": 59, "createdAt": 1462388498000, "createdBy": 529872, "subscriptionDetails": { "subscriptionType": "company.creation" }, "enabled": false }, { "id": 108, "createdAt": 1463423132000, "createdBy": 529872, "subscriptionDetails": { "subscriptionType": "deal.creation" }, "enabled": true } ] To create new subscriptions: With a request body like: { "subscriptionDetails": { "subscriptionType": "company.propertyChange", "propertyName": "companyname" }, "enabled": false } Where the fields in this request body match those defined in "Subscription Fields". However, with this endpoint you cannot specify id, createdAt, or createdBy, as those fields are set automatically. Validation: subscriptionType must be a valid subscription type as defined in the above section on "Subscription Types". propertyName must be a valid property name. If a customer has no property defined that matches this value, then this subscription will not result in any notifications. enabled must be true or false. You can only update the enabled flag of a subscription. To do so you can use the following endpoint: PUT{appId}/subscriptions/{subscriptionId} Request body: { "enabled" : false }{ "enabled" : false } To delete a subscription you can call the following endpoint: DELETE{appId}/subscriptions/{subscriptionId} The endpoint at the URL you specify in webhooks settings will receive requests like the following: Method type: Headers: Example request body: [ { } ] Request fields: objectId : The ID of the object that was created/changed/deleted. For contacts this is the vid; for companies, the companyId; and for deals the dealId. propertyName : This is only sent when the notification is for a property change. This is the name of the property that was changed propertyValue : This is only sent when the notification is for a property change. This is the new value that was set for this property that triggered this notification. changeSource : The source of this change. Can be any of the change sources that you find on contact property histories. eventId : The unique ID of the event that triggered this notification. subscriptionId : The ID of the subscription that caused us to send you a notification of this event. portalId : The customer's portalId that this event came from. appId : The ID of your application. (In case you have multiple applications pointing to the same webhook URL.) occurredAt : When this event occurred, as a millisecond timestamp. subscriptionType : The type of event this notification is for. See the list of subscription types in the above section on "Subscription Types". attemptNumber : Which attempt this is to notify your service of this event (starting at 0). If your service times-out or throws an error as described in "Retries" below, we will attempt to send the notification to your service again. NOTE ON BATCHING: As shown above, you will receive multiple notifications in a single request. The batch size can vary, but will be under 100 notifications. We will send multiple notifications only when a lot of events have occurred within a short period of time. For example, if you've subscribed to new contacts and a customer imports a large number of contacts, we will send you the notifications for these imported contacts in batches and not one-per-request. NOTE ON ORDER: We do not guarantee that you get these notifications in the order they occurred. Please use the occurredAt property for each notification to determine when the notification occurred. NOTE ON UNIQUENESS: We do not guarantee that you will only get a single notification for an event. Though this should be rare, it is possible that we will send you the same notification multiple times. HubSpot users have the ability to permanently delete a contact record to comply with privacy laws. Please see the following help article for more details on this feature: You can subscribe to the contact.privacyDeletion subscription type to receive webhook notifications when a user performs a privacy compliant contact deletion. Note: Webhooks use the v1 version of the X-HubSpot-Signature header. Please see this page for more details on validating this version of the signature, and this page for more details about validating requests in general. To ensure that the requests you're getting at your webhook endpoint are actually coming from HubSpot, we populate a X-HubSpot-Signature header with a SHA-256 hash of the concatenation of the app-secret for your application and the request body we someone else who knows your application secret. It's important to keep this value secret.) If these values do not match, than this request may have been tampered with in-transit or someone may be spoofing webhook notifications to your endpoint. If your service has problems handling notifications at any time, we will attempt to re-send failed notifications up to 10 times. We will retry in the following cases: Retries will occur with an exponential backoff based on the next attemptNumber. So the first retry will happen in 2 seconds, the second retry in 4 seconds, the third retry in 8 seconds, etc. You can create a maximum of 1000 subscriptions per application. If you attempt to create more you will receive a 400 bad request in return with the following body: { "status": "error", "message": "Couldn't create another subscription. You've reached the maximum number allowed per application (1000).", "correlationId": "2c9beb86-387b-4ff6-96f7-dbb486c00a95", "requestId": "919c4c84f66769e53b2c5713d192fca7" }
https://legacydocs.hubspot.com/docs/methods/webhooks/webhooks-overview
2020-07-02T19:47:41
CC-MAIN-2020-29
1593655879738.16
[]
legacydocs.hubspot.com
Refer to Amazon Sales Channel 4.0+ for updated documentation. Onboarding: Listing Preview Step 4 of 7 for Onboarding Amazon Sales Channel Once you continue to the Listing Preview (step 4), you cannot change the website selected in your listing rules (step 3) for this integration. To change the website after leaving step 3, you must exit this store integration and begin a new one. At this step of onboarding, you have created your listing rules that determine which of your Magento catalog products are eligible to be listed on Amazon. Your current Amazon listings are compared against your rules, based on the conditions you defined. You can then review which products will move to an ineligible status based on your current Amazon Seller Central account, which products will move from an ineligible state back to an eligible state, and which products will be New Amazon Listings and added to your Amazon listing from your eligible Magento catalog. Listing Preview allows you to preview your potential Amazon listings and make any necessary adjustments to your listing rules. If you need to adjust your listing rules, click Listing Rules on the store dashboard. Your potential Amazon listings will populate on the Listing Preview page in one of three tabs: Ineligible Listings: Products listed are not eligible for Amazon listing based on your current listing rules. Ineligible products will not be published to Amazon. If an ineligible product is already listed on Amazon and you match the Amazon listing to your Magento catalog product, the quantity for the Amazon listing will change to 0to prevent sales of the product. To remove a listing manually, see Ending an Amazon Listing. Products that are not eligible by Amazon requirements are not listed here. Those products are listed on the Inactive Listings tab. Eligible Listings: Products listed are eligible for Amazon listing based on your current listing rules and are eligible by Amazon requirements. This list includes your existing Amazon listings that will import (if you have Import Third Party Listings set to Import Listingin your Listing Settings). New Listings: Products listed include your Magento catalog products that are newly eligible for Amazon listing based on your current listing rules and will create new Amazon listings. See Listing Preview for column descriptions. Listing Preview Listing Preview Workflow Listing Preview Workflow Continue to Step 5 of Onboarding
https://docs.magento.com/user-guide/sales-channels/amazon/ob-listing-preview.html
2020-07-02T19:59:50
CC-MAIN-2020-29
1593655879738.16
[array(['/user-guide/images/images/sales-channels/amazon/onboarding-step-listing-preview.png', None], dtype=object) ]
docs.magento.com
Contents The disk space requirements for TIBCO Spotfire Data Streams can be considerable, and fall into the following categories. - Installation Directory Recent releases of Spotfire Data Streams have required approximately: 2.0 GB on Windows 1.7 GB on macOS 2.6 GB on Linux - Installer Files The installer files for Spotfire Data Streams require approximately the following amounts, plus several hundred megabytes of temporary space during installation. The installer files can be removed after successful installation. 1.3 GB on Windows 1.4 GB on macOS 1.7 GB on Linux - StreamBase Studio Workspace Disk use for your StreamBase Studio workspace for developing applications depends on the number of projects you maintain and volume of data they store. Expect to need another gigabyte for a large number of projects — but see Node Directories, next. - Node Directories Running a StreamBase EventFlow or LiveView fragment or application in the StreamBase Runtime creates a node directory that consumes disk space. The node directory includes a memory-mapped file, ossm, that represents the memory potentially consumable by the node. On Windows, Linux, and recent macOS versions, this is a sparse file, whose size appears to be equal to the full memory size configured for the node. However, the disk space actually consumed by this file is only the amount actually consumed. However, macOS Sierra 10.12 and earlier versions did not support the APFS file system, and therefore did not support sparse files. On those systems the ossmfiles in node directories actually do consume the amount of disk space they appear to consume. In general, node directories are temporary and are automatically removed when the node is removed. Node directories are preserved in case of error so that you can analyze log files stored there. Running fragments and applications from the command line creates one node directory per node, and you specify the location of the node directory at node installation time. This means you can specify a fast, local disk with plenty of room for these node directories. TIBCO does not recommend using a network-attached location for node directories. Running fragments and applications in Studio creates one node directory per node in the Studio workspace, in a folder named .nodes, which is hidden by default. As with command line node directories, Studio-launched node directories are temporary and are automatically removed at Studio exit time for successfully removed nodes. However, the node directories for failed nodes are preserved for analysis, which can increase the disk space consumed by your Studio workspace. TIBCO recommends inspecting any node directories left by failed nodes as soon as practical after the failure, and creating a snapshot zip file, so that the failed node directory can be removed. - Local Maven Repository StreamBase uses the Maven software build system, which automatically retrieves dependencies from a network location and stores them in a local disk repository. The local Maven repository is placed by default in the .m2/repositoryfolder of your home directory. On first use of Studio, it inspects the local repository and populates it with a base set of artifacts (usually JAR files) that are required for most EventFlow and LiveView development for the current release of StreamBase. When you load and run a project, Studio loads more artifacts to support the project, especially one that uses any adapters. It does not take long for the local repository to take up over one gigabyte. The more projects you load and run, more artifacts may be downloaded into the local repository. Each StreamBase release has its own version of the JAR file artifacts, which are loaded parallel to any existing artifacts, with any similar file names distinguished by release number. Also remember that the Maven repository is a universal resource, used by any other programs you install that are based on Maven. In short, prepare for your local Maven repository to grow to multiple gigabyte sizes. TIBCO strongly recommends placing your local Maven repository on a fast, local, solid state disk. If your organization configures your home directory to be on a network server, this can result in slower performance when configuring and building StreamBase and LiveView applications. You may prefer to specify a non-default location for your local Maven repository in cases like the following: Your organization places your home directory on a network drive and you want to move the repository to a fast local drive. You are running out of disk space on your home directory's drive. You want to keep different local repositories for different purposes while developing a StreamBase application, or to keep your StreamBase-related development separate from other Maven projects. The simplest way to move your local Maven repository is with a single line in a ~/.m2/settings.xml file. For example: <settings xmlns="" xmlns: <localRepository>${user.home}/tempus/fugit/repository</localRepository> </settings> The <localRepository> element of this settings.xml file moves the local repository to the tempus/fugit subdirectory of your home directory. The <localRepository> element must specify a full absolute path, or use the ${user.home} variable. This method moves the local repository for all Maven purposes until changed in the settings.xml file. Notice that we still use ~/.m2 to contain the settings.xml file itself. This standard location allows both Studio and command line mvn to locate your settings. You can leave the settings.xml file as the only contents of the ~/.m2 folder and still have the gigabytes of the repository folder placed elsewhere. To identify the repository location that StreamBase Studio is currently using, open Local Repository field.> (Windows) or > (Mac). Then open > and look in the By default, Maven downloads publicly available artifacts from Maven Central, a large distributed server complex maintained by the Apache Maven project. The server is very busy, and requires direct Internet access from your workstation outside your organization's firewall. Most organizations with more than a few developers using Maven will want to configure a Repository Manager, which is server software you install locally to act as a proxy server for the public Maven repositories. A Repository Manager is especially helpful for organizations with security requirements that restrict open access to the Internet. See this Best Practice page in the Apache Maven documentation for a list of Repository Manager software providers, both commercial and open source. As described above, StreamBase Studio populates the local repository automatically when Studio first starts in a new workspace, with continuous updates as needed. However, there are conditions under which you may prefer a manual approach. StreamBase provides two ways to help you manage your local Maven repository. - Help > Synchronize StreamBase Maven Artifacts This Studio menu option forces Studio to rerun its population of the local Maven repository with all StreamBase-related artifacts. You can run this command after you have moved the local repository as described above, or to replace some of the contents of the repository that were incorrectly deleted. This command runs a smart merge that checks to see what files are present and what version they have before recopying the files. Only missing files are recopied. - The epdev Command Use the epdev command to rerun Studio's local repository update in the same way as the > menu option. You can also use epdev to install local copies of dependencies that would normally be retrieved from Maven Central or from other remote locations, to prepare a computer for offline use while traveling. See epdev for syntax and further options.
https://docs.streambase.com/latest/topic/com.streambase.sfds.ide.help/data/html/install/disk-space.html
2020-07-02T19:34:56
CC-MAIN-2020-29
1593655879738.16
[]
docs.streambase.com
Deprecations Endpoints currently sunsetting The following endpoints are being gradually disabled and will stop working completely after the date indicated in the table. Before the sunset date, those endpoints will randomly, at an increasing rate, return 410 Gone responses. After the sunset date endpoints will return an error response to all requests. Deprecated endpoints The following endpoints have been deprecated which means they should not be used in new applications. Instead, new endpoints should be used as indicated in the table below.
https://docs.vendhq.com/deprecations
2020-07-02T18:04:18
CC-MAIN-2020-29
1593655879738.16
[]
docs.vendhq.com
Common Problems with DaCHS¶ Contents This document tries to discuss some error messages you might encounter while running DaCHS. The rough idea is that when you can grep in this file and get some deeper insight as to what happened and how to fix it. We freely admit that some error messages DaCHS spits out are not too helpful. Therefore, you’re welcome to complain to the authors whenever you don’t understand something DaCHS said. Of course, we’re grateful if you checked this file first. DistributionNotFound¶ When trying to run any program, you may see tracebacks like: Traceback (most recent call last): File "/usr/local/bin/gavo", line 5, in <module> from pkg_resources import load_entry_point [...] File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 552, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: gavodachs==0.6.3 This is usually due to updates to the source code when you have installed your source in development mode. Simple do sudo python setup.py develop in the root of the source distribution again. Another source of this error can be unreadable source directories. Please check that the user that’s trying to execute the command can actually read the sources you checked out. ‘gavodachs’ package upgrade fails¶ If you try to upgrade an older version (< 0.9) of the ‘gavodachs’ package, e.g. by typing: sudo apt-get update && sudo apt-get upgrade it could happen that you run into troubles when the gavodachs server is going to be stopped (and restarted). If the server stop fails, the installation of the ‘gavodachs-server’ package will aborted which leaves the package in a half-configured state. The corresponding error message would be something similar to: Stopping VO server: dachsTraceback (most recent call last): [...] File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584, in resolve raise DistributionNotFound(req) pkg_resources.DistributionNotFound: gavodachs==0.9 In that case try: sudo dpkg --remove --force-all python-gavodachs sudo apt-get -f install gavodachs-server With these commands you should end up in the state obtained by a successful package upgrade. ignoreOn in a rowmaker doesn’t seem to work¶ The most likely reason is that you are testing for the presence of a key that is within the table. This will not work since rowmakers add key->None mapping for all keys missing but metioned in a map (also implicitely via idmaps. If more than one rowmake operate on a source, things get really messy since rowmakers change the row dictionaries in place. Maybe this should change at some point, but right now that’s the way it is. Thus, you can never reliably expect keys used by other tables to be present or absent since you cannot predict the order in which the various table’s rowmakers will run. To fix this, you can check against that key’s value being NULL, e.g., like this: <keyIs key="accessURL" value="__NULL__"/> You could also instruct the rowmaker to ignore that key; this would require you to enumerate all rows you want mapped. Import fails with “Column xy missing” and very few keys¶ This error is caused by the row validation of the table ingestor – it wants to see values for all table columns, and it’s missing one or more. This may be flabbergasting when your grammar yields the fields that are missing here. The reason is simple: You must map them in the rowmaker. If you see this error, you probably wanted to write idmaps="*" or something like that in the rowmaker. Server is Only Visible from the Local Host¶ When the server is running ( gavo serve start) and you can access pages from the machine the server runs on just fine, but no other machines can access the server, you run the server with the default web configuration. It tells the server to only bind to the loopback interface (127.0.0.1, a.k.a. localhost). To fix this, say: [web] bindAddress: in your /etc/gavo.rc. Transaction Deadlocking¶ When gavo imp (or possibly requests to the server) just hangs without consuming CPU but not doing anything useful, it is quite likely that you managed to provoke a deadlock. This happens when you have a database transaction going on a table while trying to access it from the outside. To give an example: from gavo import base from gavo import rsc t = rsc.TableForDef(tableDefForFoo) q = base.SimpleQuerier().query("select * from foo") This will deadlock if tableDefForFoo actually defines an onDisk table foo. The reason is that instanciating a database table object will create a connection and start a transaction (e.g., to see if the table is actually present on disk). SimpleQuerier, on the other hand, creates another connection and another transaction. In general, the result of this second transaction will depend on the outcome of the first one. Postgres will notice that and postpone creating the result until the t’s transaction if finished. That will never happen with this code. To diagnose what’s happening, it is useful to see the server’s idea of what is going on inside itself. The following script (that you might call psdb) will help you: #!/bin/sh psql gavo << EOF select procpid, usename, current_query, date_trunc('seconds', query_start::time) from pg_stat_activity order by procpid EOF (this assumes your database is called gavo and you have sufficient rights on that database; it’s not hard to figure out the psql command line for other scenarios). This could output something like: procpid | usename | current_query | date_trunc ---------+-----------+-------------------------+------------ 9301 | gavoadmin | <IDLE> | 16:55:39 9302 | gavoadmin | <IDLE> in transaction | 16:55:39 9303 | gavoadmin | <IDLE> in transaction | 16:55:39 9306 | gavoadmin | <IDLE> in transaction | 16:55:43 9309 | gavoadmin | SELECT calPar FROM l... | 16:55:43 (5 Zeilen) The procpid is the pid of the process handling the connection. Usually, you will see one running query and possibly quite a few connections that are idle in transaction (which are tables waiting to be fed, etc.). The query should give you some idea where the deadlock occurs. To escape the deadlock (which, under CPython, will block ^C as well), kill the process trying the query – this will give you a traceback to the offending instruction. Of course, you will need to become the postgres or root user to do that, so it may be easier to forego the traceback and just kill gavo imp. To fix such a situation, there are various options. You could commit the table’s transaction: from gavo import base from gavo import rsc t = rsc.TableForDef(tableDefForFoo) t.commit() q = base.SimpleQuerier().query("select * from foo") but that is not usally what you want to do. Much more often, you want to execute the second query in t’s transaction. In this case, this could work like this: from gavo import base from gavo import rsc t = rsc.TableForDef(tableDefForFoo) q = base.SimpleQuerier(connection=t.connection).query("select * from foo") Of course, it is not always easy to access the connection object. Note, however, that in most procedure definitions, you have the target data set available as data. If you have that, you can usually obtain the current connection (and thus transaction) via: data.getPrimaryTable().connection – at least if you designate one of the data’s tables as primary through its make elements. ‘prodtblAccref’ not found in a mapping¶ You get this error message when you make a table that mixes in //products#table (possibly indirectly, e.g., via SSAP or SIAP mixins) with a grammar that does not use the //products#define row filter. So, within the grammar, say (at least, see the reference documentation for other parameters for rowgen): <rowfilter procDef="//products#define"> <bind name="table">"\schema.table"</bind> </rowfilter> – substituting dest.table with the actual name of the table fed. The reason why you need to manually give the table is that the grammar doesn’t now what table the rows generated end up in. On the other hand, the information needs to be added in the grammar, since it is fed both to your table and the system-wide products table. I get “Column ssa_dstitle missing” when importing SSA tables¶ The //ssap#setMeta rowmaker application does not directly fill the output rowdict but rather defines new input symbols. This is done to give you a chance to map things set by it, but it means that you must idmap at least all ssa symbols (or map them manually, but that’s probably too tedious). So, in the rowmaker definition, you write: <rowmaker idmaps="ssa_*"> “unpack requires a string argument of length”¶ These typically come from a binary grammar parsing from a source with armor=fortran. Then, the input parser delivers data in parcels given by the input file, and the grammar tries to parse it into the fields given in your binaryRecordDef. The error message means that the two don’t match. This can be because the input file is damaged, you forgot to skip some header, but it can also be because you forgot fields or your binaryRecordDef doesn’t match the input in some other way. “resource directory ‘<whatever>’ does not exist”¶ DaCHS expects each RD to have a “resource directory” that contains input files, auxillary data, etc. Multiple RDs may share a single resource directory. By default, the resource directory is <inputsDir>/<schema>. If you don’t need any auxillary files, the resdir doesn’t need to exist. In that case, you’ll see the said warning. To suppress it, you could just say: <resource schema="<whatever>" resdir="__system"> The __system resource directory is used by the built-in RDs and thus should in general exist. However, the recommended layout is, below inputsDir, a subdirectory named like the resource schema, and the RD immediately within that subdirectory. In that case, you don’t need a resdir attribute. Only RDs from below inputsDir may be imported¶ RDs in DaCHS must reside below inputsDir (to figure out what that is on your installation, say gavo config inputsDir). The main reason for that restriction is that RDs have identifiers, and these are essentially the inputsDir-relative paths of the file. Out-of-tree RDs just cannot compute this. Therefore, most subcommands that accept file paths just refuse to work when the file in question is not below inputsDir. Not reloading services RD on server since no admin password available¶ That’s a warning you can get when you run gavo pub. The reason for it is that the DaCHS server caches quite a bit of information (e.g., the root page) that may depend on the table of published services (see also Managing Runtime Resources. Therefore, gavo pub tries to make the running server discard such caches. To do this, it inspects the serverURL config item and tries access a protected resource. Thus, it needs the value of the config setting adminpasswd (if set), and that needs to be identical on the machine gavo pub executes on and on whatever serverURL points to. If anything goes wrong, a warning is emitted. The publication has happened still, but you may need to run gavo serve reload on the server to make it visible. I’m getting “No output columns with these settings.” instead of result rows¶ This is particularly likely to happen with the scs.xml renderer. There, it can happen the the server doesn’t even bother to run database queries but instead keeps coming back with an error message No output columns with these settings.. This happens when the “verbosity” (in SCS, this is computed as 10*VERB) of the query is lower than the verbLevel of all the columns. By default, this verbLevel is 20. In order to ensure that a column is returned even with VERB=1, say: <column name=... gavo imp dies with Permission denied: u’/home/gavo/logs/dcErrors’¶ (or something similar). The reason for these typically is that the user that runs gavo imp is not in the gavo group (actually, whatever [general]gavoGroup says). To fix it, add that user. If that user was, say, fred, you’d say: sudo adduser fred gavo Note that fred will either have to log in and out (or similar) or say newgrp gavo after that. Warnings about popen, md5, etc being deprecated¶ The python-nevow package that comes with Debian sequeeze is outdated. Install I’m using reGrammar to parse a file, but no splitting takes place¶ This mostly happens for input lines like a|b|c; the underlying problem is that you’re trying to split along regular expression metacharacters. The solution is to escape the the metacharacter. In the example, you wouldn’t write: <reGrammar fieldSep="|"> <!-- doesn't work --> but rather: <reGrammar fieldSep="\|"> <!-- doesn't work --> IntegrityError: duplicate key value violates unique constraint “products_pkey”¶ This happens when you try to import the same “product” twice. There are many possible reasons why this might happen, but the most common (of the non-obvious ones) probably is the use of updating data items with row triggers. If you say something like: <!-- doesn't work reliably --> <table id="data" mixin="//products#table" ... <data id="import" updating="True"> <sources> ... <ignoreSources fromdb="select accref from my.data"/> </sources> <fitsProdGrammar... <make table="data"> <rowmaker> <ignoreOn name="Skip plates not yet in plate cat"> <keyMissing key="DATE_OBS"/></ignoreOn> ... you’re doing it wrong. The reason this yields IntegrityErrors is that if the ignoreOn trigger fires, the row will not be inserted into the table data. However, the make feeding the dc.products table implicitely inserted by the //products#table mixin will not skip an ignored image. So, it will end up in dc.product, but on the next import, that source will be tried again – it didn’t end up in my.data, which is where ignoreSources takes its file names from –, and boom. If you feed multiple tables in one data and you need to skip an input row entirely, the only way to do that reliably is to trigger in the grammar, like this: <table id="data" mixin="//products#table" ... <data id="import" updating="True"> <sources> ... <ignoreSources fromdb="select accref from my.data"/> </sources> <fitsProdGrammar... <ignoreOn name="Skip plates not yet in plate cat"> <keyMissing key="DATE_OBS"/></ignoreOn> </fitsProdGrammar> <make table="data"> ... gavo init/installation dies with UnicodeDecodeError: ‘ascii’ codec…¶ The full signature is something like: File "/usr/lib/python2.7/dist-packages/pyfits/core.py", line 103, in formatwarning return unicode(message) + '\n' UnicodeDecodeError: 'ascii' codec can't decode byte 0xc2 in position 65: ordinal not in range(128) This is a bug in pyfits, together with carelessness in passing through error messages on our side. We’ll see which side will fix this first; meanwhile, the easy workaround is to set lc_messages = 'C' in postgresql.conf (e.g., /etc/postgresql/9.1/main/postgresql.conf on Debian wheezy). That’s probably a good idea anyway since TAP may expose postgres error messages to the user, and these aren’t nearly as useful to remote users as to you if they’re in your local language. relation “dc.datalinkjobs” does not exist¶ This happens when you try to run asynchronous datalink (the dlasync renderer) when you’ve not created the datalink jobs table. This is not (yet) done automatically on installation since right now we consider async datalink to be a bit of an exotic case. To fix this, run: gavo imp //datalink (some column) may be null but has no explicit null value¶ These are warnings emitted by the DaCHS’ RD parser – since they are warnings, you could ignore them, but you shouldn’t. This is about columns that have no “natural” NULL serialisation in VOTables, mostly integers. Without such a natural NULL, making VOTables out of anything that comes out of these tables can fail under certain circumstances. There are (at least) two ways to fix this, depending on what’s actually going on: you’re sure there are no NULLs in this column. In that case, just add required="True", and the warnings will go away. Note, however, that DaCHS will instruct the database to check that you’re not cheating, and an import will fail if your try to put NULLs into such columns. there are NULLs in this column. In that case, find a value that will work for NULL, i.e., one that is never used as an actual value. “Suspicious” values like 0, -1, -9999 or the like are preferred as this increases the chance that careless programs, formats, or users who ignore a NULL value specification have a chance to catch their error. Then declare that null value like this: <column name="withNull" type="integer"...> <values nullLiteral="-9999"/> </column> Column rave.main.logg_k: Unit dex is not interoperable¶ The VOUnit standard lets you use essentially arbitrary strings as units – so does DaCHS. VOUnit, however, contains a canon of units VO clients should understand. If DaCHS understands units, you can, for instance, change them on form input and output using the displayUnit displayHint – other programs may allow automatic conversion and similar comforts. When DaCHS warns that a unit is not interoperable, this means your unit will not be understood in that way. There are cases when that’s justified, so it’s just a warning, but be sure you understand what you’ve written and there actually is no interoperable (i.e., using the canonical VOUnits) way to express what you want to say. Also not that it is an excellent idea to quote free (i.e., non-canonical) units, i.e., write unit='"Crab"'. The reason is that in the non-quoted case, VOUnit parsers will try do separate off SI prefixes, such that, e.g., dex will be interpreted as dezi-ex, i.e., a tenth of an ex (which happens to actually be a unit, incidentally, although not a canonical VOUnit one). And yes, dex itself would be a free unit. If you look, quantities given with “dex” as a unit string actually are dimensionless. Our recommendation therefore is to have empty units for them. Column tab.foo is not a regular ADQL identifier¶ This is a message you may see when running gavo val. It means that the column in question has a name that will get you in trouble as soon as you open the table in question to TAP queries (and trust me, you will sooner or later). Regular ADQL identifiers match the regular expression [A-Za-z][A-Za-z0-9_]* with the additional restriction that ADQL reserved words (including terms like distance, size, etc) are not allowed either. If you see the message, just change the name in question. There’s so many nice words that there’s really no need to use funny characters in identifiers or hog ADQL reserved words. If you must keep the name anyway, you can prefix it by quoted/ to make it a delimited identifier. There’s madness down that road, though, so don’t complain to us if you do that and regret it too late. In particular, you may have a hard time referencing such columns from STC declarations, when creating indices, etc. So: Just don’t do it. Unhandled exception ProgrammingError while importing an obscore table¶ This typically looks somewhat like this: ProgrammingError: syntax error at or near "/" LINE 28: CAST(/RR/V/ AS text) AS pol_states, ^ *** Error: Oops. Unhandled exception ProgrammingError. Exception payload: syntax error at or near "/" LINE 28: CAST(/RR/V/ AS text) AS pol_states, While ProgrammingErrors in general happen whenever an invalid query is sent to the database engine, when they pop up in gavo imp with obscore not far away it almost invariably means that there is a syntax error, most likely forgotten quotes, in the obscore mixin definitions of one of the tables published through obscore. The trick is to figure out which of them causes the trouble. The most straightforward technique is to take the fragement shown in the error message and look in ivoa._obscoresources like this: $ psql gavo ... # select tablename - from ivoa._obscoresources where sqlfragment like '%CAST(/RR/V/%'; tablename -------------------- test.pgs_siaptable You could gavo purge the table in question to fix this the raw way, but it’s of course much more elegant to just remove the offending piece from _obscoresources: # delete from ivoa._obscoresources where tablename='test.pgs_siaptable'; Then fix the underlying problem – in this case that was replacing: <mixin polStates="/RR/V/" ... with: <mixin polStates="'/RR/V/'" ... – and re-import the obscore meta; you’ll usually use gavo imp -m && gavo imp //obscore for that (see also updating obscore) cert already in hash table¶ Under circumstance we’ve not quite understood yet either, in Debian stretch DaCHS may dump a long traceback on an error like this: cryptography.exceptions.InternalError: Unknown OpenSSL error. This error is commonly encountered when another library is not cleaning up the OpenSSL error stack. If you are using cryptography with another library that uses OpenSSL try disabling it before reporting a bug. Otherwise please file an issue at with information on how to reproduce this. ([_OpenSSLErrorWithText(code=185057381L, lib=11, func=124, reason=101, reason_text='error:0B07C065:x509 certificate routines:X509_STORE_add_cert:cert already in hash table')]) This is a race condition deep within python-cryptography, which is in turn is several levels below nevow; what’s worse, this happens during import, so even monkeypatching is at least very difficult. We currently try to work around it by importing the racing component before any threads can occur. The hack will be in DaCHS 1.0 and the 1.0.2 beta. It’s a hack, though, and it’s possible it won’t work for you. Let us know if you hit this. dachs init fails with “type spoint does not exist”¶ This always means that the pgsphere extension could not be loaded, and DaCHS can no longer do without it. Actually, we could try to make it, but you really need pg_sphere in almost all installations, so it’s better to fix this than to work around it. Unfortunately, there is any number of reasons for a missing pgsphere. If, for instance, you see this error message and have installed DaCHS from tarball or svn, with manual dependency management, just install the pgsphere postgres extension (and, while you’re at it, get q3c, too). If this happens while installing the Debian package, in all likelihood DaCHS is not talking to the postgres version in thinks it is. This very typically happens if you already have an older postgres version on the box. Unless you’re sure you know what you’re doing, just perform an upgrade to the version DaCHS wants – see howDoI.html#upgrade-the-database-engine. If you’d need to downgrade, that’s trouble. Complain to the dachs-support mailing list – essentially, someone will have to build a pgsphere package for your postgres version. relation “ivoa._obscoresources” does not exist¶ This happens when you try to import an obscore-published table (these mix in something like //obscore#publish-whatever) without having created the obscore table itself. The fix is easy: Either remove the mixin if you don’t want the obscore publication (which would be odd for production data) or, more typically, create the obscore table: dachs imp //obscore duplicate key value violates unique constraint “tables_pkey”¶ This typically happens on dachs imp. The immediate reason is that dachs imp tries to insert a metadata row for one of the tables it just created into the dc.tablemeta system table, but a row for that name is already present. For instance, if you’re importing into arihip.main, DaCHS would like to note that the new table’s definition can be found at arihip/q#main. Now, if dc.tablemeta already says arihip.main was defined in quicktest/q#main, there’s a problem that DaCHS can’t resolve by itself. 90% of the time, the underlying reason is that you renamed an RD (or a resource directory). Since the identifier of an RD (the RD id) is just is relative path of the RD to the inputs directory (minus the .rd), and the RD id is used in many places in DaCHS, you have to be careful when you do that (essentially: dachs drop --all old/rd; mv old new; dachs imp new/rd). If you’re seeing the above message, it’s already too late for that careful way. The simple way to repair things nevertheless is to look for the table name (it should be given in the DETAILS of the error message) and simply tell DaCHS to forget all about that table: dachs purge arigfh.main This might leave other traces of the renamed RD in the system, which might lead to trouble later. If you want to be a bit more throrough, figure out the RD id of the vanished RD by running psql gavo and asking something like: select sourcerd from dc.tablemeta where tablename='arihip.main' This will give you the RD id of the RD that magically vanished, and you can then say: dachs drop -f old/rdid DaCHS will then hunt down all traces of the old RD and delete them. Don’t do this without an acute need; such radical measures will clean up DaCHS’ mind, but in a connected society, amnesia can be a strain on the rest of the society. In the VO case, dachs drop -f might, for instance, cause stale registry records if you had registered services inside of the RD you force-drop.
https://dachs-doc.readthedocs.io/commonproblems.html
2020-07-02T18:38:25
CC-MAIN-2020-29
1593655879738.16
[]
dachs-doc.readthedocs.io
Key concepts This section introduces you to high-level concepts that you should understand before you use the BMC Asset Management product. - Business value - ITIL processes - Calbro Services company example - End-to-end process - User roles - Architecture - Contract Management - Software license management - Standard configurations - About software usage - Schedules - Configuration items - Costing and charge-backs - Managing inventory - Configuration catalog - Reports - Requisition management Was this page helpful? Yes No Submitting... Thank you
https://docs.bmc.com/docs/asset91/key-concepts-609066854.html
2020-07-02T19:41:32
CC-MAIN-2020-29
1593655879738.16
[]
docs.bmc.com
Routing By using routing commands, you can divide your customers based on certain commands and preferences. Each customer can be routed to one or more operator groups or one chatbot. Group of operators Routing can be set in the group sequentially according to operator, or to the whole group at once. See information found in the group settings section. Chatbot If you have chosen a chatbot as a target, the session no longer falls into other groups. The chatbot is the first to create and connect with Mluvii. You can read about chatbot settings in the following instructions. Routing commands any or several conditions - if multiple conditions are defined, all must be met at the same time. Interaction is routed only to groups that meet all defined conditions. one group of operators or chatbot - interactions directed here in case of met conditions. Each command has a priority according to rank. Interactions “fall” from top priority to lowest. Fallback In case the routing is unable to find a suitable operator group given the configured routing conditions (because the group is not available or its queue is full), the session can be optionally routed to fallback (if enabled in routing configuration). Fallback is enabled The session is routed to fallback groups. These are all the groups defined in the routing configuration, including groups that do not meet routing conditions. However, the group with the highest priority that satisfies the routing conditions is always preferred. Fallback is disabled The behaviour is different for each channel. Behaviour for the main channel (mluvii chat and mluvii videocall) The guest is routed to the configured offline form. Behaviour for the phone channel The behaviour is defined on the phone gateway. For example the gateway can play a predefined IVR message or hangup the call. Behaviour for the facebook channel The sessions will be waiting for an available operator indefinitely. Behaviour for the email channel The sessions will be waiting for an available operator indefinitely. A user with sufficient privileges can forward the email to any operator manually. Routing rejection after working hours The application automatically treats the situation when the client opens a chat just before the end of working hours and enters the chat session after working hours. When entering the queue, the application checks the availability of each operator and, if no one is online, redirects the client to the offline form.
https://docs.mluvii.com/guide/en/for-administrators/tenant-management/settings/routing.html
2020-07-02T19:57:15
CC-MAIN-2020-29
1593655879738.16
[]
docs.mluvii.com
Delete vCenter Virtual Servers Disks To delete a disk: - Go to your Control Panel > Cloud > Virtual Servers menu. - Make sure your virtual server is powered off, then click its label to open its details screen. - Click the Storage > Disks tab. - Click the Actions button next to the disk you want to delete, then click Delete.
https://docs.onapp.com/vcenter/latest/user-guide/vcenter-virtual-servers-disks/delete-vcenter-virtual-servers-disks
2020-07-02T18:26:15
CC-MAIN-2020-29
1593655879738.16
[]
docs.onapp.com
Migrate vCenter Virtual Servers > Disks. - Click the Actions button next to the disk you want to move to another data store, then click the Import link. On the screen that appears, select a target data store from a drop-down box.You can only migrate disks to data stores in data store zones assigned to your bucket. Click Start Migrate.You cannot migrate a disk to a data store with less capacity than the disk size! If you move an 850GB disk between aggregates with 10GB actual usage, the 'dd' image of the local volume manager will take 850GB space, because the entire local volume manager is copied, including zero 'd space which may not be able to be recovered.
https://docs.onapp.com/vcenter/latest/user-guide/vcenter-virtual-servers-disks/migrate-vcenter-virtual-servers-disks
2020-07-02T18:09:30
CC-MAIN-2020-29
1593655879738.16
[]
docs.onapp.com
- Run the most time-consuming parts of pg_restore — those which load data, create indexes, or create constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a multiprocessor machine. - Each job is one process or one thread, depending on the operating system, and uses a separate connection to the server. - The optimal value for this option depends on the hardware setup of the server, of the client, and of the network. Factors include the number of CPU cores and the disk setup. A good place to start is the number of CPU cores on the server, but values larger than that can also lead to faster restore times in many cases. Of course, values that are too high will lead to decreased performance because of thrashing. -. - data, to the extent that schema entries are present in the archive. This option is the inverse of --data-only. It is similar to, but for historical reasons not identical to, specifying --section=pre-data --section=post-data. .Note: Greenplum Database does not support user-defined triggers. - -. - - after connecting to the database. It is useful when the authenticated user (specified by -U) lacks privileges needed by pg_restore, but can switch to a role with the required rights. Some installations have a policy against logging in directly as a superuser, and use of this option allows restores to be performed without violating the policy.2
https://gpdb.docs.pivotal.io/6-4/utility_guide/ref/pg_restore.html
2020-07-02T19:59:52
CC-MAIN-2020-29
1593655879738.16
[]
gpdb.docs.pivotal.io